Class FastIca

java.lang.Object
edu.cmu.tetrad.search.FastIca

public class FastIca extends Object
Translates a version of the FastICA algorithm used in R into Java for use in Tetrad. It can be used by various algorithms that assume linearity and non-Gaussianity, such as LiNGAM and LiNG-D. There is one difference from the R version: in R, FastICA can operate over complex numbers, whereas here it is restricted to real numbers. A useful reference is:

Hyvarinen, A., & Oja, E. (2000). Independent component analysis: algorithms and applications. Neural Networks, 13(4-5), 411-430.
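For orientation, here is a minimal sketch of how this class might be invoked from Java. The constructor and method names below (the Matrix constructor, setMaxIterations, setTolerance, setVerbose, findComponents, IcaResult.getW) are assumptions inferred from the R argument list documented below, not guaranteed signatures; consult the class source for the authoritative API.

import edu.cmu.tetrad.search.FastIca;
import edu.cmu.tetrad.util.Matrix;

// Hypothetical usage sketch; method names are assumptions and may
// differ from the current Tetrad API.
public class FastIcaExample {
    public static void main(String[] args) {
        double[][] data = new double[1000][5]; // n observations x p variables
        // ... fill `data` with observed, non-Gaussian measurements ...
        Matrix X = new Matrix(data);           // assumed constructor

        FastIca ica = new FastIca(X, 5);       // extract 5 components
        ica.setMaxIterations(200);             // analogous to maxit = 200
        ica.setTolerance(1e-4);                // analogous to tol = 1e-04
        ica.setVerbose(false);

        FastIca.IcaResult result = ica.findComponents();
        Matrix W = result.getW();              // estimated un-mixing matrix
    }
}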

The documentation of the R version is reproduced below; all of it holds for this translation (so far as I know), except that this version is in Java rather than R and does not allow complex values.

Description:

This is an R and C code implementation of the FastICA algorithm of Aapo Hyvarinen et al. (URL: http://www.cis.hut.fi/aapo/) to perform Independent Component Analysis (ICA) and Projection Pursuit.

Usage:

fastICA(X, n.comp, alg.typ = c("parallel","deflation"), fun = c("logcosh","exp"), alpha = 1.0, method = c("R","C"), row.norm = FALSE, maxit = 200, tol = 1e-04, verbose = FALSE, w.init = NULL)

Arguments:

X: a data matrix with n rows representing observations and p columns representing variables.

n.comp: number of components to be extracted

alg.typ: if 'alg.typ == "parallel"' the components are extracted simultaneously (the default). If 'alg.typ == "deflation"' the components are extracted one at a time.

fun: the functional form of the G function used in the approximation to neg-entropy (see details)

alpha: constant in range [1, 2] used in approximation to neg-entropy when 'fun == "logcosh"'

method: if 'method == "R"' then computations are done exclusively in R (default). The code allows the interested R user to see exactly what the algorithm does. If 'method == "C"' then C code is used to perform most of the computations, which makes the algorithm run faster. During compilation, the C code is linked to an optimized BLAS library if present, otherwise stand-alone BLAS routines are compiled.

row.norm: a logical value indicating whether rows of the data matrix 'X' should be standardized beforehand.

maxit: maximum number of iterations to perform

tol: a positive scalar giving the tolerance at which the un-mixing matrix is considered to have converged.

verbose: a logical value indicating the level of output as the algorithm runs.

w.init: initial un-mixing matrix of dimension (n.comp, n.comp). If NULL (the default), a matrix of normal random variates is used.

Details:

Independent Component Analysis (ICA):

The data matrix X is considered to be a linear combination of non-Gaussian (independent) components, i.e., X = SA where columns of S contain the independent components and A is a linear mixing matrix. In short, ICA attempts to `un-mix' the data by estimating an un-mixing matrix W where XW = S.
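To make the generative model concrete, the sketch below builds a toy instance of X = SA by mixing two independent uniform sources with a fixed 2x2 matrix; all names and values are illustrative, not part of this class.

import java.util.Random;

// Toy instance of the generative model X = SA: two independent,
// non-Gaussian (here uniform) sources mixed by a 2x2 matrix A.
// ICA's task is to estimate W with XW = S, up to scale and order.
public class MixingModelDemo {
    public static void main(String[] args) {
        Random rng = new Random(42);
        int n = 1000;
        double[][] S = new double[n][2];          // independent sources
        double[][] A = {{1.0, 0.5}, {0.3, 1.0}};  // mixing matrix
        double[][] X = new double[n][2];          // observed mixtures
        for (int i = 0; i < n; i++) {
            S[i][0] = rng.nextDouble() - 0.5;     // uniform, hence non-Gaussian
            S[i][1] = rng.nextDouble() - 0.5;
            for (int j = 0; j < 2; j++) {
                X[i][j] = S[i][0] * A[0][j] + S[i][1] * A[1][j]; // (SA)[i][j]
            }
        }
        // By the Central Limit Theorem argument below, the columns of X
        // are "more Gaussian" than the source columns of S.
    }
}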

Under this generative model, the measured `signals' in X will tend to be `more Gaussian' than the source components (in S) due to the Central Limit Theorem. Thus, to extract the independent components/sources, we search for an un-mixing matrix W that maximizes the non-gaussianity of the sources.

In FastICA, non-gaussianity is measured using approximations to neg-entropy (J), which are more robust than kurtosis-based measures and fast to compute.

The approximation takes the form

J(y) = [E{G(y)} - E{G(v)}]^2, where v is an N(0,1) r.v.

The following choices of G are included as options: G(u) = (1/alpha) log cosh(alpha u) and G(u) = -exp(-u^2/2).
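The sketch below estimates this negentropy approximation empirically for the logcosh contrast, approximating the constant E{G(v)} by Monte Carlo (for alpha = 1 it is roughly 0.37). Names are illustrative.

import java.util.Random;

// Empirical negentropy approximation J(y) = [E{G(y)} - E{G(v)}]^2
// using the logcosh contrast G(u) = (1/alpha) log cosh(alpha u).
public class Negentropy {
    static double G(double u, double alpha) {
        return Math.log(Math.cosh(alpha * u)) / alpha;
    }

    static double negentropy(double[] y, double alpha, Random rng) {
        double gy = 0.0;                       // sample mean of G(y)
        for (double yi : y) gy += G(yi, alpha);
        gy /= y.length;

        // Monte Carlo estimate of E{G(v)} for v ~ N(0,1); for alpha = 1
        // this constant is roughly 0.37.
        int m = 100_000;
        double gv = 0.0;
        for (int i = 0; i < m; i++) gv += G(rng.nextGaussian(), alpha);
        gv /= m;

        double d = gy - gv;
        return d * d;                          // larger J => y less Gaussian
    }
}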

Algorithm:

First, the data is centered by subtracting the mean of each column of the data matrix X.

The data matrix is then `whitened' by projecting the data onto its principal component directions, i.e., X -> XK where K is a pre-whitening matrix. The user can specify the number of components. A minimal sketch of these two pre-processing steps (centering from the previous paragraph, then whitening) follows; it is written against Apache Commons Math as an assumption, and Tetrad's own matrix classes could be used instead.
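import org.apache.commons.math3.linear.Array2DRowRealMatrix;
import org.apache.commons.math3.linear.EigenDecomposition;
import org.apache.commons.math3.linear.RealMatrix;

// Center each column of X, then whiten by projecting onto the leading
// principal component directions scaled to unit variance: X -> XK.
public class Whitening {
    static RealMatrix centerAndWhiten(RealMatrix X, int nComp) {
        int n = X.getRowDimension(), p = X.getColumnDimension();

        // Centering: subtract the mean of each column.
        for (int j = 0; j < p; j++) {
            double mean = 0.0;
            for (int i = 0; i < n; i++) mean += X.getEntry(i, j);
            mean /= n;
            for (int i = 0; i < n; i++) X.addToEntry(i, j, -mean);
        }

        // Whitening: eigen-decompose the covariance X'X / n and build
        // K = E D^{-1/2} from the leading nComp eigenpairs. (Descending
        // eigenvalue order is assumed here; sort the eigenpairs if not.)
        RealMatrix cov = X.transpose().multiply(X).scalarMultiply(1.0 / n);
        EigenDecomposition eig = new EigenDecomposition(cov);
        RealMatrix K = new Array2DRowRealMatrix(p, nComp);
        for (int c = 0; c < nComp; c++) {
            double scale = 1.0 / Math.sqrt(eig.getRealEigenvalue(c));
            K.setColumnVector(c, eig.getEigenvector(c).mapMultiply(scale));
        }
        return X.multiply(K); // the pre-whitened data XK
    }
}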

The ICA algorithm then estimates a matrix W s.t. XKW = S. W is chosen to maximize the neg-entropy approximation under the constraint that W is an orthonormal matrix. This constraint ensures that the estimated components are uncorrelated. The algorithm is based on a fixed-point iteration scheme for maximizing the neg-entropy. For a single un-mixing row w, one fixed-point update under the logcosh contrast with alpha = 1 (g = tanh, g' = 1 - tanh^2) might look like the sketch below; the deflation scheme would additionally orthogonalize w against previously found rows, and the parallel scheme updates all rows together, both omitted here.
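// One FastICA fixed-point update for a single unit w:
//   w <- E{z g(w'z)} - E{g'(w'z)} w, then renormalize to unit length.
// Z holds the whitened data, one row per observation; w has unit norm.
public class FixedPoint {
    static double[] step(double[][] Z, double[] w) {
        int n = Z.length, c = w.length;
        double[] wNew = new double[c];
        double gPrimeSum = 0.0;
        for (double[] z : Z) {
            double u = 0.0;
            for (int k = 0; k < c; k++) u += w[k] * z[k]; // w'z
            double g = Math.tanh(u);                      // g for logcosh
            gPrimeSum += 1.0 - g * g;                     // g' for logcosh
            for (int k = 0; k < c; k++) wNew[k] += z[k] * g;
        }
        double norm = 0.0;
        for (int k = 0; k < c; k++) {
            wNew[k] = wNew[k] / n - (gPrimeSum / n) * w[k];
            norm += wNew[k] * wNew[k];
        }
        norm = Math.sqrt(norm);
        for (int k = 0; k < c; k++) wNew[k] /= norm;
        return wNew; // iterate until |w'wNew| is within tol of 1
    }
}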

Projection Pursuit:

In the absence of a generative model for the data, the algorithm can be used to find the projection pursuit directions. Projection pursuit is a technique for finding `interesting' directions in multidimensional datasets. These projections are useful for visualizing the dataset and in density estimation and regression. Interesting directions are those which show the least Gaussian distribution, which is what the FastICA algorithm does.

Author(s):

J L Marchini and C Heaton

References:

A. Hyvarinen and E. Oja (2000) Independent Component Analysis: Algorithms and Applications, _Neural Networks_, *13(4-5)*:411-430

Author:
josephramsey