Abstracts of all papers

Blind source separation (BSS) and independent component analysis (ICA) are fields that have received much recent interest in various scientific and engineering communities. The procedures and methods developed for BSS and ICA represent a confluence of tools and ideas whose influences extend beyond the boundaries of these two tasks. Extensions of these ideas have led to useful procedures for other problem areas. In this paper, we show how several methods and concepts in BSS and ICA have direct connections to one such area--adaptive signal processing--resulting in novel procedures for several tasks including phase-only adaptive filters, recursive least-squares estimation, and adaptive subspace analysis.

The instantaneous noise-free linear mixing model in independent component analysis is largely a solved problem under the usual assumption of independent nongaussian sources and full rank mixing matrix. However, with some prior information on the sources, like positivity, new analysis and perhaps simplified solution methods may yet become possible. In this paper, we consider the task of independent component analysis when the independent sources are known to be non-negative and well-grounded, which means that they have a non-zero pdf in the region of zero. We propose the use of a `Non-Negative PCA' algorithm which is a special case of the nonlinear PCA algorithm, but with a rectification nonlinearity, and we show that this algorithm will find such non-negative well-grounded independent sources. Although the algorithm has proved difficult to analyze in the general case, we give an analytical convergence result here, complemented by a numerical simulation which illustrates its operation.
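The rectified nonlinear-PCA update described above can be sketched numerically. This is a minimal illustration under assumed toy conditions (exponential sources, symmetric whitening), not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy non-negative, well-grounded sources (exponential variables have a
# non-zero pdf down to zero) and a hypothetical mixing matrix.
s = rng.exponential(size=(2, 5000))
A = np.array([[0.9, 0.4], [0.3, 0.8]])
x = A @ s

# Whiten the observations (illustrative preprocessing only).
xc = x - x.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(xc))
z = (E / np.sqrt(d)) @ E.T @ xc

# Nonlinear PCA subspace rule with a rectification nonlinearity
# g(y) = max(y, 0):  dW = eta * (g(y) z^T - g(y) g(y)^T W) / T
W = np.linalg.qr(rng.standard_normal((2, 2)))[0]
eta, T = 0.02, z.shape[1]
for _ in range(500):
    y = W @ z
    g = np.maximum(y, 0.0)
    W += eta * (g @ z.T - (g @ g.T) @ W) / T
```

Under the paper's conditions, the rows of W should rotate so that W z recovers the non-negative sources up to permutation and scaling.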

This paper proposes a fast algorithm for estimating the mutual information, difference score function, conditional score function and conditional entropy, in possibly high-dimensional spaces. The idea is to discretise the integral so that the density need only be estimated over a regular grid, which can be done at little cost through the use of a cardinal spline kernel estimator. Score functions are then obtained as gradients of the entropy. An example of application to the blind separation of post-nonlinear mixtures is given.
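The grid-based idea can be illustrated roughly as follows; a plain histogram stands in for the paper's cardinal spline kernel estimator (an assumption of this sketch):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(100_000)

# Discretise: estimate the density on a regular grid; a histogram is a
# cheap stand-in for the cardinal-spline kernel estimator.
edges = np.linspace(-5, 5, 201)
delta = edges[1] - edges[0]
p, _ = np.histogram(x, bins=edges, density=True)

# Differential entropy approximated on the grid:
#   H(X) ~ -sum_i p_i log(p_i) * delta
mask = p > 0
H = -np.sum(p[mask] * np.log(p[mask])) * delta

# For a standard Gaussian, H = 0.5*log(2*pi*e) ~ 1.4189 nats
print(round(H, 2))
```

The same grid estimate of the density can then be differentiated numerically to obtain the score functions.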

We prove theorems that ensure identifiability, separability and uniqueness of linear ICA models. The conditions currently used in the ICA community are hence extended to a wider class of mixing models and source distributions. Examples illustrating these concepts are presented as well.

In this paper we derive algebraic means for Independent Component Analysis (ICA) with more sources than sensors. The results are based on the structure of the fourth-order cumulant tensor. We derive bounds on the number of sources that generically guarantee uniqueness of the decomposition. The mixing matrix is computed by means of simultaneous diagonalization or off-diagonalization techniques.

This paper presents a mathematically sound technique for matrix optimization with orthogonal constraints. Orthogonal matrix optimization has broad applications in recent signal processing problems, including independent component analysis. Unfortunately, due to the nonconvexity of the real Stiefel manifold, defined as the set of all orthogonal matrices, most existing approaches to the problem suffer from the difficulty of tracking the manifold. In this paper, we propose a Dual Cayley parametrization technique that decomposes the original problem into a pair of simple constraint-free optimization problems.

Component estimation arises in Independent Component Analysis (ICA), Blind Source Separation (BSS), wavelet analysis, signal denoising, image reconstruction, Factor Analysis, and sparse coding. In theoretical and algorithmic developments, an important distinction is commonly made between sub- and super-gaussianity, super-gaussian densities being characterized by having high kurtosis, or having a sharp peak and heavy tails. In this paper we present a generalized convexity framework similar to a classical concept of E. F. Beckenbach, which we refer to as relative convexity. Based on a partial ordering induced by relative convexity, we derive a new measure of function curvature and a new criterion for super-gaussianity that is both simpler and of wider application than the kurtosis criterion. The relative convexity framework also provides an inequality that can be used to derive stable and effective descent algorithms for estimation of the parameters in the Bayesian linear model when sub- or super-gaussian priors are used. Apparently almost all common symmetric densities are comparable to Gaussian in this ordering, and thus are either sub- or super-gaussian, despite the fact that the measure is instantaneous in contrast to the global moment-based measures. We present several algorithms for component estimation that are shown to be descent algorithms based on the relative convexity inequality arising from the assumption of super-gaussian priors. We also show an interesting relationship between the curvature of a convex or concave function and the curvature of its Fenchel-Legendre conjugate, which results in an elegant duality relationship between estimation with sub- and super-gaussian densities.

Independent component analysis (ICA) has proved to be a highly useful tool for modeling brain data and in particular electroencephalographic (EEG) data. In this paper, a new method is presented that may better capture the underlying source dynamics than the ICA algorithms heretofore employed for brain signal analysis. We suppose that a brief, impulse-like activation of an effective signal source elicits a short sequence of spatio-temporal activations in the measured signals. This leads to a model of convolutive signal superposition, in contrast to the instantaneous mixing model commonly assumed for independent component analysis of brain signals. In the spectral domain, convolutive mixing is equivalent to multiplicative mixing of complex signal sources within distinct spectral bands. We decompose the recorded mixture of complex signals into independent components by a complex version of the infomax ICA algorithm. Some results from a visual spatial selective attention experiment illustrate the differences between real time-domain ICA and complex spectral-domain ICA, and highlight properties of the obtained complex independent components.
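The equivalence invoked here, time-domain convolutive mixing becoming multiplicative mixing within each spectral band, is just the convolution theorem; a quick numerical check:

```python
import numpy as np

rng = np.random.default_rng(1)
s = rng.standard_normal(256)   # source signal
h = rng.standard_normal(8)     # short spatio-temporal response

# Circular convolution in the time domain ...
full = np.convolve(s, h)
x = full[:256].copy()
x[:len(h) - 1] += full[256:]   # wrap the tail to make it circular

# ... equals element-wise (multiplicative) mixing per frequency band.
print(np.allclose(np.fft.fft(x), np.fft.fft(s) * np.fft.fft(h, 256)))  # True
```

This is why a complex-valued ICA applied band by band in the spectral domain can unmix a convolutive superposition.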

This paper presents a new result of independent component analysis of the electrogastrogram (EGG), a measurement of gastric myoelectrical activity made by several electrodes attached to the abdomen. Our analysis is carried out under the assumption that the myoelectrical activity of each organ is statistically independent of the others and that the EGG is a convolutive mixture of the components. The result of the analysis suggests that the number of dominant independent components is two, one of which is associated with gastric activity.

In this work we present a new biomedical application of independent component analysis (ICA) to solve the problem of atrial activity (AA) extraction from real electrocardiogram (ECG) recordings of atrial fibrillation (AF). The proper analysis and characterization of AA from ECG recordings requires, as a first step, the cancellation of ventricular activity (VA). The present contribution demonstrates the appropriateness of ICA to solve this problem based on three considerations: firstly, AA and VA are generated by independent bioelectric sources; secondly, AA and VA are subgaussian and supergaussian activities, respectively; and finally, the surface ECG can be regarded as an instantaneous linear mixing process. After applying an ICA algorithm to recordings from 7 patients in AF, we show that the AA source can be identified using a kurtosis-based reordering of the separated sources and a subsequent spectral analysis of the sources with subgaussian kurtosis.
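The kurtosis-based reordering step can be sketched on surrogate data; the uniform/Laplacian surrogates for the sub- and supergaussian activities are assumptions of this illustration, not the paper's recordings:

```python
import numpy as np

rng = np.random.default_rng(2)

def excess_kurtosis(y):
    y = y - y.mean()
    return np.mean(y**4) / np.mean(y**2) ** 2 - 3.0

# Hypothetical separated sources: atrial activity (AA) behaves
# subgaussian (a uniform surrogate), ventricular activity (VA)
# supergaussian (a Laplacian surrogate).
sources = {
    "VA-like": rng.laplace(size=20_000),
    "AA-like": rng.uniform(-1, 1, size=20_000),
    "noise": rng.standard_normal(20_000),
}

# Kurtosis-based reordering: AA candidates are the sources with
# subgaussian (negative excess) kurtosis; spectral analysis follows.
ranked = sorted(sources, key=lambda k: excess_kurtosis(sources[k]))
print(ranked[0])  # the most subgaussian source
```

In the paper, spectral analysis of the subgaussian candidates then confirms the characteristic AA frequency content.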

The problem of extracting a blood-vessel-related component from dynamic brain PET images is similar to the ICA analysis of fMRI data. Unique characteristics of this problem are: (1) the spatial distribution of vessels can be acquired by PET, so the probability distribution of the vessel component is known; and (2) the independent maps and the mixing matrix are all nonnegative. We have proposed a method for extracting the pTAC based on ICA (EPICA). EPICA is a method designed for extracting the vessel component. We investigate (A) the variation of the estimated pTAC as the parameters of the EPICA contrast function are changed, and (B) the effect of nonnegativity constraints in ICA using the ensemble learning algorithm. Our results show that (A) a penalty term influences the tail of the estimated pTAC, and (B) a nonnegativity assumption in ICA is feasible for extracting a vessel component.

In this paper we describe a non-negative matrix factorization (NMF) for recovering constituent spectra in 3D chemical shift imaging (CSI). The method is based on the NMF algorithm of Lee and Seung, extending it to include a constraint on the minimum amplitude of the recovered spectra. This constrained NMF (cNMF) algorithm can be viewed as a maximum likelihood approach for finding basis vectors in a bounded subspace. In this case the optimal basis vectors are the ones that envelope the observed data with a minimum deviation from the boundaries. Results for $^{31}$P human brain data are compared to Bayesian Spectral Decomposition (BSD), which considers a full Bayesian treatment of the source recovery problem and requires computationally expensive Monte Carlo methods. The cNMF algorithm is shown to recover the same constituent spectra as BSD, but in roughly $10^{-4}$ of the computational time.
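The underlying Lee-Seung multiplicative updates (for the Euclidean objective; the paper's cNMF adds a minimum-amplitude constraint that is omitted in this sketch) look like this:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy data: nonnegative mixtures of two nonnegative "spectra".
W_true = rng.random((30, 2))
H_true = rng.random((2, 40))
V = W_true @ H_true

# Lee-Seung multiplicative updates for ||V - WH||_F^2; they preserve
# nonnegativity of W and H given a nonnegative initialisation.
W = rng.random((30, 2)) + 0.1
H = rng.random((2, 40)) + 0.1
eps = 1e-12
for _ in range(500):
    H *= (W.T @ V) / (W.T @ W @ H + eps)
    W *= (V @ H.T) / (W @ H @ H.T + eps)

err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
print(round(err, 4))
```

The cNMF variant modifies these updates so that the recovered spectra envelope the data from below with minimal boundary deviation.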

We propose a method for the enhancement of colour images when a scattering environment reduces visibility. The main advantage of the blind technique is that it does not require any a priori information about the scattering environment, but it supposes that the observed signals are linear mixtures of sources. Here, the natural logarithm of the degraded image provides an approximate additive mixture of reflectivity and transmittivity coefficients, and the colour images provide three coloured mixtures (red, green, blue). They are processed by a Blind Source Separation (BSS) method in the low spatial frequencies to display grey levels of pertinent features that aid vision enhancement. To display a cleaner image, the set of mixtures is enriched using classical signal processing techniques. The chrominance information is restored using post-processing techniques in the HSV (Hue, Saturation, Value) space of the degraded colour image. Experiments are made on images for which the scattering environment is simulated in the laboratory. Keywords: scattering environment, artificial vision, blind source separation, second-order blind identification, independent component analysis, wavelet denoising.

In many models, variances are assumed to be constant, although this assumption is known to be unrealistic. Joint modelling of means and variances can lead to infinite probability densities, which makes it a difficult problem for many learning algorithms. We show that a Bayesian variational technique, which is sensitive to probability mass instead of density, is able to jointly model both variances and means. We discuss a model structure in which a Gaussian variable, which we call a variance neuron, controls the variance of another Gaussian variable. The variance neuron makes it possible to build hierarchical models for both variances and means. We report experiments with artificial data which demonstrate the ability of the learning algorithm to find the underlying explanations---variance sources---for the variance in the data. Experiments with MEG data verify that variance sources are present in real-world signals.

A sparse representation approach for data matrices is presented in this paper, and the approach is then used in blind source separation. The algorithm can deal with blind source separation with fewer sensors than sources, which is difficult to solve with the general ICA approach. Blind source separation is then discussed based on this kind of sparse factorization approach, and a recoverability result is obtained which is suitable for the case of fewer sensors than sources. The blind separation approach consists of two steps: the first is to estimate the mixing matrix (the basis matrix in the sparse representation), and the second is to estimate the sources (the coefficient matrix). If the sources are sufficiently sparse, blind separation can be carried out directly. Otherwise, blind separation can be implemented in the time-frequency domain after wavelet packet transform preprocessing of the mixture signals. Four simulation examples are presented to illustrate the algorithms and reveal their performance. Finally, concluding remarks review the approach and outline directions for further study.
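The first step, estimating the mixing (basis) matrix, can be sketched in an idealised setting; the exactly-one-active-source assumption and the tiny spherical k-means used here are simplifications, not the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy setting: 3 sources, 2 sensors, sources so sparse that at most
# one is active at any time instant (an idealised assumption).
n_src, T = 3, 2000
S = np.zeros((n_src, T))
S[rng.integers(n_src, size=T), np.arange(T)] = rng.laplace(size=T)
A = rng.standard_normal((2, n_src))
A /= np.linalg.norm(A, axis=0)
X = A @ S

# Sparse sources make the data concentrate along the columns of A, so
# clustering the normalised data directions recovers those columns.
keep = np.linalg.norm(X, axis=0) > 1e-9
D = X[:, keep] / np.linalg.norm(X[:, keep], axis=0)
D *= np.sign(D[0])                      # fold antipodal directions

idx = [0]                               # farthest-point initialisation
for _ in range(n_src - 1):
    sims = np.max(np.abs(D[:, idx].T @ D), axis=0)
    idx.append(int(np.argmin(sims)))
C = D[:, idx].copy()

for _ in range(20):                     # a tiny spherical k-means
    labels = np.argmax(np.abs(C.T @ D), axis=0)
    for k in range(n_src):
        if np.any(labels == k):
            v = D[:, labels == k].mean(axis=1)
            C[:, k] = v / np.linalg.norm(v)

# Each estimated direction matches a column of A up to sign/permutation.
Af = A * np.sign(A[0])
match = np.max(np.abs(C.T @ Af), axis=0)
print(np.all(match > 0.99))  # True
```

With less sparse sources, the clusters blur, which is why the paper resorts to wavelet packet preprocessing to sparsify the mixtures first.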

In this work we show how temporal structure in time signals can be used in the framework of independent component analysis, assuming the signals arise from Markov chains of finite order. By taking into account the past of the underlying processes through time-embedding vectors, not only instantaneously independent but also uncoupled sources can be found. As a result, signals which are gaussian distributed at each time point can be decomposed as long as the time-embedding vectors are non-gaussian. Using the model of independent time-embedding vectors, we derive an algorithm (FastTeICA) which is similar to the well-known FastICA algorithm introduced by Hyvärinen (1999). A weakening of the strict assumption of independent embedding vectors, which still takes the dynamics of the processes into account for signal decomposition, can be achieved by assuming independent increments, i.e. the changes of state of the processes. Both approaches, independent states and independent dynamics, are special cases of the independence of the time-embedding vectors.

Similarities and distinctions have been pointed out between ICA and traditional multivariate methods such as factor analysis, principal component analysis and projection pursuit. In this paper, a new important connection between ICA and traditional factor analysis is made. The key of the connection is "factor rotation."

Mixture modelling techniques such as mixtures of principal component and factor analysers are very powerful in finding and representing Gaussian clusters in data. Meaningful representations may be lost, however, if these clusters are non-Gaussian. In this paper we propose extending the Gaussian-based analyser mixture model to an Independent Component Analyser mixture model. We employ recent developments in variational Bayesian inference and structure determination to construct a novel approach for modelling non-Gaussian clusters. We automatically determine the local dimensionality of each cluster and use variational Bayesian inference to calculate the most likely number of clusters in the data space. We demonstrate our framework by finding areas in images which are `self-similar' under the independence assumption of ICA.

MISEP has been proposed as a generalization of the INFOMAX method in two directions: (1) handling of nonlinear mixtures, and (2) learning the nonlinearities to be used at the outputs, making the method suitable for the separation of components with a wide range of statistical distributions. In all implementations up to now, MISEP has used multilayer perceptrons (MLPs) to perform the nonlinear ICA operation. Use of MLPs sometimes leads to relatively slow training, which has been attributed, at least in part, to the non-local character of the MLP's units. This paper investigates the possibility of using a network of radial basis function (RBF) units to perform the nonlinear ICA operation. It shows that the local character of the RBF network's units allows a significant speedup in the training of the system. The paper gives a brief introduction to the basics of the MISEP method, and presents experimental results showing the speed advantage of using an RBF-based network to perform the ICA operation.

This contribution contains a theoretical analysis of asymptotic stability requirements in {\em blind source separation} (BSS) algorithms. BSS extracts independent component signals from their mixtures without knowing either the mixing coefficients or the probability distributions of the source signals, and it is known that some algorithms work surprisingly well. ``Blind'' means that no {\em a priori} information is assumed to be available on either the mixture or the sources. This feature makes the BSS approach versatile, because it does not rely on the modelling of particular physical phenomena. Nevertheless, few papers address either convergence or stability of the estimators in the case where one makes wrong assumptions on the distribution of the sources. This paper presents and discusses stability conditions for BSS algorithms to avoid spurious stationary points in the case of instantaneous mixtures of independent and identically distributed sources.

In this paper sparseness measures are reviewed, extended and compared. Special attention is paid to measuring the sparseness of noisy data. We review and extend several definitions and measures of sparseness, including the $\ell^{0}$, $\ell^{p}$ and $\ell^{\epsilon}$ norms. A measure based on order statistics is also proposed. The concept of sparseness is extended to the case where a signal has a dominant value other than zero. The sparseness measures can be easily modified to correspond to this new definition. Eight different measures are compared in three examples. It turns out that different measures may give completely opposite results if the distribution does not have a unique mode at zero. In conclusion, we suggest that kurtosis should be avoided as a sparseness measure and recommend tanh functions for measuring noisy sparseness.
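A few of the measure families mentioned above can be sketched as follows; the exact functional forms (in particular the tanh variant) are assumptions of this illustration, not the paper's definitions:

```python
import numpy as np

rng = np.random.default_rng(5)

def l0_eps(x, eps=0.1):
    """epsilon-relaxed l^0: fraction of samples within eps of zero."""
    return np.mean(np.abs(x) < eps)

def lp_measure(x, p=0.5):
    """Normalised l^p 'norm' for 0 < p < 1 (lower = sparser)."""
    return np.mean(np.abs(x / np.std(x)) ** p)

def kurtosis_measure(x):
    x = x - x.mean()
    return np.mean(x**4) / np.mean(x**2) ** 2 - 3.0

def tanh_measure(x, a=1.0, b=2.0):
    """A tanh-based measure in the spirit of the paper (form assumed)."""
    return np.mean(np.tanh(a * np.abs(x / np.std(x)) ** b))

sparse = rng.laplace(size=50_000) * (rng.random(50_000) < 0.2)
dense = rng.standard_normal(50_000)

print(l0_eps(sparse) > l0_eps(dense))                       # more zeros
print(kurtosis_measure(sparse) > kurtosis_measure(dense))   # heavier tails
```

On noisy data the hard count in `l0_eps` and the unbounded fourth moment in the kurtosis become unreliable, which motivates the bounded tanh-type measures.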

The underdetermined Blind Source Separation (BSS) problem consists of estimating n sources from the measurements provided by m < n sensors. In the noise-free linear model, the measurements are a linear combination of the sources, so that the mixing process is represented by a rectangular mixing matrix of m rows and n columns. The solution process can be decomposed into two stages: first estimate the mixing matrix from the measurements, and then estimate the best solution compatible with the underdetermined set of linear equations. Most of the results presented for the underdetermined BSS problem are based on geometric ideas valid for the m=2 scenario. In this paper we extend these ideas to higher dimensions, and develop techniques both to estimate the mixing matrix and to invert the underdetermined linear problem that are valid for an arbitrary number of sources and measurements, provided 1 < m < n.
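The second stage, inverting the underdetermined system once A is known, is commonly posed as a minimum-$\ell^1$ problem; for small m and n it can be solved by enumerating basic solutions, as in this toy sketch (m=2, n=3, with an assumed one-active-source-per-sample sparsity):

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(3)

# m = 2 sensors, n = 3 sources; A is assumed already estimated.
m, n, T = 2, 3, 200
A = rng.standard_normal((m, n))
A /= np.linalg.norm(A, axis=0)

# Very sparse sources: exactly one active source per sample.
S = np.zeros((n, T))
S[rng.integers(n, size=T), np.arange(T)] = rng.laplace(size=T)
X = A @ S

# The minimum-l1 solution of the underdetermined system A s = x is
# attained at a basic solution using only m columns of A, so for small
# n we may simply enumerate all m-subsets of columns.
S_hat = np.zeros_like(S)
for t in range(T):
    best_cols, best_s, best_l1 = None, None, np.inf
    for cols in combinations(range(n), m):
        s_sub = np.linalg.solve(A[:, list(cols)], X[:, t])
        if np.abs(s_sub).sum() < best_l1:
            best_cols, best_s, best_l1 = cols, s_sub, np.abs(s_sub).sum()
    S_hat[list(best_cols), t] = best_s

print(np.allclose(S, S_hat, atol=1e-6))  # exact recovery in this sparse case
```

For larger m and n the enumeration is replaced by a linear program, but the geometry is the same: sufficiently sparse sources are recovered exactly.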

A large number of Independent Component Analysis (ICA) algorithms are based on the minimization of the statistical mutual information between the reconstructed signals, in order to achieve the source separation. While it has been demonstrated that a global minimum of such cost function will result in the separation of the statistically independent sources, it is an open problem to show that such cost function has a unique minimum (up to scaling and permutations of the signals). Without such result, there is no guarantee that the related ICA algorithms will not get stuck in local minima, and hence, return signals that are statistically dependent. We derive a novel result showing that for the special case of mixtures of two independent and identically distributed (i.i.d.) signals with symmetric, nearly gaussian probability density functions, such objective function has no local minima. This result is shown to yield a useful extension of the well-known entropy power inequality.

Spectral Flatness Measure is a well-known method for quantifying the amount of randomness (or stochasticity) that is present in a signal. This measure has been widely used in signal compression, audio characterization and retrieval. In this paper we present an information-theoretic generalization of this measure that is formulated in terms of a rate of growth of multi-information of a non-Gaussian linear process. Two new measures are defined and methods for their estimation are presented: 1) considering a source-filter model, a Generalized Spectral Flatness Measure is developed that estimates the excessive structure due to non-Gaussianity of the innovation process, and 2) using a geometrical embedding, a block-wise information redundancy is formulated using signal representation in an Independent Components basis. The two measures are applied for the problem of voiced/unvoiced determination in speech signals and analysis of spectral (timbral) dynamics in musical signals.
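The classical Spectral Flatness Measure being generalized here is the ratio of the geometric to the arithmetic mean of the power spectrum; a minimal sketch (window choice and FFT length are assumptions):

```python
import numpy as np

rng = np.random.default_rng(6)

def spectral_flatness(x, nfft=1024):
    """Classical SFM: geometric mean of the power spectrum divided by
    its arithmetic mean; ~1 for white noise, -> 0 for pure tones."""
    X = np.fft.rfft(x[:nfft] * np.hanning(nfft))
    psd = np.abs(X) ** 2 + 1e-20
    return np.exp(np.mean(np.log(psd))) / np.mean(psd)

noise = rng.standard_normal(1024)
tone = np.sin(2 * np.pi * 50 * np.arange(1024) / 1024)

print(spectral_flatness(noise) > spectral_flatness(tone))  # True
```

The paper's generalization replaces this second-order spectral ratio with a rate of growth of multi-information, so that non-Gaussian structure in the innovation process is also counted as "non-flatness".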

Usually, noise is considered to be destructive. We present a new method that constructively injects noise to assess the reliability and the group structure of empirical ICA components. Simulations show that the true root-mean-square angle distances between the real sources and the source estimates can be approximated by our method. In a toy experiment, we show that we are also able to reveal the underlying group structure of extracted ICA components. Furthermore, an experiment with fetal ECG data demonstrates that our approach is useful for exploratory data analysis of real-world data.

In this paper, the nonlinear blind source separation problem is addressed by using a multilayer perceptron (MLP) as the separating system, which is justified by the universal approximation property of MLP networks. An adaptive learning algorithm for a perceptron with two hidden layers is presented. The algorithm minimizes the mutual information between the outputs of the MLP. The performance of the proposed method is illustrated by some experiments.

We describe a method of visualising geometrically the dependency structure of a distributed representation. The mutual information between each pair of components is estimated using a nonlinear correlation coefficient, in terms of which a distance measure is defined. Multidimensional scaling is then used to generate a spatial configuration that reproduces these distances, the end result being a spatial representation of dependency between the components, from which an appropriate topology for the representation may be inferred. The method is applied to ICA representations of speech and music.
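The pipeline just described (pairwise dependency, a distance, then multidimensional scaling) can be sketched with classical MDS; the linear correlation coefficient and the distance formula used here are stand-ins for the paper's nonlinear correlation:

```python
import numpy as np

rng = np.random.default_rng(8)

# Three components: c0 and c1 are dependent (they share a common
# factor), c2 is independent of both.
f = rng.standard_normal(10_000)
comps = np.vstack([
    f + 0.3 * rng.standard_normal(10_000),
    f + 0.3 * rng.standard_normal(10_000),
    rng.standard_normal(10_000),
])

# Pairwise dependency via a (here: linear) correlation coefficient,
# turned into a distance.
r = np.abs(np.corrcoef(comps))
D = np.sqrt(1.0 - r**2)

# Classical MDS: double-centre the squared distances and embed.
n = D.shape[0]
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ (D**2) @ J
w, V = np.linalg.eigh(B)
coords = V[:, ::-1][:, :2] * np.sqrt(np.maximum(w[::-1][:2], 0))

# Dependent components end up closer together in the embedding.
d01 = np.linalg.norm(coords[0] - coords[1])
d02 = np.linalg.norm(coords[0] - coords[2])
print(d01 < d02)  # True
```

Reading off the spatial configuration then suggests a topology for the representation, as the paper does for ICA components of speech and music.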

Multidimensional $^{1}$H NMR spectra of biomolecules dissolved in light water are contaminated by an intense water artefact. We discuss the application of the generalized eigenvalue decomposition method using a matrix pencil to explore the time structure of the signals in order to separate out the water artefacts. 2D NOESY spectra of simple solutes as well as dissolved proteins are studied. Results are also compared to those obtained with the FastICA algorithm.

Methods to combine speedup terms with the injection of supervisory concepts are presented. The speedup is based upon iterative optimization of the convex divergence. The injection of supervisory information is realized by adding a term which reduces an additional cost for a specified concept. Since the convex divergence includes the usual logarithmic information measures, its direct application gives faster algorithms than existing logarithmic methods. This paper first presents a list of newly obtained general properties of the convex divergence. These properties are then used to derive faster algorithms for independent component analysis, after which an additional term for incorporating supervisory information is introduced. The efficiency of the total algorithm is tested using a set of real data: brain fMRI time series. Successful results in terms of convergence speed, software complexity, and extracted brain maps are reported. Finally, another class of convex divergence optimization, the alpha-EM algorithm, is commented upon.

Atrial fibrillation (AF) is the most common arrhythmia encountered in clinical practice. The adaptive noise canceller (ANC) and independent component analysis (ICA) are powerful tools for separating signals from their mixtures. For a better understanding of the mechanism and characteristics of AF, artifacts must be eliminated from the signals measured over the atria. We use an ANC to subtract the influence of the ventricles from the recorded signals, and apply the ICA technique to remove other artifacts so as to improve the signal-to-noise ratio of the atrial signals.

A new way of understanding the dynamics of the brain function underlying SEF is proposed, using the decorrelation method of ICA and our blind identification method for transfer functions based on feedback system theory. First, MEG data are analyzed at a specific frequency by the decorrelation method of ICA, and two components that have dipole-type patterns on the scalp are selected. By applying the inverse ICA to these two components, we obtain time-series data oriented to the SEF. From the SEF time-series data, transfer functions between two regions on the scalp can be obtained using the blind identification method based on feedback system theory. The impulse responses between the main SEF brain region and the corresponding region in the opposite hemisphere are analyzed, and their directional dependence is found.

The purpose of this preliminary work was to examine the usefulness of independent component analysis (ICA) as a preprocessing tool for the decomposition of electromyograms (EMGs) into their constituent elements (motor unit action potentials). An experiment was carried out with a healthy subject performing isometric contractions at different force levels. Surface electromyograms were measured with an electrode array. Satisfactory decompositions were obtained for levels up to 10% of maximal voluntary contraction (MVC). For 20 and 30% MVC, the ICA results were irregular and in many cases difficult to interpret.

In this paper, an attempt is made to classify the EEG signals of letter imagery tasks using combined independent component analysis and a probabilistic neural network. The role of the principal/independent component analysis is to mitigate the effect of EOG artifacts within each single-trial EEG pattern. Experimental results show an overall improvement of around 22.2% in pattern classification accuracy, in comparison with the LPC spectral analysis commonly employed in speech recognition tasks.

In this paper, a novel approach based on the fourth-order cross-moment in linear blind source separation (BSS) is considered. The algorithm is similar to the FastICA algorithm and has the same efficiency. The BSS algorithm is applied in spectral analysis to extract two component images from a given spectral image. The separated images have different statistics, spectra and contrast. This is used for color contrast enhancement when a color image is reproduced from the reconstructed spectral image.

Digital watermarking technology has developed quickly in recent years and is widely applied to protect the copyright of digital images. A digital watermark is information that is imperceptibly and robustly embedded in the host data such that it cannot be removed. This paper proposes a method based on independent component analysis (ICA) for digital image watermarking. The experimental results indicate that the presented approach is remarkably effective in detecting and extracting digital image watermarks. The robustness of the watermarking is also tested to demonstrate the effectiveness of the proposed approach.

The scheme proposed in this paper combines independent component analysis (ICA) with the discrete wavelet transform (DWT) and the discrete cosine transform (DCT). First, the original image is decomposed by a 2-D DWT and the detail sub-bands are reserved. Then, the approximation image is transformed by the DCT and embedded with the watermark. The watermark is detected through ICA. The simulation results demonstrate good robustness and invisibility. Watermark detection is improved greatly compared with the traditional subtraction detection scheme.

We propose a new method for recognizing fine-structure vegetation change in farmland where the covering plant is unknown, using a remote sensing image in which each pixel value indicates spectral data. This technique makes it possible to separate the qualitative change due to ecological characteristics, such as the chlorophyll quantity, from the quantitative change in vegetation coverage, which has been difficult with conventional methods. In this paper, the peculiar covering pattern of vegetation in farmland is modeled to generate mixed spectra as simulation data; independent component analysis (ICA) is then applied to the mixed spectra to estimate the original pure spectra and the mixture ratio simultaneously. It is demonstrated that this technique is useful even when the mixed spectra include fluctuations in vegetation cover and noise components such as thermal sensor noise and atmospheric noise in real data. Moreover, this technique is applicable to the recognition of periodically distributed covering patterns observed, for example, in fiberscope images, microscope images, or semiconductor inspection images.

We address the problem of Blind Source Separation (BSS) of superimposed images and, in particular, consider the recovery of a scene recorded through a semireflective medium (e.g. a glass windshield) from its mixture with a virtual reflected image. We extend the Sparse ICA (SPICA) approach to BSS and apply it to the separation of the desired image from the superimposed images, without any a priori knowledge about its structure and/or statistics. Advances in the SPICA approach are discussed. Simulations and experimental results illustrate the efficiency of the proposed approach, and of its specific implementation in a simple algorithm of low computational cost. The approach and the algorithm are generic in that they can be adapted and applied to a wide range of BSS problems involving one-dimensional signals or images.

We consider the problem of detecting note onsets in music under the hypothesis that the onsets, and events in general, are essentially \emph{surprising moments}, and that event detection should therefore be based on an explicit probability model of the sensory input, which generates a moment-by-moment trace of the probability of each observation as it is made. Relatively unexpected events should thus appear as clear spikes. In this way, several well known methods of onset detection can be understood in terms of an implicit probability model. We apply ICA to the problem as an adaptive non-Gaussian model, and investigate the use of ICA as a conditional probability model. The results obtained using several methods on two extracts of piano music are presented and compared. Finally, we tentatively suggest an information theoretic interpretation of the approach.

Recognition of blood cell types is of great importance due to its clinical diagnostic applications. Here, a method based on PCA followed by Bayesian classification is introduced for the identification of blood cells. We have modified the work of Turk and Pentland on face recognition and extended it to cell recognition. Their method uses the standard method of eigenvector selection, considers only monochrome images, and is not sufficiently tolerant to geometrical changes. Here the idea is extended to colour patterns. We preprocess the images, adjusting their size and rotation with fast methods. The eigencells are selected based on the minimisation of similarities among the various sets, and finally a classifier identifies the cell types by examining the three-fold intensity-colour information. This overcomes many problems in cell classification where either only certain cells are recognised or constraints such as geometrical variations are imposed.

In this paper, we briefly review recent advances in blind source separation (BSS) for nonlinear mixing models. After a general introduction to the nonlinear BSS and ICA (Independent Component Analysis) problems, we discuss uniqueness issues in more detail, presenting some new results. A fundamental difficulty in the nonlinear BSS problem, and even more so in the nonlinear ICA problem, is that they are nonunique without extra constraints, which are often implemented by using a suitable regularization. Post-nonlinear mixtures are an important special case, where a nonlinearity is applied to linear mixtures. For such mixtures, the ambiguities are essentially the same as for the linear ICA or BSS problems. In the latter part of this paper, various separation techniques proposed for post-nonlinear mixtures and general nonlinear mixtures are reviewed.

The building blocks introduced earlier by us in [1] are used for constructing a hierarchical nonlinear model for nonlinear factor analysis. We call the resulting method hierarchical nonlinear factor analysis (HNFA). The variational Bayesian learning algorithm used in this method has a linear computational complexity, and it is able to infer the structure of the model in addition to estimating the unknown parameters. We show how nonlinear mixtures can be separated by first estimating a nonlinear subspace using HNFA and then rotating the subspace using linear independent component analysis. Experimental results show that the cost function minimised during learning predicts well the quality of the estimated subspace.

This work focuses on a quadratic dependence measure which can be used for blind source separation. After defining it, we show some links with other quadratic dependence measures used by Feuerverger and Rosenblatt. We develop a practical way of computing this measure, which leads us to a new solution for blind source separation in the case of nonlinear mixtures. It consists in first estimating the theoretical quadratic measure, then computing its relative gradient, and finally minimizing it through a gradient descent method. Some examples illustrate our method on post-nonlinear mixtures.

At the previous workshop (ICA2001) we proposed the ACE-TD method that reduces the post-nonlinear blind source separation problem (PNL BSS) to a linear BSS problem. The method utilizes the Alternating Conditional Expectation (ACE) algorithm to approximately invert the (post-)nonlinear functions. In this contribution, we propose an alternative procedure, called the Gaussianizing transformation, which is motivated by the fact that the linearly mixed signals before the nonlinear transformation are approximately Gaussian distributed. This heuristic, but simple and efficient, procedure yields results similar to the ACE method and can thus be used as a fast and effective equalization method. After equalizing the nonlinearities, temporal decorrelation separation (TDSEP) allows us to recover the source signals. Numerical simulations on realistic examples are performed to compare "Gauss-TD" with "ACE-TD".
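The Gaussianizing idea above can be sketched as a rank-based marginal transform; the function below is a generic marginal Gaussianization (the paper's exact procedure may differ, and the tanh distortion is only a stand-in for a post-nonlinearity):

```python
import numpy as np
from scipy.stats import norm

def gaussianize(x):
    """Map a 1-D signal to an approximately standard-normal marginal
    by sending empirical ranks through the inverse Gaussian CDF."""
    n = len(x)
    ranks = np.argsort(np.argsort(x))       # 0 .. n-1
    u = (ranks + 0.5) / n                   # uniform grid in (0, 1)
    return norm.ppf(u)

rng = np.random.default_rng(0)
x = np.tanh(rng.standard_normal(5000))      # nonlinearly distorted Gaussian
g = gaussianize(x)                          # approximately N(0, 1) again
```

Because the transform depends only on ranks, it inverts any monotonic sensor nonlinearity up to the sampling error of the empirical distribution.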

We present a new algorithm for nonlinear blind source separation, which is based on the geometry of the mixture space. This space is decomposed into a set of concentric rings, on which we perform ordinary linear ICA after a central transformation; we show that this transformation can be omitted if we use linear geometric ICA. In either case, we obtain a set of images of ring points under the original mixing mapping. Putting these together, we can reconstruct the mixing mapping. Indeed, this approach subsumes linear ICA and post-nonlinear ICA after whitening. The paper finishes with various examples on toy and speech data.

Independent component analysis (ICA) has found a fruitful application in the analysis of functional magnetic resonance imaging (fMRI) data. A principal advantage of this approach is its applicability to cognitive paradigms for which detailed a priori models of brain activity are not available. ICA has been successfully utilized in a number of exciting fMRI applications including the identification of various signal-types (e.g. task and transiently task-related, and physiology-related signals) in the spatial or temporal domain, the analysis of multi-subject fMRI data, the incorporation of a priori information, and for the analysis of complex-valued fMRI data (which has proved challenging for standard approaches). In this paper, we 1) introduce fMRI data and its properties, 2) review the basic motivation for using ICA on fMRI data, and 3) review the current work on ICA of fMRI with some specific examples from our own work. The purpose of this paper is to motivate ICA research to focus upon this exciting application.

Ten spatial infomax ICA decompositions were performed on two fMRI data sets collected from the same subject. The maximally independent spatial components were then tested across decompositions for one-to-one correspondences. Matching independent component maps by mutual information alone proved ineffective. Matching component map pairs by correlating their z-transformed voxel map weights demonstrated that the top 100 components were stably reproduced in each of the ten decompositions. Infomax ICA therefore provided a stable decomposition of fMRI data into spatially independent components.
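The matching step can be illustrated by correlating z-scored component maps between two decompositions and greedily pairing the strongest correlations; the synthetic maps, permutation, and greedy assignment below are illustrative, not the authors' exact pipeline:

```python
import numpy as np

def match_components(A, B):
    """Greedy one-to-one matching of component maps (rows of A and B)
    by the absolute correlation of their z-scored voxel weights."""
    za = (A - A.mean(1, keepdims=True)) / A.std(1, keepdims=True)
    zb = (B - B.mean(1, keepdims=True)) / B.std(1, keepdims=True)
    corr = np.abs(za @ zb.T) / A.shape[1]    # |Pearson correlation| matrix
    pairs = []
    while corr.max() > 0:
        i, j = np.unravel_index(corr.argmax(), corr.shape)
        pairs.append((i, j, corr[i, j]))
        corr[i, :] = -1                      # remove matched row ...
        corr[:, j] = -1                      # ... and column
    return pairs

rng = np.random.default_rng(1)
maps = rng.standard_normal((4, 1000))
perm = [2, 0, 3, 1]                          # second run permutes components
noisy = maps[perm] + 0.1 * rng.standard_normal((4, 1000))
pairs = match_components(maps, noisy)
```

Each recovered pair (i, j) satisfies perm[j] == i, i.e. the permutation between the two decompositions is identified despite the added noise.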

Independent Component Analysis is becoming a popular exploratory method for analysing complex data such as that from FMRI experiments. The application of such `model-free' methods, however, has been somewhat restricted both by the view that results can be uninterpretable and by the lack of ability to quantify statistical significance. We present an integrated approach to Probabilistic ICA for FMRI data that allows for non-square mixing in the presence of Gaussian noise. We employ an objective estimation of the amount of Gaussian noise through Bayesian analysis of the true dimensionality of the data, i.e. the number of activation and non-Gaussian noise sources. Reducing the data to this `true' subspace before the ICA decomposition automatically yields an estimate of the noise, making it possible to assign significance to voxels in ICA spatial maps. In this way we are not only able to carry out probabilistic modelling, but also to reduce problems of interpretation and overfitting. We use an alternative-hypothesis testing approach for inference based on Gaussian + Gamma mixture models. The performance of our approach is illustrated and evaluated on real and complex artificial FMRI data, and compared to the spatio-temporal accuracy of results obtained from standard ICA and standard analyses in the `General Linear Model' (GLM) framework.

The basic principles used by the auditory cortex to decipher sensory streams are less well understood than those used by the visual cortex. Based on previous research on animals, functional Magnetic Resonance Imaging (fMRI) responses in the human auditory cortex to simple streams of colored acoustic noise are expected to follow both linear (sustained) and non-linear (transient) spatio-temporal patterns. The analyses employed in previous fMRI studies only allow the detection of linear responses. Here we present a data analysis strategy that allows the detection and separation of both linear and non-linear responses. This strategy is based on the hierarchical combination of two ``transposed'' variants of independent component analysis (ICA), operating in the space and time domains.

The cerebral cortex is the main target of analysis in many functional magnetic resonance imaging (fMRI) studies. Since only about 20% of the voxels of a typical functional data set lie within the cortex, statistical analysis can be restricted to the subset of voxels obtained after cortex segmentation. Here, we describe a novel approach for data-driven analysis of single-subject fMRI time-series that combines techniques for reconstructing the cortical surface of the subject's brain with spatial independent component analysis (ICA) of the functional time-courses (cortex-based ICA, or cbICA). Compared to conventional, anatomically unconstrained ICA, besides reducing the computational demand (20,000 vs 100,000 voxel time-courses), our approach improves the detection of cortical components and the estimation of their time-courses, particularly in the case of event-related designs.

To be applicable in realistic scenarios, blind source separation approaches should handle both non-square cases and the presence of noise. We consider an additive-noise mixing model with an arbitrary number of sensors and possibly more sources than sensors (the non-square case) when the sources are disjointly orthogonal. We formulate the maximum likelihood estimation of the coherent noise model, suitable when the sensors are nearby and the noise field is close to isotropic, and also under direct-path far-field assumptions. The implementation of the derived criterion involves iterating two steps: a partitioning of the time-frequency plane for separation, followed by an optimization of the mixing parameter estimates. The structure of the solution is surprising at first but logical: it consists of a beamforming linear filter, which reduces noise, and a filter across the time-frequency domain to separate the sources. The solution is applicable to an arbitrary number of microphones and sources. Experimentally, we show the capability of the technique to separate four voices from two, four, six, and eight channel recordings in the presence of isotropic noise.

Independent Component Analysis is a relatively new data analysis method, and its computational algorithms and applications have been widely studied recently. Most applications, however, are in the field of one-dimensional data analysis, e.g. sound data analysis, and few applications to two-dimensional data (e.g., image data) have been studied. In this paper we give a new blind deconvolution algorithm for images. In our method, Gabor filters are applied to the blurred image, and their output images together with the original blurred image are provided as input data to ICA. One of the separated components is a restored image. We present experiments on artificial data and on a real blurred image, and give a simple model to explain the validity of our algorithm.

A speech enhancement scheme is presented integrating spatial and temporal signal processing methods for blind denoising in non-stationary noise environments. In a first stage, spatially localized interfering point sources are separated from noisy speech signals recorded by two microphones using a Blind Source Separation (BSS) algorithm that assumes no a priori knowledge about the sources involved. Spatially distributed background noise is removed in a second processing step: the BSS output channel containing the desired speaker is filtered with a time-varying Wiener filter. Noise power estimates for the filter coefficients are computed from time intervals in which the desired speaker is absent, identified by comparing the signal energies of the separated sources from the BSS stage. The scheme's performance is illustrated by speech recognition experiments on real recordings corrupted by babble noise and compared to conventional beamforming and single-channel denoising techniques.
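The second-stage filtering can be sketched as a per-bin Wiener gain driven by a noise power estimate (in the scheme above, taken from speaker-absent intervals); the function below is a generic textbook form, not the paper's exact estimator:

```python
import numpy as np

def wiener_gain(noisy_power, noise_power, floor=0.1):
    """Wiener gain H = SNR / (1 + SNR), with the SNR estimated by
    power subtraction and limited below by a spectral floor to
    reduce musical noise."""
    snr = np.maximum(noisy_power / np.maximum(noise_power, 1e-12) - 1.0, 0.0)
    return np.maximum(snr / (1.0 + snr), floor)

# high-SNR bins pass almost unchanged; noise-only bins are floored
g_hi = wiener_gain(np.array([100.0]), np.array([1.0]))
g_lo = wiener_gain(np.array([1.0]), np.array([1.0]))
```

Applied per frequency bin and per frame to the BSS output spectrum, this yields the time-varying filter described above.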

Whitening processing methods are proposed to improve the effectiveness of blind separation of speech sources based on ADF. The proposed methods include preemphasis, prewhitening, and joint linear prediction of the common component of the speech sources. The effect of ADF filter length on source separation performance was also investigated. Experimental data were generated by convolving TIMIT speech with acoustic path impulse responses measured in a real acoustic environment, where microphone-source distances were approximately 2 m and the initial target-to-interference ratio was 0 dB. The proposed methods significantly sped up the convergence rate and improved the accuracy of automatic phone recognition on target speech. The preemphasis and prewhitening methods alone produced a large impact on system performance, and combining preemphasis with joint prediction yielded the highest phone recognition accuracy.

We propose a stable algorithm for blind source separation (BSS) combining multistage ICA (MSICA) and linear prediction. MSICA, previously proposed by the authors, performs a rough separation by frequency-domain ICA (FDICA), followed by time-domain ICA (TDICA) to remove residual crosstalk. For temporally correlated signals, we must use TDICA with a nonholonomic constraint to avoid the decorrelation effect of the holonomic constraint. However, stability cannot be guaranteed in the nonholonomic case. To solve this problem, linear predictors estimated from the signals roughly separated by FDICA are inserted before the holonomic TDICA as prewhitening, and dewhitening is performed after TDICA. The stability of the proposed algorithm is guaranteed by the holonomic constraint, and the pre/dewhitening prevents the decorrelation. Experiments in a reverberant room reveal that the algorithm achieves higher stability and separation performance.

In this paper we propose a general method for separating mixtures of multiple audio signals observed in a real acoustic environment. The multipath nature of acoustic propagation is addressed by the use of the FIR polynomial matrix algebra, while spatio-temporal separation is achieved by entropy maximization using the natural gradient algorithm. The undesired temporal whiteness of the estimates is overcome with the use of linear prediction (LP) analysis. As opposed to a previous LP-based method, no assumptions on relative strengths of individual sources to specific mixtures are made. Other benefits such as reduced computational complexity and increased convergence speed are also highlighted. Finally, a number of experiments demonstrate the validity and general applicability of the proposed method.

In this paper we give a unified presentation of previous works by A. Belouchrani, K. Abed-Meraim and co-authors on the separation of FIR convolutive mixtures using (joint) block-diagonalization. We first present general equations in the stochastic context. The practical implementation of the general method is then studied and linked with previous works for stationary and non-stationary sources. The non-stationary case is studied in particular within a time-frequency framework: we introduce the Spatial Wigner-Ville Spectrum and propose a criterion based on single block auto-term identification to select efficiently, in practice, the matrices to be jointly block-diagonalized.

This paper presents a novel approach to speech extraction that combines subband independent component analysis and a neural memory. In the approach, probabilistic neural networks followed by the subband independent component analysis processing units are used as the neural memory, first to identify the speaker and then to compensate for the `side-effects', i.e., the scaling and permutation disorder, both of which are particularly problematic for subband blind extraction. A simulation study shows that the combined scheme can effectively extract the speech signal of interest from instantaneous/delayed mixtures, in comparison with conventional subband/fullband approaches.

This paper deals with the blind separation of convolutively mixed speech sources. The proposed methods take advantage of the a priori knowledge that speech signals contain silences. They consist in first detecting silence phases in the source signals and then identifying each filter of the considered separating systems in such a phase. The criteria used in both stages are based on the power (cross-)spectra of the observations: their time-segmented coherence function is first used to detect silence phases, and the filters to be identified are then expressed as ratios of observation power (cross-)spectra. This general approach is applied to various separating systems, depending on i) whether the considered structures are symmetrical, asymmetrical, or asymmetrical with a complementary part, and ii) whether or not they include a post-processing stage for filtering the extracted sources. The performance of these approaches and of two methods from the literature is investigated by means of experimental tests with speech sources mixed through real in-car acoustical transfer functions. This shows that the proposed approaches yield an interesting performance/complexity trade-off compared to previously reported methods.

We propose a new algorithm for blind source separation (BSS), in which independent component analysis (ICA) and beamforming are combined to resolve the slow convergence of ICA optimization. The proposed method consists of four parts: (1) frequency-domain ICA with direction-of-arrival (DOA) estimation, (2) null beamforming based on the estimated DOA, (3) diversity between (1) and (2) in both the iteration and the frequency domain, and (4) subband elimination (SBE) based on the independence among the separated signals. The temporal alternation between ICA and beamforming achieves fast, high-accuracy convergence, and SBE forcibly eliminates the subband components in which the separation could not be performed well. Experiments in a real car environment reveal that the proposed method improves the quality of the separated speech and the word recognition rates for both directional and diffuse noise.

Blind separation of multi-microphone speech signals is of great interest for applications such as in-car speech recognition or hands-free telephony. Due to the convolutive mixing process, source separation is ideally carried out in the frequency domain, separately for each frequency band. The major weakness of this approach is the indeterminacy of the permutation matrix, which can differ between frequency bands, leading to severe degradation of separation performance. However, if the beampatterns of the ICA demixing filters are calculated, their spatial zero directions can be used to resolve the permutation ambiguity. Since the direction of the spatial zeros varies strongly over frequency due to reflections and reverberation, EM estimation has been employed to obtain a statistical model of the source directions, and subsequently, permutations are corrected by a maximum likelihood criterion. In this way, blind separation of sources is achieved even in reverberant environments, with SNR improvements of up to 6 dB at a reverberation time of 320 ms.

A blind source deconvolution method without the indeterminacy of permutation and scaling is proposed, using notable features of the split spectrum and locational information on the signal sources. A method for extracting human speech exclusively is also proposed, taking advantage of the facts that FastICA separates sources from their mixtures in order of decreasing non-Gaussianity and that human speech usually has larger non-Gaussianity than noise. The proposed methods have been verified by several experiments in a real room.
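The speech-selection rule above — rank extracted components by non-Gaussianity and keep the most non-Gaussian one — can be illustrated with an excess-kurtosis ranking (the Laplacian stands in for speech and the uniform signal for noise; real separated outputs would be used in practice):

```python
import numpy as np

def excess_kurtosis(y):
    """Sample excess kurtosis: 0 for Gaussian, > 0 super-Gaussian,
    < 0 sub-Gaussian."""
    z = (y - y.mean()) / y.std()
    return np.mean(z ** 4) - 3.0

rng = np.random.default_rng(2)
speech_like = rng.laplace(size=20000)     # super-Gaussian, speech-like
noise_like = rng.uniform(-1, 1, 20000)    # sub-Gaussian noise
sources = [noise_like, speech_like]

# pick the component with the largest |non-Gaussianity| as the speech estimate
best = max(range(len(sources)), key=lambda i: abs(excess_kurtosis(sources[i])))
```

Since the Laplacian's excess kurtosis (about +3) dominates the uniform's (about -1.2), the speech-like component is selected.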

ICA deals with finding linear projections in the input space along which the data exhibits the most independence. Therefore, mutual information between the projected outputs, which are usually called the separated outputs due to links with blind source separation (BSS), is considered to be a natural criterion for ICA. Minimization of the mutual information requires primarily the estimation of this quantity from the samples, and then adaptation of the separation matrix parameters using a suitable optimization approach. In this paper, we present a numerical procedure to estimate an upper bound for the mutual information based on density estimates motivated by Jaynes' maximum entropy principle. The gradient of the mutual information with respect to the adaptive parameters then turns out to be extremely simple.

We present a new gradient algorithm to perform blind signal separation (BSS). The algorithm is obtained as a trade-off between the ordinary gradient algorithm and the natural gradient algorithm. It provides better performance than the ordinary gradient algorithm and is free from the small-step-size restriction of the natural gradient algorithm. In addition, the algorithm requires less computation than the other gradient algorithms. For theoretical support, local stability at desired solutions is proven for a simple network. Simulation results indicate that the algorithm efficiently provides a solution for BSS.
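The two endpoint update rules being traded off can be sketched as below for an InfoMax-style criterion with a tanh score; the convex combination via `lam` is a hypothetical stand-in for the paper's actual interpolation:

```python
import numpy as np

def update(W, x, eta, lam):
    """One stochastic BSS update. lam = 0 gives the ordinary gradient,
    lam = 1 the natural gradient; intermediate lam is a hypothetical
    trade-off (the paper's actual algorithm may interpolate differently)."""
    y = W @ x
    f = np.tanh(y)                           # score for super-Gaussian sources
    grad_ord = np.linalg.inv(W).T - np.outer(f, x)
    grad_nat = (np.eye(len(y)) - np.outer(f, y)) @ W
    return W + eta * ((1 - lam) * grad_ord + lam * grad_nat)

# demo: separate two Laplacian sources with the natural-gradient endpoint
rng = np.random.default_rng(0)
S = rng.laplace(size=(2, 20000))
A = np.array([[1.0, 0.6], [0.4, 1.0]])
X = A @ S
W = np.eye(2)
for t in range(20000):
    W = update(W, X[:, t], 0.01, 1.0)
G = np.abs(W @ A)                            # should approach a scaled permutation
```

The natural-gradient endpoint avoids the matrix inversion needed by the ordinary gradient, which is one source of the computational difference discussed above.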

The separation of unobserved sources from mixed observed data is a fundamental signal processing problem. Most proposed techniques for solving this problem rely on the assumption that the source signals are independent, or at least uncorrelated. In this paper an algorithm is introduced for source signals that are correlated with each other. The method uses a preprocessing technique based on the Wold decomposition principle to extract the desired information from the predictable part of the observed data, and exploits approaches based on second-order statistics to estimate the mixing matrix and the source signals.

In this paper, the linear (feed-forward) multilayer ICA algorithm is proposed for the blind separation of high-dimensional mixed signals. There are two main phases in each layer. One is the local ICA phase, where the mixed signals are divided into small local modules and a simple ICA is applied to each module. The other is the mapping phase, where the locally separated signals are arranged along a line so that more highly correlated signals are placed nearer to each other. By repeating these two phases, the multilayer ICA algorithm can find all the (global) independent components through ICA processing on local modules only. Numerical experiments on artificial data and natural scenes show the validity of this algorithm, and verify that it is more efficient than the standard FastICA algorithm for "locally-biased", high-dimensional observed signals such as natural scenes.

The FastICA algorithm by Hyvarinen and Oja is a popular block-based technique for independent component analysis and blind source separation tasks. In this paper, we provide a complete convergence analysis of the FastICA algorithm employing a cubic update nonlinearity for linear source mixtures. Our analysis shows that all of the FastICA algorithm's stationary points correspond to desirable separating solutions. In addition, numerical studies of the analysis equations indicate that, during the initial stages of adaptation, every iteration of the FastICA algorithm reduces the normalized inter-channel interference by a factor of three (4.77 dB). Simulations verify that the analysis is accurate.
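A one-unit version of the cubic-nonlinearity FastICA iteration analyzed here can be sketched as follows (eigendecomposition prewhitening and the two-source demo mixture are illustrative choices):

```python
import numpy as np

def fastica_cubic(Z, iters=50, seed=0):
    """One-unit FastICA with the cubic nonlinearity on prewhitened data Z
    (components x samples): w <- E[z (w^T z)^3] - 3 w, then renormalize."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(Z.shape[0])
    w /= np.linalg.norm(w)
    for _ in range(iters):
        u = w @ Z
        w = (Z * u ** 3).mean(axis=1) - 3 * w
        w /= np.linalg.norm(w)
    return w

# mix a super-Gaussian and a sub-Gaussian source, whiten, then extract
rng = np.random.default_rng(3)
S = np.vstack([rng.laplace(size=20000), rng.uniform(-1, 1, 20000)])
X = np.array([[1.0, 0.5], [0.3, 1.0]]) @ S
X -= X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(X))
Z = np.diag(d ** -0.5) @ E.T @ X          # whitened mixtures
y = fastica_cubic(Z) @ Z                  # one recovered source (up to sign/scale)
```

The fixed-point update typically converges in a handful of iterations, consistent with the fast interference decay described above.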

Missing data is common in real-world datasets and is a problem for many estimation techniques. We have developed a variational Bayesian method to perform Independent Component Analysis (ICA) on high-dimensional data containing missing entries. Missing data are handled naturally in the Bayesian framework by integrating over them under the generative density model. Modeling the distributions of the independent sources with mixtures of Gaussians allows sources with different kurtosis and skewness to be estimated. The variational Bayesian method automatically determines the dimensionality of the data and yields an accurate density model for the observed data without overfitting problems. This allows direct probability estimation of missing values in the high-dimensional space and avoids dimension-reduction preprocessing, which is not feasible with missing data.

This paper presents novel Newton algorithms for the blind decorrelation of real and complex processes. They are globally convergent and exhibit an interesting relationship with the natural gradient algorithm for blind decorrelation and the Goodall learning rule. Indeed, we show that these latter two algorithms can be obtained from their Newton decorrelation versions when exact matrix inversion is replaced by an iterative approximation to it.

In blind source separation, convergence and separation performance depend strongly on the relation between the probability density function (pdf) of the output signals $y$ and the nonlinear functions $f(y)$ used in updating the coefficients of the separation block. This relation has been analyzed based on the kurtosis $\kappa_4$ of the output signals. The nonlinear functions $\tanh(y)$ and $y^3$ have been suggested for super-Gaussian ($\kappa_4 \ge 0$) and sub-Gaussian ($\kappa_4<0$) distributions, respectively. Furthermore, an adaptive nonlinear function, which can be continuously controlled, was proposed: it is formed as a linear combination of $y^3$ and $\tanh(y)$, with linear weights controlled by the estimated $\kappa_4$. Although the latter can improve separation performance, its performance is still limited, especially in difficult separation problems. In this paper, a new method is proposed in which the nonlinear functions are directly controlled by the estimated pdf $p(y)$ of the separation block outputs $y$. $p(y)$ is expressed by a Gaussian mixture model, whose parameters are iteratively estimated sample by sample. $f(y)$ and $p(y)$ are related by the stability condition $f(y)=-(dp(y)/dy)/p(y)$. Blind source separation of 2 to 5 channel music signals is simulated. The proposed method is superior to the conventional methods above, and three Gaussian functions are enough to express the output pdf.
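The earlier kurtosis-controlled nonlinearity that this paper improves upon can be sketched as a blend of the two classic scores; the logistic control law mapping the estimated $\kappa_4$ to a blend weight is an illustrative choice, not the published formula:

```python
import numpy as np

def blend_weight(y):
    """Mixing weight driven by the estimated excess kurtosis:
    -> 1 for super-Gaussian outputs, -> 0 for sub-Gaussian ones."""
    z = (y - y.mean()) / y.std()
    k4 = np.mean(z ** 4) - 3.0
    return 1.0 / (1.0 + np.exp(-k4))    # illustrative logistic control

def adaptive_score(y):
    """f(y) as a kurtosis-weighted blend of tanh(y) and y**3."""
    a = blend_weight(y)
    return a * np.tanh(y) + (1 - a) * y ** 3

rng = np.random.default_rng(4)
a_super = blend_weight(rng.laplace(size=50000))   # near the tanh regime
a_sub = blend_weight(rng.uniform(-1, 1, 50000))   # near the cubic regime
```

The mixture-Gaussian method proposed in the paper replaces this single scalar control with a full pdf estimate, from which $f(y)=-p'(y)/p(y)$ is computed directly.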

This paper deals with radar detection and identification problems. Nowadays, to improve radar detection capability, engineers use high-resolution methods (e.g. ESPRIT or MUSIC). Recently, some methods based on Higher-Order Statistics (HOS) have been used for the same purpose. Here, a comparison among the different methods is proposed. In addition, the application of ICA algorithms in this field is discussed.

In this paper, we propose recursive methods to obtain a separating matrix based on mutual information, via the simultaneous perturbation optimization method. The simultaneous perturbation method updates the separating matrix using only two evaluations of the mutual information. Thus, the probability densities of the source signals, which are required by ordinary gradient methods, are not needed. A simple example is shown to confirm the feasibility of the proposed methods.
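The simultaneous-perturbation idea — a gradient estimate from just two cost evaluations, regardless of the dimension of the separating matrix — can be sketched on a surrogate cost (a simple quadratic stands in for the mutual information; the step sizes are illustrative):

```python
import numpy as np

def spsa_step(W, cost, a=0.05, c=0.1, rng=None):
    """One simultaneous-perturbation step: estimate the gradient of
    `cost` from only two evaluations, at W + c*Delta and W - c*Delta."""
    rng = rng or np.random.default_rng()
    delta = rng.choice([-1.0, 1.0], size=W.shape)   # Bernoulli +/-1 directions
    g_hat = (cost(W + c * delta) - cost(W - c * delta)) / (2 * c) * (1 / delta)
    return W - a * g_hat

# surrogate cost: distance to a target matrix (stand-in for mutual information)
target = np.array([[2.0, 0.0], [0.0, 1.0]])
cost = lambda W: np.sum((W - target) ** 2)
W = np.zeros((2, 2))
rng = np.random.default_rng(5)
for _ in range(500):
    W = spsa_step(W, cost, rng=rng)
```

In the paper's setting, `cost` would be the mutual information of the separated outputs, so no source densities or analytic gradients are needed.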

This paper presents a new method for Blind Source Separation (BSS) based on dual adaptive control, which allows successful separation of linear mixtures of independent source signals. The method reformulates the BSS problem as a dual adaptive control problem. A sigmoid MLP neural network is then used to approximate the widesense-mixing matrix defined in the BSS problem. By solving the dual adaptive control problem, in which the unknown parameters of the neural network are estimated with the Extended Kalman Filter, we obtain the widesense-mixing matrix. Experimental results show that individual source signals can be separated effectively from known linear mixture signals using this method, with fast convergence and good performance.

In many data analysis problems it is useful to consider the data as generated from a set of unknown (latent) generators or sources. The observations we make of a system are then taken to be related to these sources through some unknown function. Furthermore, the (unknown) number of underlying latent sources may be less than the number of observations. Recent developments in Independent Component Analysis (ICA) have shown that, in the case where the unknown function linking sources to observations is linear, such data decomposition may be achieved in a mathematically elegant manner. In this paper we extend the general ICA paradigm to include a very flexible source model, prior constraints and conditioning on sets of intermediate variables so that ICA forms one part of a hierarchical system. We show that such an approach allows for efficient discovery of hidden representation in data and for unsupervised data partitioning.

An algorithm for the blind separation of two instantaneously mixed sources based on decorrelation is presented, and its convergence properties are analyzed qualitatively and quantitatively. The algorithm is suitable for both stationary and nonstationary, super-Gaussian and sub-Gaussian signal separation. It has the advantages of low computational complexity, fast convergence, independence of the probability distribution, noise robustness, and numerical stability. Numerical experiments illustrate the validity of the algorithm.

In this paper a multivariate contrast function is proposed for the blind signal extraction of a subset of the independent components from a linear mixture. This contrast combines the robustness of the joint approximate diagonalization techniques with the flexibility of the methods for blind signal extraction. Its maximization leads to hierarchical and simultaneous ICA extraction algorithms which are respectively based on the thin QR and thin SVD factorizations. The interesting similarities and differences with other existing contrasts and algorithms are discussed.

We present a new approach to approximate joint diagonalization of a set of matrices. The main advantages of our method are computational efficiency and generality. We develop an iterative procedure, called LSDIAG, which is based on multiplicative updates and on linear least-squares optimization. The efficiency of our algorithm is achieved by the first-order approximation of the matrices being diagonalized. Numerical simulations demonstrate the usefulness of the method in general, and in particular, its capability to perform blind source separation without requiring the usual pre-whitening of the data.
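The criterion minimized in approximate joint diagonalization can be written down directly; the sketch below builds a jointly diagonalizable set and checks that the true unmixing matrix zeroes the off-diagonal cost (the construction is illustrative and is not the LSDIAG multiplicative update itself):

```python
import numpy as np

def off_cost(V, Ms):
    """Sum of squared off-diagonal entries of V M V^T over the set Ms --
    the quantity an approximate joint diagonalizer drives to zero."""
    total = 0.0
    for M in Ms:
        D = V @ M @ V.T
        total += np.sum(D ** 2) - np.sum(np.diag(D) ** 2)
    return total

rng = np.random.default_rng(7)
B = rng.standard_normal((3, 3))            # common (non-orthogonal) mixing
Ms = [B @ np.diag(rng.uniform(1, 2, 3)) @ B.T for _ in range(5)]
V = np.linalg.inv(B)                       # exact joint diagonalizer
```

Note that because B is not orthogonal, no prewhitening step is assumed here, mirroring the paper's point that the method works on non-whitened data.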

Many estimators in the literature of blind source separation can be considered as estimators derived through the framework of maximum likelihood estimation with various choices of density functions. In other words, they are the minimizers of the Kullback-Leibler divergence between the empirical distribution and a certain form of density function. Unfortunately, this type of estimator is not B-robust, that is, it can be easily affected by outliers. Minami and Eguchi (2002) proposed a robust estimator for blind source separation based on the $\beta$-divergence, which we call the minimum $\beta$-divergence estimator. It was shown that the estimator is locally consistent and B-robust, and the necessary and sufficient condition for asymptotic stability was given. The tuning parameter $\beta$ plays a key role in the robustness of the minimum $\beta$-divergence estimator: the larger $\beta$ is, the more robust the corresponding estimator. However, too large a $\beta$ might produce a less efficient estimator. We propose a procedure for selecting the tuning parameter $\beta$ for the minimum $\beta$-divergence estimator.

We propose a nonparametric independent component analysis (ICA) algorithm for the problem of blind source separation with instantaneous, time-invariant and linear mixtures. Our Init-NLE algorithm combines minimization of correlation among nonlinear expansions of the output signals with a good initialization derived from a search guided by statistical tests for independence based on the power-divergence family of test statistics. Such initialization is critical to reliable separation. The simulation results obtained from both synthetic and real-life data show that our method yields consistent results and compares favorably to existing ICA algorithms.

In this paper, we describe the relationship between independent component analysis (ICA) and the Helmholtz machine, a learning machine. From the viewpoint of learning method, both techniques are classified as unsupervised learning. We derive an algorithm for ICA on the basis of the scheme of the machine. Our algorithm is given by the sum of the conventional learning rule and a new term, and is robust against additive noise at the observation: the new term plays the role of making the effect of the noise as small as possible. We demonstrate this robustness by computer simulation.

We investigate the assumption that sources have disjoint support in the time domain, time-frequency domain, or frequency domain. We call such signals disjoint orthogonal. The class of signals that approximately satisfies this assumption includes many synthetic signals, music and speech, as well as some biological signals. We measure the disjoint orthogonality of the benchmark signals in the ICALAB Toolbox in the time, time-frequency, and frequency domains and show that most satisfy the assumption in at least one representation. In order to compare this assumption with other common source assumptions, we derive a demixing algorithm for noisy instantaneous mixtures based on disjoint orthogonality and compare its performance to the algorithms in the ICALAB Toolbox, all of which rely on the second-order statistics, non-stationarity, or higher-order statistics of the sources. The results indicate that space-time-frequency diversity is a useful assumption for the design of BSS/ICA algorithms.
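A crude frequency-domain disjointness score can make the assumption concrete. This is our own illustrative measure (normalized overlap of magnitude spectra), not the measure used with the ICALAB benchmark signals, and the test signals are synthetic.

```python
import numpy as np

def disjointness(a, b):
    """Crude disjoint-orthogonality score in the frequency domain:
    1 minus the normalized inner product of the magnitude spectra.
    Near 1.0 means (almost) disjoint spectral supports; near 0.0
    means identical supports.  Illustrative measure, not ICALAB's."""
    A, B = np.abs(np.fft.rfft(a)), np.abs(np.fft.rfft(b))
    return 1.0 - (A @ B) / (np.linalg.norm(A) * np.linalg.norm(B))

rng = np.random.default_rng(3)
n = 4096
t = np.arange(n)
low = np.sin(2 * np.pi * 0.01 * t)       # narrowband, low frequency
high = np.sin(2 * np.pi * 0.25 * t)      # narrowband, high frequency
noise1, noise2 = rng.standard_normal((2, n))   # broadband: supports overlap
```

Two narrowband tones in different bands score close to 1 (nearly disjoint in frequency), while two white-noise signals score low, since their spectral supports fully overlap.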

We propose subband-based blind source separation (BSS) for convolutive mixtures of speech. This is motivated by the drawback of frequency-domain BSS, i.e., when a long frame with a fixed frame-shift is used for a few seconds of speech, the number of samples in each frequency bin decreases and the separation performance is degraded. In our proposed subband BSS, (1) by using a moderate number of subbands, a sufficient number of samples can be held in each subband, and (2) by using FIR filters in each subband, we can handle long reverberation. Subband BSS achieves better performance than frequency-domain BSS. Moreover, subband BSS allows us to select the separation method suited to each subband. Using this advantage, we propose efficient separation procedures that take the frequency characteristics of room reverberation and speech signals into consideration, (3) by using longer unmixing filters in low frequency bands, and (4) by adopting overlap-blockshift in BSS's batch adaptation in low frequency bands. Consequently, subband processing appropriate for each frequency bin is successfully realized with the proposed subband BSS.

This paper presents a robust and precise method for solving the permutation problem of frequency-domain blind source separation. It is based on two previous approaches: direction-of-arrival estimation and inter-frequency correlation. We discuss the advantages and disadvantages of the two approaches, and integrate them to exploit their respective advantages. We also present a closed-form formula to estimate the directions of source signals from a separating matrix obtained by ICA. Experimental results show that our method solved permutation problems almost perfectly in a situation where two sources were mixed in a room whose reverberation time was 300 ms.
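For the far-field two-microphone case, the closed-form DOA step can be sketched as follows. The steering-vector model, microphone spacing and frequency are our own illustrative assumptions, not the paper's exact setup; here the "mixing vector" is constructed synthetically rather than obtained from an ICA separating matrix.

```python
import numpy as np

c, d, f = 340.0, 0.04, 1000.0            # sound speed (m/s), mic spacing (m), bin frequency (Hz)
theta_true = np.deg2rad(25.0)

# far-field steering vector for a two-microphone array (assumed model)
a = np.array([1.0, np.exp(-2j * np.pi * f * d * np.sin(theta_true) / c)])

# closed-form DOA from the phase of the ratio of mixing-vector elements,
# in the spirit of the frequency-domain approach described above
theta_est = np.arcsin(-np.angle(a[1] / a[0]) * c / (2 * np.pi * f * d))
```

With these parameters the inter-microphone phase stays below pi, so there is no spatial aliasing and the angle is recovered exactly.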

BSS performance is still not sufficient for realistic acoustic signals, particularly when the impulse responses are long. We propose to use a complete model expressed in the frequency domain that is exactly equivalent to a linear convolution in the time domain. This model is applied to acoustic responses by segmenting the overall impulse responses into K short segments. The data are then transformed, for each frequency bin, into convolutive mixtures of K taps. Finally, the separation is achieved with a natural gradient algorithm based on a maximum-entropy cost function. The interest of this complete model lies in combining short Fourier transforms (of N samples) with convolutive mixtures of few taps K: the resulting time-domain impulse responses are K·N samples long and can thus fit long real acoustic responses.

In this paper we propose and investigate a recursive method for blind source separation (BSS) and/or independent component analysis (ICA). The relation of this recursive method to the conventional gradient-based method is quite similar to the relation of the RLS method to the LMS method in adaptive filtering. Based on this method we present a novel algorithm for real-time blind source separation of convolutive mixtures. When we apply the algorithm to acoustic signals, simulations show a superior rate of convergence over its gradient-based counterpart. By applying the algorithm in a real-time BSS system for realistic acoustic signals, we also present experiments illustrating its effectiveness and validity.

We propose a new algorithm for blind source separation (BSS), in which frequency-domain independent component analysis (FDICA) and time-domain ICA (TDICA) are combined to achieve a superior source-separation performance under reverberant conditions. Generally speaking, conventional TDICA fails to separate source signals under heavily reverberant conditions because of slow convergence in the iterative learning of the separation system. On the other hand, the separation performance of conventional FDICA also degrades seriously because the independence assumption for narrow-band signals collapses when the number of subbands increases. In the proposed method, the separated signals of FDICA are regarded as the input signals for TDICA, and we can remove the residual crosstalk components of FDICA by using TDICA. The experimental results obtained under reverberant conditions reveal that the separation performance of the proposed method is superior to those of TDICA- and FDICA-based BSS methods.

This paper presents a technique for extracting multiple source signals when only a single channel observation is available. The proposed separation algorithm is based on a subspace decomposition. The observation is projected onto subspaces of interest with different sets of basis functions, and the original sources are obtained by weighted sums of the projections. A flexible model for density estimation allows an accurate modeling of the distributions of the source signals in the subspaces, and we develop a filtering technique using a maximum likelihood (ML) approach to match the observed single channel data with the decomposition. Our experimental results show good separation performance on simulated mixtures of two music signals as well as two voice signals.

It is important to reduce noise in MEG measurements, since the signal-to-noise ratio is smaller than 1.0 even in a magnetically shielded room. ICA is a powerful tool for noise reduction in MEG measurements, and we have applied it to various MEG data. Using ICA, we can remove the cardiac field, power-line noise and other noise from MEG data; we have also succeeded in extracting the auditory evoked field from non-averaged MEG data. ICA produces many independent components from MEG, but their classification into relevant and irrelevant components usually depends largely on subjective judgement. We propose a criterion, based on the signal subspace obtained from the averaged response, for judging which of the obtained independent components comprise MEG components, and in particular the evoked response. This method often worked effectively to reconstruct single evoked responses on the basis of the objective criterion. Although many problems remain, the application of ICA to MEG data should be studied further, because `noninvasive' study of brain activities intrinsically implies `blind' separation of activities.

This paper describes two efficient realizations of an adaptive multichannel blind deconvolution algorithm based on the natural gradient algorithm originally proposed by Amari, Douglas, Cichocki, and Yang. The proposed algorithms use fast convolution and correlation techniques and operate primarily in the frequency domain. Since the cost function minimized by the algorithms is well-defined in the time domain, the algorithms do not suffer from the so-called frequency-domain permutation problem. The proposed algorithm can be viewed as a multichannel extension of a single-channel blind deconvolution algorithm recently proposed by the authors.

We propose a novel two-stage blind separation and deconvolution (BSD) algorithm for a real convolutive mixture of temporally correlated signals, in which a new Single-Input Multiple-Output (SIMO)-model-based ICA (SIMO-ICA) and blind multichannel inverse filtering are combined. SIMO-ICA consists of multiple ICAs and a fidelity controller, and each ICA runs in parallel under fidelity control of the entire separation system. SIMO-ICA can separate the mixed signals, not into monaural source signals but into SIMO-model-based signals from independent sources as they are at the microphones. Thus, the separated signals of SIMO-ICA can maintain the spatial qualities of each sound source. After the separation by SIMO-ICA, a simple blind deconvolution technique based on multichannel inverse filtering for the SIMO model can be applied even when the mixing system is a nonminimum-phase system and each source signal is temporally correlated. The experimental results obtained under reverberant conditions reveal that the sound quality of the separated signals in the proposed method is superior to that in the conventional ICA-based BSD.

In this paper, we present an algorithm for the problem of multi-channel blind deconvolution which can adapt to unknown sources with both sub-Gaussian and super-Gaussian probability densities using a generalized exponential source model. We use a state space representation to model the mixer and demixer respectively, and show how the parameters of the demixer can be adapted using a gradient descent algorithm incorporating the natural gradient extension. We also present a learning method for the unknown parameters of the generalized exponential source model. The performance of the proposed generalized exponential source model on a typical example is compared with those of two other algorithms, viz., the learning algorithm with a fixed nonlinearity, without any regard to the underlying probability distribution of sources, and the switching nonlinearity algorithm proposed by Lee et al.

The present paper proposes a class of iterative deflation algorithms to solve the blind source-factor separation for the outputs of multiple-input multiple-output finite impulse response (MIMO-FIR) channels. Using one of the proposed deflation algorithms, filtered versions of the source signals, each of which is the contribution of one source signal to the outputs of the MIMO-FIR channels, are extracted one by one from the mixtures of source signals. The proposed deflation algorithms can be applied to various sources, that is, i.i.d. signals, second-order white but higher-order colored signals, and second-order correlated (non-white) signals; we therefore refer to them as generalized deflation algorithms (GDAs). Conventional deflation algorithms were each proposed for only one of the source types mentioned above, that is, a class of deflation algorithms able to deal with all of these source signals had not been proposed until now. Some simulation results are presented to show the validity of the proposed algorithms.

A binaural blind source separation algorithm for noisy mixtures is proposed. We consider slowly time-varying noise signals and nonstationary source signals. The proposed method combines signal estimation from noisy observations with source identification through mixing-parameter estimation. A minimum mean square error estimator in the frequency domain is implemented to estimate the signal spectra from noisy observations, and the K-means clustering algorithm is used to identify the sources. By calculating the signal absence probability for each frequency bin, noise is effectively eliminated from the source signals and mixing-parameter estimation becomes more accurate in noisy environments. The assumption of source sparseness enables this noisy, underdetermined, binaural blind source separation.

A novel stereophonic noise reduction method is proposed based upon a combination of cascaded subspace filters, with delay and advancing elements alternately inserted between the adjacent cascading stages, and two-channel adaptive signal enhancers. Simulation results based upon real stereophonic speech contaminated by two correlated noise components show that the proposed method gives improved enhancement quality, as compared to conventional nonlinear spectral subtraction approaches, in terms of both segmental gain and cepstral distance performance indices.

This paper describes a fixed-point independent component analysis (ICA) algorithm in combination with the null beamforming technique to sieve out speech signals from their convolutive mixture observed using a linear microphone array. The fixed-point algorithm shows fast convergence to the solution; however, it is highly sensitive to the initial value from which the iteration starts. A good initial value leads to faster convergence and yields better results. We propose the use of a null beamformer-based initial value for the iteration and explore its effects on separation performance under different acoustic conditions by examining the noise reduction rate (NRR) and convergence speed. The simulation results confirm the efficacy and accuracy of the proposed algorithm.

This article investigates a theoretical basis for estimating autoregressive (AR) processes for linear-recurrent signals in convolutive mixtures. Whitening of such signals is sometimes a problem in blind source separation, or equivalently multichannel blind equalization, which is intended to extract the original signals even if the signals are of a convolutive mixture type. This whitening is due to inverse-filtering which deconvolves the AR process that generates the linear-recurrent signals. To avoid this excessive deconvolution, it is effective to remove the contributions of such processes from the inverse-filters. Unfortunately, no method seems to have been able to extract the AR parameter-sets for respective signals included in convolutive mixtures.

Convolution is a linear operation, and, consequently, can be formulated as a linear system of equations. If only the output of the system (the convolved signal) is known, then the problem is blind: given one equation, two unknowns are sought. Here, the blind deconvolution problem is solved using independent component analysis (ICA). To facilitate this, several time-lagged versions of the convolved signal are extracted and used to construct realizations of a random vector. For ICA, this random vector is the so-called mixture vector, created by the matrix-vector multiplication of the two unknowns, the mixing matrix and the source vector. Due to the properties of convolution, the mixing matrix is banded, with its nonzero elements containing the convolution's filter. This banded property is incorporated into the ICA algorithm as prior information, giving rise to a banded ICA algorithm (B-ICA) which is, in turn, used in a new blind deconvolution method. B-ICA produces as many independent components as the dimension of the filter; whereas for blind deconvolution, only one signal is sought (the deconvolved signal). Fortunately, the convolutional model provides additional information which enables one best independent component to be extracted from the pool of candidate solutions. This, in turn, yields estimates of both the filter and the deconvolved signal.
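The lagged construction and the banded mixing matrix can be verified numerically. The sketch below only checks the model structure (with an arbitrary example filter, assumed known here for verification), not the B-ICA estimation itself.

```python
import numpy as np

h = np.array([0.8, -0.5, 0.3])            # example filter (assumed, for illustration)
L = len(h)
n = 200
s = np.random.default_rng(2).standard_normal(n)
x = np.convolve(s, h)[:n]                 # observed (convolved) signal

# stack L time-lagged copies of x: each column is one mixture realization
T = np.arange(2 * L - 2, n)               # start late enough that all lags exist
X = np.stack([x[T - k] for k in range(L)])

# the corresponding "mixing matrix" is banded, each row holding the filter taps
H = np.zeros((L, 2 * L - 1))
for i in range(L):
    H[i, i:i + L] = h

# lagged source vector; the banded linear model reproduces the lagged mixtures
S = np.stack([s[T - k] for k in range(2 * L - 1)])
assert np.allclose(X, H @ S)
```

Each row i of H carries the same taps shifted by one column, which is exactly the banded structure that B-ICA exploits as prior information.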

Time-frequency-domain blind source separation (BSS) faces an important problem: the independence assumption between source signals generally collapses in the frequency domain owing to inadequate samples, which degrades the performance of all ICA-based BSS methods. To remedy this defect, we propose introducing beamforming into the conventional BSS system, taking advantage of the fact that null beamforming depends not on the independence assumption but only on the estimation of the directions of arrival (DOA). We set up a criterion on the performance of separation, which is used to compare the separation results of ICA and beamforming and to select whichever result is judged better. The separations at certain bins are greatly improved, which results in a better overall separation.

This paper presents adaptive noise canceling (ANC) using convolved reference noise. In many practical ANC applications, the reference noise has channel distortion, which may degrade its performance. If the distortion includes nonminimum-phase parts, the inverse cannot be implemented as a causal filter. Therefore, the conventional ANC system may not provide a satisfactory performance. In this paper, we propose the delayed ANC system to deal with the problem, and derive learning rules for the adaptive FIR and IIR filter coefficients based on independent component analysis (ICA). Simulation results show that the proposed algorithms are adequate for the considered situations.

Oriented PCA (OPCA) extends standard Principal Component Analysis by maximizing the power ratio of a pair of signals rather than the power of a single signal. We show that OPCA in combination with almost arbitrary temporal filtering can be used for the blind separation of linear instantaneous mixtures. Although the method works for almost any filter, the design of the optimal temporal filter is also discussed for filters of length two. Compared to other SOS methods this approach avoids the spatial prewhitening step. Further, it achieves similar or better performance since it can combine several time lags for the estimation of the mixing parameters. Finally, the method is a batch approach that does not require an iterative solution.
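A minimal sketch of the idea, under our own illustrative assumptions: a length-two temporal filter, synthetic sources with different spectra, and an OPCA step realized as a generalized eigenproblem between the covariance of the filtered mixtures and that of the raw mixtures. This is not the paper's exact algorithm or filter design.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
# two sources with different temporal structure (hence different spectra)
s1 = np.convolve(rng.standard_normal(n), np.ones(5) / 5, mode="same")
s2 = rng.standard_normal(n)
S = np.vstack([s1, s2])
A = np.array([[1.0, 0.6], [0.4, 1.0]])   # hypothetical mixing matrix
X = A @ S

# "pair of signals": the mixtures and a temporally filtered copy,
# using the length-two filter y[t] = x[t] + x[t-1]
Y = X[:, 1:] + X[:, :-1]

Cx = np.cov(X)
Cy = np.cov(Y)
# OPCA: maximize power ratio -> generalized eigenproblem Cy v = lambda Cx v;
# no separate spatial prewhitening step is needed
eigvals, V = np.linalg.eig(np.linalg.solve(Cx, Cy))
est = (V.T @ X).real                      # recovered sources, up to scale/order
```

Because the two sources have different lag-one autocorrelations, the generalized eigenvalues are distinct and the eigenvectors recover the demixing directions.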

Different approaches have been suggested in recent years to the blind source separation problem, in which a set of signals is recovered out of its instantaneous linear mixture. Many widely-used algorithms are based on second-order statistics, and some of these algorithms are based on time-frequency analysis. In this paper we set a general framework for this family of second-order statistics based algorithms, and identify some of these algorithms as special cases in that framework. We further suggest a new algorithm that is based on the fractional Fourier transform (FRT), and is suited to handle non-stationary signals. The FRT is a tool widely used in time-frequency analysis and therefore occupies an important place in the signal-processing field. In contrast to other blind source separation algorithms suited for the non-stationary case, our algorithm has two major advantages: it does not require the assumption that the signals' powers vary over time, and it does not require a pre-processing stage for selecting the points in the time-frequency plane to be considered. We demonstrate the performance of the algorithm using simulation results.

The idea that the model probability density function (pdf) is learned together with the de-mixing matrix W in independent component analysis (ICA), first proposed by Xu et al. (1996), was adopted in the learned parametric mixture based ICA (LPM-ICA) algorithm, but theoretical guidance on how to design the learnable density model has been lacking. Recently, the ICA one-bit-matching theorem, which states that all the sources can be separated as long as there is a one-to-one same-sign correspondence between the kurtosis signs of all source pdfs and the kurtosis signs of all model pdfs, was proved under the assumption of zero skewness. In this paper, we propose a simplified LPM-ICA algorithm based on this theorem. Compared with the original algorithm, which adopts a mixture density as the model pdf, the simplified LPM-ICA has the advantage of improved computational efficiency, using only one free parameter for each model pdf.

The use of adaptive noise cancellers (ANCs) to reduce the noise level prior to source separation is investigated in this paper. We address in particular the foetal electrocardiogram (ECG) extraction problem, which, in addition to noise, is compounded by the non-stationary nature of the measurements. Computer simulations show that the combined Kalman filter and natural gradient algorithm [1], cascaded with a parallel ANC network, leads to a technique that can significantly improve separation performance. Moreover, it is shown that in some cases the performance of the method is better than that of the JADE algorithm [2].

In this paper we first estimate the delay times, or the distances between receivers and sound sources, as well as their power spectra. This method enables us to find the sound source positions or the directions of the arriving sounds. We then construct an inverse filter for the real mixed signals to reproduce high-quality separated signals, because the usual complex ICA requires synthesizing all frequency components of the separated signals. Finally, through simulation, we demonstrate high-quality blind separation as well as the effectiveness of the estimation of the source parameters.

This paper proposes a very simple method for increasing the algorithm speed for separating sources from PNL mixtures or inverting Wiener systems. The method is based on a pertinent initialization of the inverse system, whose computational cost is very low. The nonlinear part is roughly approximated by pushing the observations to be Gaussian; this method provides a surprisingly good approximation even when the basic assumption is not fully satisfied. The linear part is initialized so that the outputs are decorrelated. Experiments show an impressive speed improvement.
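The Gaussianizing initialization of the nonlinear part can be sketched with a rank-based transform: map each sample through the empirical CDF and then through the inverse standard-normal CDF. The cubic distortion and the signal below are hypothetical examples, not the paper's experiments.

```python
import numpy as np
from statistics import NormalDist

def gaussianize(x):
    """Rank-based Gaussianization.  Since the latent linear mixture is
    (roughly) Gaussian, this transform roughly inverts the unknown
    invertible nonlinearity of a post-nonlinear mixture channel."""
    n = len(x)
    ranks = np.argsort(np.argsort(x))          # 0 .. n-1
    u = (ranks + 0.5) / n                      # empirical CDF values in (0, 1)
    nd = NormalDist()
    return np.array([nd.inv_cdf(p) for p in u])

rng = np.random.default_rng(1)
# hypothetical latent mixture (sum of uniforms, already close to Gaussian)
mix = rng.uniform(-1, 1, 2000) + rng.uniform(-1, 1, 2000)
obs = mix ** 3                                 # unknown invertible nonlinearity
init = gaussianize(obs)                        # initialization of the inverse
```

The transform is strictly increasing in the sample ranks, so it preserves the ordering of the observations while producing an approximately standard-normal output.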

A novel online algorithm for Blind Source Extraction (BSE) of instantaneous signal mixtures is proposed. The algorithm is derived for a structure comprising a demixing stage and an adjacent adaptive predictor, whereby signal extraction is based upon the predictability of an unknown source signal. This way, the coefficients of the demixing matrix and adaptive predictor are estimated simultaneously. To improve the convergence and to be able to cope with the dynamics of the mixture, the algorithm is further normalised based upon the minimisation of the a posteriori prediction error. This makes it also suitable for environments where the mixing process is time varying. No assumptions or constraints on the norm of the coefficients or signals are required. Simulations on mixtures of real world physiological data for both the fixed and time varying mixing matrix support the analysis.

Blind source separation (BSS) based on spatial time-frequency distributions (STFDs) provides improved performance over blind source separation methods based on second-order statistics, when dealing with signals that are localizable in the time-frequency (t-f) domain. In this paper, we introduce a simple method for autoterm and crossterm selection, and propose the use of STFD matrices for both pre-whitening and mixing matrix recovery. T-f grouping is also proposed for improved blind separation of nonstationary signals. This method provides robust performance to noise and allows reduction of the number of sources considered for separation.

In this contribution we generalize some links between contrast functions and the use of a reference signal in source separation. This yields a new contrast that allows us to show that a function proposed in [4] is also a contrast, and frees us from the constraints on the introduced reference signal [1]. The associated optimization criterion is shown to be closely related to a criterion for the joint diagonalization of a set of matrices; moreover, the algorithm is of the same kind as the JADE algorithm. In addition, the number of matrices to be jointly diagonalized is reduced relative to the SSARS algorithm [4] and to JADE [2]. Simulation studies show that the convergence properties of the new contrast, even in a real environment, are much improved over those of the conventional algorithms.

This paper shows the possibility of blindly separating instantaneous mixtures of sources by means of a criterion exploiting order statistics. Properties of higher-order statistics and second-order methods are first underlined. A brief description of order statistics then shows that they gather all these properties, and a new criterion is proposed. Next, an iterative algorithm able to extract all the sources simultaneously is developed. The last part is a comparison of this algorithm with well-known methods (JADE and SOBI). The most striking result is the possibility of exploiting independence and correlation together by means of order statistics.

We consider a deterministic approach to the noise-free blind image separation and deconvolution problem with positivity constraints. These constraints are necessary because in some real-world applications (telescope images in astronomy, remotely sensed images, etc.) the pixel values correspond to intensities and must be positive. The mixing matrix itself must also be positive if, for example, it represents the point spread function of an imaging system in astronomy or the spectral reflectance matrix in remote sensing. In related papers, the blind source separation (BSS) problem with positivity constraints is solved using a probabilistic approach that assumes independence between the sources and requires the use of all the pixel data; an implicit assumption of this approach is that the unknown mixing matrix is space invariant. Here we propose a deterministic solution that solves the problem on a pixel-by-pixel basis; consequently, the algorithm is capable of solving space-variant problems. This is accomplished by minimizing a contrast function based on the second law of thermodynamics, the Helmholtz free energy. The formulation of our algorithm is equivalent to the MaxEnt formulation of the supervised separation problem, with the essential difference that the mixing matrix is unknown in our case. We demonstrate the algorithm's ability to recover images perfectly from a synthetic noise-free linear mixture of two images.

This paper deals with the blind separation and reconstruction of source images from mixtures with unknown coefficients, in the presence of noise. We address the blind source separation problem within the ICA approach, i.e. assuming the statistical independence of the sources, and reformulate it in a Bayesian estimation framework. In this way, the flexibility of the Bayesian formulation in accounting for prior knowledge can be exploited to describe correlation within the individual source images, through the use of suitable Gibbs priors. We propose a MAP estimation method and derive a general algorithm for recovering both the mixing matrix and the sources, based on alternating maximization within a simulated annealing scheme. We experimented with this scheme on both synthetic and real images, and found that a source model accounting for correlation is able to increase robustness against noise.

In this paper, a new algorithm for blind inversion of Wiener systems is presented. The algorithm is based on minimization of mutual information of the output samples. This minimization is done through a Minimization-Projection (MP) approach, using a nonparametric ``gradient'' of mutual information.

Modeling of multimedia and multimodal data becomes increasingly important with the digitalization of the world. The objective of this paper is to demonstrate the potential of independent component analysis and blind source separation methods for modeling and understanding of multimedia data, which largely refers to text, images/video, audio and combinations of such data. We review a number of applications within single and combined media with the hope that this might provide inspiration for further research in this area. Finally, we provide a detailed presentation of our own recent work on modeling combined text/image data for the purpose of cross-media retrieval.

We use kernel Canonical Correlation Analysis to learn a semantic representation of Web images and their associated text. This representation is used in two applications. In the first application we consider classification of images into one of three categories. We use an SVM in the semantic space and compare against an SVM on raw data and against previously published results using ICA. In the second application we retrieve images based only on their content from a text query. The semantic space provides a common representation and enables a comparison between the text and the image. We compare against a standard cross-representation retrieval technique known as the Generalised Vector Space Model.

Search by similarity in sound effects is needed for musical authoring and search by content (MPEG-7) applications. Sounds such as outdoor ambience, machine noises, speech or musical excerpts as well as many other man-made sound effects (so-called Foley sounds) are complex signals that have a well-perceived acoustic characteristic of some random nature. In many cases, these signals cannot be sufficiently represented based on second-order statistics only and require higher-order statistics for their characterization. Several methods for statistical modeling of such sounds have been proposed in the literature: non-Gaussian linear and non-linear source-filter models using HOS, optimal basis / sparse geometrical representations using ICA, and methods that combine ICA-based features with temporal modeling (HMM). In this paper we review several such approaches and evaluate them in the context of multimedia sound retrieval.

This paper presents a methodology for extracting meaningful audio/visual features from video streams. We propose a statistical method that does not distinguish between the auditory and visual data, but one that operates on a fused data set. By doing so we discover audio/visual features that correspond to events depicted in the stream. Using these features, we can obtain a segmentation of the input video stream by separating independent auditory and visual events.

We propose a preliminary step towards the construction of a global evaluation framework for Blind Audio Source Separation (BASS) algorithms. BASS covers many potential applications that involve a more restricted number of tasks. An algorithm may perform well on some tasks and poorly on others. Various factors affect the difficulty of each task and the criteria that should be used to assess the performance of algorithms that try to address it. Thus a typology of BASS tasks would greatly help the building of an evaluation framework. We describe some typical BASS applications and propose some qualitative criteria to evaluate separation in each case. We then list some of the tasks to be accomplished and present a possible classification scheme.

Speech enhancement is a technique required to ensure the success of speech recognition systems working under strong noise conditions, and to ensure intelligibility in speech transmission and coding. Array beamforming has traditionally been used to improve the signal-to-noise ratio. Two-sensor systems based on First-Order Differential Beamformers (FODBs) have been proposed as a promising alternative [2]. Nevertheless, null beamformers alone are not sufficient to achieve adequate separation levels. In this paper, FODBs and Joint-Process Estimators (JPEs) are combined to achieve speech source separation. Results for superposed sinusoidal sources are presented.

Standardized wireless transmissions include fixed-size fragments per packet. Relying on this structure, two schemes for distributing redundancy across fragments are considered in this paper to enable blind identification without higher-order statistics of received signals. We prove that both schemes guarantee blind identifiability as well as symbol detectability and propose a subspace based blind identification method. We test their relative merits, and compare them with existing alternatives using simulations.

The most important use of a spread spectrum (SS) communication system is that of interference mitigation. In fact, a spread spectrum communication system has an inherent temporal interference mitigation capability, usually called a processing gain. This gain enables the system to work properly in many cases. At times, however, the interference can be too strong, or the requirements for the link quality are more stringent, so that additional interference mitigation is needed. In a cellular network the interference originating from the neighboring cells, called inter-cell interference, is one of the reasons for the need for additional interference mitigation capability in a receiver. In this paper we consider Independent Component Analysis (ICA) for the mitigation of inter-cell interference. Namely, we consider the use of ICA as an advanced pre-processing tool, which first mitigates the interference from the received array data and passes the residual signal for the conventional detection. Numerical experiments are given to evaluate the performance of two variants of the ICA-assisted receiver chains. They indicate clear performance gains in comparison to conventional detection without interference mitigation. In addition, the ICA-assisted receiver chains are robust against the fluctuations in the cells' load, which is an important feature in practice.

In this paper, a grouping method based on measuring the independence of decomposed signals is discussed. In the proposed method, the closeness to independence between decomposed components is calculated using test statistics. The relevance of these ideas is illustrated with synthesized data and with MEG data.

When independent component analysis (ICA) is applied to real data, the source signals as well as the mixing matrix are unknown to the user. In such cases, testing for mutual independence of the estimated source signals is of vital importance. In this paper, we illustrate and discuss how testing the mutual independence of estimated source signals can be done in the context of testing multivariate uniformity.

Given an arbitrary standardized (zero mean and unit variance) probability density, we measure its departure from the standard normal density by the {$L^2$} distance between the two density functions. In particular, we consider three different {$L^2$} norms, each distinguished by its weight function. We investigate the reciprocal Gaussian, uniform, and Gaussian weight functions, and present the respective Hermite series representations of non-Gaussianity. We show that the {$L^2$} metric defined with the reciprocal Gaussian weight is directly related to the moment-based approximation of differential entropy. We argue that this is a non-robust measure of non-Gaussianity, as the division by the Gaussian places heavy weight on the tails of the density. Improved robustness is achieved by using the uniform or Gaussian weight functions, both of which effectively suppress the sensitivity to outliers in the estimation. We choose the {$L^2$} Euclidean metric to define a measure of non-Gaussianity, and show how it leads to an {$L^2$} de-Gaussianization algorithm for independent component analysis.
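As an illustrative sketch (not the paper's implementation), the weighted L² departure from normality can be approximated numerically from samples; the grid, Silverman bandwidth, and Gaussian kernel density estimator below are assumptions:

```python
import numpy as np

def l2_nongaussianity(samples, weight="gaussian"):
    """Approximate the weighted L2 distance between the density of the
    standardized samples and the standard normal density phi, i.e. the
    integral of (p(x) - phi(x))^2 w(x) dx on a regular grid."""
    grid = np.linspace(-5.0, 5.0, 601)
    x = (samples - samples.mean()) / samples.std()      # standardize
    h = 1.06 * len(x) ** (-0.2)                         # Silverman bandwidth
    # Gaussian kernel density estimate evaluated on the grid
    p = np.exp(-0.5 * ((grid[:, None] - x[None, :]) / h) ** 2).sum(axis=1)
    p /= len(x) * h * np.sqrt(2.0 * np.pi)
    phi = np.exp(-0.5 * grid**2) / np.sqrt(2.0 * np.pi)
    # Gaussian weight suppresses tail sensitivity; uniform is the plain L2 metric
    w = phi if weight == "gaussian" else np.ones_like(grid)
    return ((p - phi) ** 2 * w).sum() * (grid[1] - grid[0])
```

A Gaussian sample should score near zero under this measure, while a heavier-tailed sample (e.g. Laplacian) scores clearly higher.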

In this paper, we address a few issues related to the evaluation of the performance of source separation algorithms. We propose several measures of distortion that take into account the gain indeterminacies of BSS algorithms. The total distortion includes interference from the other sources as well as noise and algorithmic artifacts, and we define performance criteria that measure separately these contributions. The criteria are valid even in the case of correlated sources. When the sources are estimated from a degenerate set of mixtures by applying a demixing matrix, we prove that there are upper bounds on the achievable Source to Interference Ratio. We propose these bounds as benchmarks to assess how well a (linear or nonlinear) BSS algorithm performs on a set of degenerate mixtures. We demonstrate on an example how to use these figures of merit to evaluate and compare the performance of BSS algorithms.
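A minimal sketch of one such gain-invariant criterion — in the spirit of the paper's measures, though not its exact definitions — computes a Source-to-Interference Ratio by projecting the estimate onto the true sources; the projection-based decomposition used here is an assumption:

```python
import numpy as np

def sir_db(est, sources, j=0):
    """Gain-invariant Source-to-Interference Ratio (dB) for an estimate of
    source j: the part of the estimate explained by the target source is
    compared with the part explained by the other true sources."""
    tgt = sources[j]
    s_target = (est @ tgt) / (tgt @ tgt) * tgt            # projection on target
    coef = np.linalg.lstsq(sources.T, est, rcond=None)[0] # projection on all sources
    e_interf = sources.T @ coef - s_target                # interference component
    return 10.0 * np.log10(np.sum(s_target**2) / np.sum(e_interf**2))
```

Scaling the estimate leaves the ratio unchanged, which is exactly the gain indeterminacy the paper's criteria are designed to absorb.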

Simulations are often needed when the performance of new methods is evaluated. If a method is designed to be blind or robust, simulation studies must cover the whole range of potential random input, so advanced data-generation tools are needed. The purpose of this paper is to introduce a technique for the generation of correlated multivariate random data with non-Gaussian marginal distributions. The output random variables are obtained as linear combinations of independent components. The covariance matrix and the first four moments of the output variables may be freely chosen. Moreover, the output variables may be filtered in order to add autocorrelation. The Extended Generalized Lambda Distribution (EGLD) is proposed as the distribution of the independent components. Examples demonstrate the diversity of data structures that can be generated.
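A minimal sketch of the core idea — independent non-Gaussian components mixed linearly to match a chosen covariance — using a centered exponential in place of the paper's EGLD components (that substitution, and the Cholesky mixing, are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 100_000, 3

# Independent, zero-mean, unit-variance non-Gaussian components
# (centered exponential as a stand-in for EGLD components).
z = rng.exponential(size=(n, d)) - 1.0

# Target covariance for the output variables.
C = np.array([[1.0, 0.6, 0.3],
              [0.6, 1.0, 0.5],
              [0.3, 0.5, 1.0]])

# Mix with the Cholesky factor: Cov(z @ L.T) = L Cov(z) L.T = C.
L = np.linalg.cholesky(C)
x = z @ L.T
```

The outputs then carry the prescribed covariance while keeping skewed, non-Gaussian marginals; autocorrelation could be added by filtering the columns of `x`.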

In this paper, we discuss approaches for blind source separation where more sensors than sources can be used for better performance. The discussion focuses mainly on reducing the dimension of the mixed signals before applying independent component analysis. We compare two previously proposed methods. The first is based on principal component analysis, which also achieves noise reduction. The second is based on geometric considerations and selects a subset of sensors, following the observation that low frequencies favor a wide sensor spacing while high frequencies favor a narrow spacing. We found that the PCA-based method behaves similarly to the geometry-based method at low frequencies, in that it emphasizes the outer sensors, and yields superior results at high frequencies. These results provide a better understanding of the former method.
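As a sketch of the PCA-based option, the sensor signals can be projected onto their dominant principal directions before ICA; the eigendecomposition route below is one standard way to do this, not necessarily the paper's exact procedure:

```python
import numpy as np

def pca_reduce(x, k):
    """Reduce m-sensor observations x (m x T) to the k dominant principal
    components before ICA; the discarded subspace carries mostly noise."""
    xc = x - x.mean(axis=1, keepdims=True)
    cov = xc @ xc.T / xc.shape[1]
    w, V = np.linalg.eigh(cov)      # eigenvalues in ascending order
    Vk = V[:, -k:]                  # k dominant directions
    return Vk.T @ xc                # k x T reduced data
```

With 4 sensors observing 2 sources plus weak noise, the two retained components capture nearly all the observed variance.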

In this paper we link the blind identification of a MIMO Moving Average (MA) system to the calculation of the Canonical Decomposition (CD) in multilinear algebra. This conceptually allows for the blind identification of systems that have many more inputs than outputs. We also derive a new theorem guaranteeing uniqueness of a high-rank CD and an algebraic algorithm for its computation.

By elucidating a parallel between canonical correlation analysis (CCA) and least squares regression (LSR), we show how regularization of CCA can be performed and interpreted in the same spirit as the regularization applied in ridge regression (RR). Furthermore, the results presented may have an impact on the practical use of regularized CCA (RCCA). More specifically, a relevant cross-validation cost function for training the regularization parameter naturally follows from the derivations.

Many algorithms based on information theoretic measures and/or temporal statistics of the signals have been proposed for ICA in the literature. There have also been analytical solutions suggested based on predictive modeling of the signals. In this paper, we show that finding an analytical solution for the ICA problem through solving a system of nonlinear equations is possible. We demonstrate that this solution is robust to decreasing sample size and measurement SNR. Nevertheless, finding the root of the nonlinear function proves to be a challenge. Besides the analytical solution approach, we try finding the solution using a least squares approach with the derived analytical equations. Monte Carlo simulations using the least squares approach are performed to investigate the effect of sample size and measurement noise on the performance.

Mixture modelling techniques such as Mixtures of Principal Component and Factor Analysers are very powerful in representing and segmenting Gaussian clusters in data. Meaningful segmentations may be lost, however, if these self-similar areas are non-Gaussian. For such data, an intuitive model is a Mixture of Independent Component Analysers. Such a model, however, ignores dynamics, both between clusters and within clusters. The former can be remedied by enforcing a Markov prior over the component mixture variables, leading to a Hidden Markov model (HMM) with ICA generators. The latter can be modelled if the source models of these ICA generators are themselves dynamic, for example by utilising HMM sources. HMMs are models for picking up dynamic changes of state in the underlying data generation process, and are therefore useful in capturing high-order temporal information. The proposed method is a piecewise approach to detecting dynamic movement, focussing on abrupt changes in the observation model and/or in the source model, while assuming static statistics in between. The hierarchical approach allows the analysis of signals which have macro- and micro-dynamics, such as stock indices.

Variational Bayesian learning is an approximation to exact Bayesian learning in which the true posterior is approximated with a simpler distribution. In this paper we present an on-line variant of variational Bayesian learning. The method is based on collecting likelihood information as the training samples are processed one at a time, and on decaying the old likelihood information. The decay, or forgetting, is very important, since otherwise the system would get stuck in the first reasonable solution it finds. The method is tested on a simple linear independent component analysis (ICA) problem, but it can easily be applied to other, more difficult problems.
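The decay of old likelihood information can be illustrated on a far simpler model than the paper's ICA setting — tracking the mean of a Gaussian with known variance by exponentially forgetting the accumulated sufficient statistics. The model and the decay scheme here are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

def track_mean(stream, decay=0.99):
    """On-line posterior-mean estimate (flat prior, known noise variance)
    where the sufficient statistics gathered from past samples are decayed
    before each new sample is absorbed, so old likelihood information is
    gradually forgotten."""
    s1 = s0 = 0.0
    means = []
    for y in stream:
        s1 = decay * s1 + y      # decayed sum of samples
        s0 = decay * s0 + 1.0    # decayed effective sample count
        means.append(s1 / s0)    # current posterior mean
    return np.array(means)
```

With `decay < 1` the estimator follows a jump in the true mean, whereas without forgetting (`decay = 1`) it stays stuck near the average over the whole history — the failure mode the abstract warns about.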

This paper seeks to incorporate temporal information into the Independent Component Analysis process by marrying ICA models to Hidden Markov Models (HMMs). HMMs are models for picking up dynamic changes of state in the underlying data generation process, and are therefore useful in capturing high-order temporal information. In previous work, we introduced Bayesian ICA with mixture of Gaussian sources learnt using variational methods. In such a model, HMM methodology can be incorporated by stipulating a Markov prior over the mixture coefficients in the variational Bayesian ICA source models. This results in ICA with flexible, dynamic sources. The proposed method is a piecewise approach to detecting dynamic movement, focussing on abrupt changes in the source model. The proposed model will be shown to be more powerful than stationary ICA at blindly separating very noisy mixtures of images.

Monaural separation of linear mixtures of `white' source signals is fundamentally ill-posed. In some situations it is not possible to find the mixing coefficients for the full `blind' problem. If the mixing coefficients are known, the structure of the source prior distribution determines the source reconstruction error. If the prior is strongly multi-modal, source reconstruction is possible with low error, while source signals from the typical `long-tailed' distributions used in many ICA settings cannot be reconstructed. We provide a qualitative discussion of the limits of monaural blind separation of white noise signals and give a set of no-go cases.

Many algorithms for Independent Component Analysis rely on a simultaneous diagonalization of a set of matrices by means of a nonsingular matrix. In this paper we provide means to determine the matrix when it has more columns than rows.
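For the square, invertible special case (the paper's contribution concerns the harder setting of more columns than rows), two matrices sharing a diagonalizer can be handled exactly with a generalized eigendecomposition. The following sketch of that baseline is an illustration, not the paper's method:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4
A = rng.standard_normal((d, d))          # unknown nonsingular diagonalizer
D1 = np.diag([1.0, 2.0, 3.0, 4.0])
D2 = np.diag([4.0, 3.0, 2.0, 1.0])       # distinct eigenvalue ratios
M1, M2 = A @ D1 @ A.T, A @ D2 @ A.T      # a jointly diagonalizable pair

# M1 v = lam M2 v  implies  A.T v is a canonical basis vector, so the
# generalized eigenvectors recover A up to scaling and permutation.
_, V = np.linalg.eig(np.linalg.solve(M2, M1))
G = V.real.T @ A                         # should be a scaled permutation
```

Each row of `G` has a single dominant entry, confirming that `V.T` inverts the mixing up to the usual scale and permutation ambiguities.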

Independent component analysis (ICA) of textured images is presented as a computational technique for creating a new data dependent filter bank for use in texture segmentation. We show that the ICA filters are able to capture the inherent properties of textured images. The new filters are similar to Gabor filters, but seem to be richer in the sense that their frequency responses may be more complex. These properties enable us to use the ICA filter bank to create energy features for effective texture segmentation. Our experiments using multi-textured images show that the ICA filter bank yields similar or better segmentation results than the Gabor filter bank.

Fetal magnetocardiography (FMCG) has been extensively reported in the literature as a non-invasive prenatal technique in which magnetic fields are used to monitor the function of the fetal heart. However, FMCG recordings may be highly noisy, owing to the small dimensions of the fetal heart compared, for example, to the mother's. In the field of source separation, many works have shown its efficiency in extracting signals even under such low signal-to-noise-ratio conditions. In this work, we propose an extension of the work of Barros and Cichocki, who designed an algorithm to extract a desired source with resonance at a given delay. We model the system as an autoregressive one, and our proposal is based on the calculation of the poles of the autocorrelation function. We show that the method is efficient and much less computationally expensive than those proposed in the literature.

Blind Multi-User Detection (BMUD) is the process of simultaneously estimating multiple symbol sequences associated with multiple users in the downlink of a Code Division Multiple Access (CDMA) communication system using only the received data. In this paper, we propose BMUD algorithms based on the Natural Gradient Blind Source Recovery (BSR) techniques in feedback and feedforward structures developed for linear convolutive mixing environments. The quasi-orthogonality of the spreading codes and the inherent independence among the various transmitted user symbol sequences form the basis of the proposed BMUD methods. The application of these algorithms is justified since a slowly fading multipath CDMA environment is conveniently represented as a linear combination of convolved independent symbol sequences. The proposed structures and algorithms demonstrate promising results as compared to (i) the conventional techniques comprising matched filters and MMSE receivers, and (ii) previous BMUD structures and algorithms. This paper also extends the earlier proposed BMUD CDMA models by including effects of the channel symbol memory. Illustrative simulation results compare the BER performance of the various natural gradient algorithms, both in the instantaneous (i.e., on-line) and the batch modes, to the conventional detectors.

Irregular changes of electric currents called Seismic Electric Signals (SESs) are often observed in Telluric Current Data (TCD). Recently, the detection of SESs in TCD has attracted attention for short-term earthquake prediction. Since most of the TCD collected in Japan is affected by train noise, detecting SESs in TCD is an extremely arduous task. The goal of our research is the automatic separation of train noise and SESs, which are considered to be independent signals, using Independent Component Analysis (ICA). In this paper, we propose an effective ICA evaluation function for train noise based on statistical analysis. We apply the evaluation function to TCD and analyze the results.

The analysis and separation of audio signals into their original components is an important prerequisite to automatic transcription of music, extraction of metadata from audio data, and speaker separation in video conferencing. In this paper, a method for the separation of drum tracks from polyphonic music is proposed. It consists of an Independent Component Analysis and a subsequent partitioning of the derived components into subspaces containing the percussive and the harmonic sustained instruments. With the proposed method, different samples of popular music have been analyzed. The results show sufficient separation of drum tracks and non-drum tracks for subsequent metadata extraction. Informal listening tests confirm a moderate audio quality of the resulting audio signals.

In this paper we analyze the redundancy-reduction aspect of the MPEG-1 Layer 1 codec. Specifically, we consider the mutual information that exists between filter bank coefficients and show that the normalization operation indeed reduces the amount of dependency between the various channels. Next, the effect of masking normalization is considered in terms of its compression performance and is compared to a linear, reduced-rank, short-term ICA analysis. Specifically, we show that a local linear ICA representation outperforms the traditional compression at low bit rates. A comparison of MPEG-1 Layer 1 and a new ICA-based audio compression method is reported in the paper. Certain aspects of ICA quantization and its applicability to low-bitrate compression are discussed.

In this paper we give a lifting factorization of any matrix of order 3 or 4 whose determinant is equal to one. In the case of matrices of order 4, this factorization has been obtained by solving a system of algebraic equations, thanks to a Groebner basis computation. The lifting factorization, combined with rounding, allows us to construct a transformation that maps integers to integers and that is close to the separation matrix of an instantaneous mixture of independent components, given by any blind source separation algorithm. We have applied such transforms to images in order to reduce the mutual information between inter-band pixels in a multiresolution decomposition. Results of our simulations are presented.
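The flavor of such a factorization can be sketched in the simplest, 2x2 case (the paper's constructions for orders 3 and 4 are more involved): any determinant-one 2x2 matrix with a nonzero lower-left entry splits into three elementary lifting steps (shears), each of which can be rounded to give an integer-to-integer map.

```python
import numpy as np

def lifting_2x2(M):
    """Factor a 2x2 matrix with det = 1 and M[1,0] != 0 into three
    elementary lifting steps: M = U(p) @ L @ U(q), where U(t) is an
    upper shear and L a lower shear (classical 2x2 analogue of the
    order-3/4 factorizations discussed in the paper)."""
    a, b = M[0]
    c, d = M[1]
    assert abs(a * d - b * c - 1.0) < 1e-12 and c != 0.0
    U = lambda t: np.array([[1.0, t], [0.0, 1.0]])
    L = np.array([[1.0, 0.0], [c, 1.0]])
    return U((a - 1.0) / c), L, U((d - 1.0) / c)
```

Each factor differs from the identity in a single off-diagonal entry, which is what makes the rounded, integer-preserving version of the transform possible.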

In this paper we study image features obtained by independent component analysis (ICA) in one, two and three dimensions. As three-dimensional data we use an image of a human head obtained with magnetic resonance imaging (MRI). We also study the features when cutting one or two dimensions from the data, and make comparisons to ordinary optical natural image data. Additionally, we look at feature distributions and scale invariance in both data types.

Electroencephalogram (EEG) activity related to fast eye movements (saccades) has been the subject of research by our group toward developing a brain-computer interface (BCI): an eye tracking system based on the EEG signal. In previous research, however, the EEG was analyzed using ensemble averaging, which is not suitable for analyzing raw EEG data. In order to process raw EEG data, saccade-related EEG was therefore processed using an improved FastICA, which can extract a desired signal using a reference signal. Visually guided saccade tasks were performed and the EEG generated during the saccades was recorded. The EEG processing was performed by the improved FastICA. From the results, the saccade-related independent components were extracted, with an extraction rate of 72%. The components were extracted just before the eye movements.

Despite extensive studies of Parkinson's disease (PD) in recent decades, the neural mechanisms of this common neurodegenerative disease remain incompletely understood. Functional brain imaging techniques such as single photon emission computerized tomography have emerged as tools to help us understand the disease pathophysiology by assessing regional cerebral blood flow (rCBF) changes. This study applies Independent Component Analysis (ICA) to assess the difference in rCBF between PD patients and healthy controls, in order to identify brain regions involved in PD. The brain areas identified by ICA include many regions in the basal ganglia, the brainstem, the cerebellum, and the cerebral cortex. Some of these regions have been largely overlooked in neuroimaging studies using region-of-interest approaches, yet they are consistent with previous pathophysiological reports. ICA thus may be valuable in suggesting an alternative model of the disease and its brain circuitry in PD, with a broader and more comprehensive scope.

The analysis of kinetic data monitored using spectroscopic techniques, and its resolution into unknown components, is described in this paper. Independent Component Analysis (ICA) can be considered a calibration-free technique. The outcome of the analyses is the spectral profiles of the unknown species. This provides the qualitative information that is encapsulated within the mixture spectra, enabling the analyst to identify the number and type of components present within the reaction over time. For a first-order synthetic reaction, the ICA approaches of FastICA and JADE and the calibration-free technique of multivariate curve resolution-alternating least squares were applied to the mixture spectra. For all approaches, the signal from the constituent components was successfully separated.

There exists considerable redundancy in 3D meshes that can be exploited by Blind Source Separation (BSS) and Independent Component Analysis (ICA) for mesh compression. The geometry of a 3D mesh is spatially correlated along each direction of the Cartesian coordinate system. In the context of mesh compression with a BSS algorithm, we propose to treat the correlated geometry of the 3D mesh as the observations and the decorrelated geometry, corresponding to the largest energy, as the sources. The geometry decorrelation is obtained using EVD, SOBI, and the Karhunen-Loève transform. In the Karhunen-Loève case, to reduce the dimension of the transform matrix, the correlated geometry is divided into blocks and a global covariance matrix is obtained by averaging the covariance matrices of the blocks. The transform matrix constitutes the bitstream of the compressed file; its elements are quantized and encoded using arithmetic coding, providing effective compression. Keywords: Blind Source Separation, Independent Component Analysis, Second Order Blind Identification, Eigenvalue Decomposition, Karhunen-Loève transform, Prediction, Quantization, Arithmetic coding, VRML, 3D mesh, compression.

We present a class of algorithms that find clusters in independent component analysis: the data are linearly transformed so that the resulting components can be grouped into clusters, such that components are dependent within clusters and independent between clusters. In order to find such clusters, we look for a transform that fits the estimated sources to a forest-structured graphical model. In the non-Gaussian, temporally independent case, the optimal transform is found by minimizing a contrast function based on mutual information that directly extends the contrast function used for classical ICA. We also derive a contrast function in the Gaussian stationary case that is based on spectral densities and generalizes the contrast function of Pham (2002) to richer classes of dependency.

We study a relative optimization framework for the quasi-maximum likelihood blind source separation and relative Newton method as its particular instance. Convergence of the Newton method is stabilized by the line search and by the modification of the Hessian, which forces its positive definiteness. The structure of the Hessian allows fast approximate inversion. In order to separate sparse sources, we use a non-linearity based on smooth approximation to the absolute value function. Sequential optimization with the gradual reduction of the smoothing parameter leads to the super-efficient separation.
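One standard smooth surrogate for the absolute value, with its score (derivative), is sketched below; the exact form used in the paper may differ. In the sequential scheme the separation is re-run while the smoothing parameter `lam` is shrunk, each run warm-started from the previous solution:

```python
import numpy as np

def smooth_abs(x, lam):
    """Smooth approximation of |x|; tends to |x| as lam -> 0."""
    return np.sqrt(x**2 + lam**2)

def smooth_abs_score(x, lam):
    """Derivative of smooth_abs: a smoothed sign function, bounded in (-1, 1),
    usable as the nonlinearity in a quasi-ML separation update."""
    return x / np.sqrt(x**2 + lam**2)
```

Because the surrogate is twice differentiable everywhere, Newton-type updates remain well defined even at zero, while small `lam` keeps the prior close to the sparsity-inducing absolute value.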

We address the problem of blind separation of linear instantaneous mixtures of stationary, mutually uncorrelated sources with distinct spectra, where the mixing matrix is time-varying (rendering the observations nonstationary). The time variation of the mixing is parameterized using the assumption of linear time-dependence. Relying on second-order statistics in the framework of Second-Order Blind Identification (SOBI Algorithm, Belouchrani et al., 1997), we offer an expansion of SOBI, such that the further parameterization is properly accommodated. In actual situations of slowly varying mixtures, such first-order parameterization thereof enables the estimation of the necessary statistics over longer observation intervals than those possible with classical static methods (essentially zero-order approximations). The prolonged validity of the approximate model is thus utilized in improving the statistical stability of the estimates. In this paper we identify the "raw" statistics that need to be estimated from the data, and then propose several approaches for the estimation of the mixing model parameters, pointing out some trade-offs involved. We demonstrate the enhanced performance relative to conventional SOBI when applied to time-varying mixtures.

Static linear mixtures with more sources than sensors are considered. The Blind Source Identification (BSI) of underdetermined mixtures is addressed by taking advantage of Sixth-Order (SixO) statistics and the Virtual Array (VA) concept. It is shown how SixO cumulants can be used to increase the effective aperture of an arbitrary antenna array, and thus to identify the mixture of more sources than sensors. A computationally simple but efficient algorithm, named SIRBI, is proposed; it can identify the steering vectors of up to $P = N^2-N+1$ sources for arrays of N sensors with space diversity only, and up to $P = N^2$ sources for arrays with angular and polarization diversity only.

We show that the choice of posterior approximation of sources affects the solution found in Bayesian variational learning of linear independent component analysis models. Assuming the sources to be independent a posteriori favours a solution which has an orthogonal mixing matrix. A linear dynamic model which uses second-order statistics is considered but the analysis extends to nonlinear mixtures and non-Gaussian source models as well.

In 1955, McGill published a multivariate generalisation of Shannon's mutual information. Algorithms such as Independent Component Analysis use a different generalisation, the redundancy, or multi-information. McGill's concept expresses the information shared by all of K random variables, while the multi-information expresses the information shared by any two or more of them. Partly to avoid confusion with the multi-information, I call his concept here the co-information. Co-informations, oddly, can be negative. They form a partially ordered set, or lattice, as do the entropies. Entropies and co-informations are simply and symmetrically related by Moebius inversion. The co-information lattice sheds light on the problem of approximating a joint density with a set of marginal densities, though as usual we run into the partition function. Since the marginals correspond to higher-order edges in Bayesian hypergraphs, this approach motivates new algorithms such as Dependent Component Analysis, which we describe, and (loopy) Generalised Belief Propagation on hypergraphs, which we do not. Simulations of subspace-ICA (a tractable DCA) on natural images are presented on the web. In neural computation theory, we identify the co-information of a group of neurons (possibly in space/time staggered patterns) with the `degree of existence' of a corresponding cell assembly.
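The Moebius relation between entropies and co-informations can be sketched directly for a discrete joint distribution — an alternating sum over the entropies of every marginal (a minimal illustration, not the paper's lattice machinery):

```python
import numpy as np
from itertools import combinations

def marginal_entropy(p, axes):
    """Shannon entropy (bits) of the marginal of joint pmf p kept on `axes`."""
    drop = tuple(i for i in range(p.ndim) if i not in axes)
    m = p.sum(axis=drop) if drop else p
    m = m[m > 0]
    return float(-(m * np.log2(m)).sum())

def co_information(p):
    """Co-information of all variables of the joint pmf p, via the
    alternating (Moebius) sum over the entropies of every marginal."""
    n = p.ndim
    return -sum((-1) ** k * marginal_entropy(p, axes)
                for k in range(1, n + 1)
                for axes in combinations(range(n), k))
```

For two variables this reduces to the ordinary mutual information; for three it can indeed be negative — the XOR triple Z = X xor Y of two independent fair bits has co-information of exactly -1 bit.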

Conventional algorithms for blind source separation do not necessarily work well for real-world data. One of the reasons is that, in actual applications, the mixing matrix is often almost singular in some part of the frequency range, which can cause computational instability. This paper proposes a new algorithm to overcome this singularity problem. The algorithm is based on the minimal distortion principle proposed by the authors and additionally incorporates a regularization term, whose role is to suppress the gain of the separator in the frequency range where the mixing matrix is almost singular.

The present paper deals with the blind separation of multiple convolved colored signals, that is, the blind deconvolution of a Multiple-Input Multiple-Output Finite Impulse Response (MIMO-FIR) system. To deal with the blind deconvolution problem using the second-order statistics (SOS) of the outputs, Hua and Tugnait considered it under the conditions that a) the FIR system is irreducible and b) the input signals are spatially uncorrelated and have distinct power spectra. In the present paper, the problem is considered under a condition weaker than a): namely, we assume that c) the FIR system is equalizable by means of the SOS of the outputs. Under b) and c), we show that the system can be blindly identified up to a permutation, a scaling, and a delay using the SOS of the outputs. Moreover, based on this identifiability, we derive a novel necessary and sufficient condition for solving the blind deconvolution problem, and then, based on this condition, we propose a new algorithm for finding an equalizer using the SOS of the outputs.

A joint diagonalization algorithm for convolutive blind source separation by explicitly exploiting the nonstationarity and second order statistics of signals is proposed. The algorithm incorporates a non-unitary penalty term within the cross-power spectrum based cost function in the frequency domain. This leads to a modification of the search direction of the gradient-based descent algorithm and thereby yields more robust convergence performance. Simulation results show that the algorithm leads to faster speed of convergence, together with a better performance for the separation of the convolved speech signals, in particular in terms of shape preservation and amplitude ambiguity reduction, as compared to Parra's nonstationary algorithm for convolutive mixtures.

There are two main approaches for blind source separation (BSS) on time series using second-order statistics. One is to utilize the nonwhiteness property, and the other is to utilize the nonstationarity property of the source signal. In this paper, we combine both approaches for convolutive mixtures using a matrix notation that leads to a number of new insights. We give rigorous derivations of the corresponding time-domain and frequency-domain approaches by generalizing a known cost function so that it inherently allows joint optimization over several time lags of the correlations. The approach is suitable for on-line and off-line algorithms by introducing a general weighting function allowing for tracking of time-varying environments. For both the time-domain and frequency-domain versions, we discuss links to well-known and also to extended algorithms as special cases. Moreover, using the so-called generalized coherence, we establish links between the time-domain and frequency-domain algorithms and show that our cost function leads to an update equation with an inherent normalization.

Based on the equivalence of blind source separation and adaptive beamforming, this paper introduces a new algorithm using independent component analysis with a geometrical constraint. The new algorithm solves the permutation problem of the blind source separation of acoustic mixtures, and is significantly less sensitive to the precision of the geometrical constraint than an adaptive beamformer. A high degree of robustness is very important since the steering vector is always roughly estimated in a reverberant environment, even when the look direction is precise. The new algorithm is based on FastICA and constrained optimization. It is theoretically and experimentally analyzed with respect to the roughness of the steering vector estimation by using impulse responses of a real room. The effectiveness of the algorithms for real-world mixtures is also shown for three sources and three microphones.

We propose a new method to perform the separation of two audio sources from a single sensor. This method generalizes Wiener filtering with Gaussian Mixture distributions and with Hidden Markov Models. The method involves a training phase for the model parameters, which is carried out with the classical EM algorithm. We derive a new algorithm for the re-estimation of the sources with these mixture models during the separation phase. The general approach is evaluated on the separation of real audio data and compared to classical Wiener filtering.

Using statistical models one can estimate features from natural images, such as images that we see in everyday life. Such models can also be used in computational visual neuroscience by relating the estimated features to the response properties of neurons in the brain. A seminal model for natural images was linear sparse coding which, in fact, turned out to be equivalent to ICA. In these linear generative models, the columns of the mixing matrix give the basis vectors (features) that are adapted to the statistical structure of natural images. Estimated features resemble wavelets or Gabor functions, and provide a very good description of the properties of simple cells in the primary visual cortex. We have introduced extensions of ICA that are based on modelling dependencies of the 'independent' components estimated by basic ICA. The dependencies of the components are used to define either a grouping or a topographic order between the components. With natural image data, these models lead to the emergence of further properties of visual neurons: the topographic organization and complex cell receptive fields. We have also modelled the temporal structure of natural image sequences using models inspired by blind source separation methods. All these models can be combined in a unifying framework that we call bubble coding. Finally, we have developed a multivariate autoregressive model of the dependencies, which leads us to the concept of 'double-blind' source separation.

This paper describes a real-time blind source separation (BSS) method for moving speech signals in a room. Our method employs frequency-domain independent component analysis (ICA) using a blockwise batch algorithm in the first stage, and the separated signals are refined by postprocessing using crosstalk component estimation and non-stationary spectral subtraction in the second stage. The blockwise batch algorithm achieves better performance than an online algorithm when sources are fixed, and the postprocessing compensates for the performance degradation caused by source movement. Experimental results using speech signals recorded in a real room show that the proposed method realizes robust real-time separation for moving sources. Our method is implemented on a standard PC and works in real time.

We consider the blind separation of convolutive mixtures based on the joint diagonalization of time-varying spectral matrices of the observation records. The goal is to separate audio mixtures in which the mixing filter has quite long impulse responses and the signals are highly nonstationary. We rely on the continuity of the frequency response of the filter to eliminate the permutation ambiguity. Simulations show that our method works well when there are no strong echoes in the mixing filter; otherwise, the permutation ambiguity cannot be sufficiently removed.

In this paper we propose a time-domain gradient algorithm that exploits the nonstationarity of the observed signals and recovers the original sources by decorrelating the time-varying second-order statistics simultaneously. By introducing a generalized weighting factor in our cost function we can formulate an on-line algorithm which can be applied to time-variant multipath mixing systems. A further benefit is the possibility to implement the update in a recursive manner to reduce the computational complexity. We show that this method inherently possesses an adaptive step size and hence avoids stability problems. Furthermore we present a new geometric initialization for time-domain gradient algorithms which improves separation performance in strongly reverberant environments. In our experiments we compared the separation performance of the proposed algorithm to its off-line counterpart and to another multiple-decorrelation-based on-line algorithm in the frequency-domain for real-world speech mixtures.

We propose a novel blind separation framework for Single-Input Multiple-Output (SIMO)-model-based acoustic signals using an extended ICA algorithm, SIMO-ICA. SIMO-ICA consists of multiple ICAs and a fidelity controller, and each ICA runs in parallel under the fidelity control of the entire separation system. SIMO-ICA can separate the mixed signals, not into monaural source signals, but into SIMO-model-based signals from independent sources as they are observed at the microphones. Thus, the separated signals of SIMO-ICA can maintain the spatial qualities of each sound source. In order to evaluate its effectiveness, separation experiments are carried out under both nonreverberant and reverberant conditions. The experimental results reveal that (1) the signal separation performance of the proposed SIMO-ICA is the same as that of the conventional ICA-based method, and (2) the spatial quality of the separated sound in SIMO-ICA is remarkably superior to that of the conventional method, particularly with respect to the fidelity of the sound reproduction.

Automatic methods to determine voiceprints in speech samples predominantly use short-time spectra to yield specific features of a given speaker. Among these, the Mel Frequency Cepstrum Coefficient (MFCC) features are widely used today. The speaker recognition method presented here is based on short-time spectra; however, the feature extraction process does not correspond to the MFCC process. The motivation was to avoid what we see as shortcomings of present approaches, particularly the blurring effect in the frequency domain, which confuses rather than helps in distinguishing speakers. We introduce a speech synthesis model that can be identified using Independent Component Analysis (ICA). The ICA representation of log-spectral data results in cepstral-like, independent coefficients, which capture correlations among frequency bands specific to the given speaker. It also yields speaker-specific basis functions. Coefficients determined from test data using the true basis functions show a low degree of correlation, while those determined using other basis functions depart from this norm. This enables the system to reliably recognize speakers. The resulting speaker recognition method is text-independent, invariant over time, and robust to channel variability. Its effectiveness has been tested in representing and recognizing speakers from a set of 462 people from the TIMIT database.

This paper studies the application of independent component analysis techniques to automatic control engineering. It is often desired in control systems to separate signals which are mixed through some unknown process. Since this process usually has dynamics, blind source deconvolution must be achieved in this case. Under a certain assumption, the paper provides a method to this end by making use of a time-series expansion motivated by control system identification. This method is then applied to the simultaneous estimation of both a disturbance and a system parameter. The result is illustrated by means of a numerical simulation and an experiment with a mechanical system.

In many applications, the direction-of-arrival (DOA) estimation problem is of great interest. After blind separation of the signals incident onto the receiving sensor array, the DOAs can be estimated for each source individually. Non-Gaussian signals with negative kurtosis can be automatically captured by the constant modulus (CM) array, one of the most striking and widely discussed blind beamforming algorithms. However, few experimental studies have been conducted so far. In this paper, after proposing a new method, we analyze the DOA estimation performance of the multistage CM array via computer simulations and water-tank experiments, and compare it with that of other DOA estimation algorithms, including `non-blind' and `blind' algorithms. The multistage CM array shows better results in all considered situations.

Speech can be thought of as a multi-spatial signal, where each space corresponds to a different attribute that affects the speech. Language, accent, speaker, and emotion are a few such spaces. Each space can further be projected into n dimensions, as there are different languages, accents and emotions, which combine to make their sub-space. In this paper we discuss detecting emotions in speech by projecting the emotional sub-space into a 2-dimensional space whose basis is the neutral and stressed emotions. To identify the weights of each component we use independent component analysis, an approach chosen to exploit the non-gaussianity of the data.

This paper describes a new microphone system which separates a target sound from other sounds using blind source separation, even when those sounds travel from the same direction. The system divides the observed signals into blocks in the time domain and decorrelates the output signals in each block. The resulting separation process for each block is obtained as a set of matrices, in which permuted solutions are identified with one another. As the number of blocks increases, the intersection of these sets converges to a matrix that separates the source signals, based on the non-stationarity of the sources. It was experimentally shown that there is an optimal block length, for which the stationarity within a block and the non-stationarity among blocks are thought to be balanced.
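The block-wise decorrelation idea can be sketched in its simplest form (a generic illustration, not the authors' exact system, which also handles the permutation sets): estimate covariance matrices on two blocks of a nonstationary mixture and diagonalize them jointly via a generalized eigendecomposition, as in classical second-order methods.

```python
import numpy as np
from scipy.linalg import eigh

def two_block_separation(x, split=None):
    """Estimate an unmixing matrix W for a nonstationary instantaneous
    mixture x (n_channels, n_samples) by jointly diagonalizing the
    covariance matrices of two time blocks."""
    x = x - x.mean(axis=1, keepdims=True)
    split = split or x.shape[1] // 2
    c1 = np.cov(x[:, :split])
    c2 = np.cov(x[:, split:])
    # Generalized eigenvectors v with c1 @ v = lam * c2 @ v diagonalize
    # both covariances, separating sources whose variance profiles differ.
    _, V = eigh(c1, c2)
    return V.T

# Two sources whose variances swap between the two halves of the record
rng = np.random.default_rng(0)
s = np.vstack([np.r_[rng.normal(0, 1, 500), rng.normal(0, 3, 500)],
               np.r_[rng.normal(0, 3, 500), rng.normal(0, 1, 500)]])
A = np.array([[1.0, 0.6], [0.4, 1.0]])
W = two_block_separation(A @ s)  # W @ A is close to a scaled permutation
```

With only two blocks this already separates sources whose variance profiles differ; using many blocks, as in the paper, tightens the intersection of admissible solutions.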

An approach to multi-channel blind deconvolution is developed which uses an adaptive filter that performs blind source separation in the Fourier space. The approach maintains the same permutation throughout the learning process and provides appropriate scaling of components for all frequency bins. Experiments verify proper blind deconvolution of convolutive mixtures of sources.

We approach the problem of blind identification and equalization (BIE) of single-user digital communication channels from the perspective of blind source separation (BSS). A new BSS-based BIE algorithm is proposed in this paper and is compared with a subspace method as well as a normalized variant of the well-known constant modulus algorithm (NCMA). The equalization quality of the three algorithms is assessed using channels with well-conditioned and ill-conditioned convolution matrices. It is found that the BSS-based algorithm outperforms the other algorithms except for short source data sequences. The subspace method, which inverts the estimated channel to obtain the equalizer, leads to poor results in the case of the ill-conditioned channel. The simple NCMA suffers from slow convergence or misconvergence except for well-conditioned channels of low order.

This paper presents a new blind equalization technique for multilevel modulations. The proposed approach consists of fitting the probability density function (pdf) of the corresponding modulation at a set of specific points. The symbols of the modulation, along with the requirement of unity gain, determine these sampling points. The underlying pdf at the equalizer output is estimated by means of the Parzen window method. A soft transition between blind and decision-directed equalization is possible by using an adaptive strategy for the kernel size of the Parzen window method. The method can be implemented using a stochastic gradient descent approach, which facilitates an on-line implementation. The proposed method has been compared with the CMA and Benveniste-Goursat methods for QAM modulations.
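A minimal real-valued sketch of the underlying estimator (the paper itself addresses complex multilevel constellations such as QAM): a Gaussian-kernel Parzen estimate of the equalizer-output pdf, evaluated at the constellation points where the cost requires it.

```python
import numpy as np

def parzen_pdf(y, points, h):
    """Gaussian-kernel Parzen estimate of the pdf of the samples y,
    evaluated at the given points (kernel size h)."""
    d = points[:, None] - np.asarray(y)[None, :]
    return np.exp(-d**2 / (2 * h**2)).mean(axis=1) / (np.sqrt(2 * np.pi) * h)

# Noisy 4-PAM equalizer output: the estimated pdf should peak at the
# symbol values and be low between them
rng = np.random.default_rng(0)
levels = np.array([-3.0, -1.0, 1.0, 3.0])
y = rng.choice(levels, size=4000) + rng.normal(0, 0.2, size=4000)
at_symbols = parzen_pdf(y, levels, h=0.3)
at_midpoints = parzen_pdf(y, np.array([-2.0, 0.0, 2.0]), h=0.3)
```

Shrinking the kernel size h as equalization progresses, as the adaptive strategy in the paper suggests, sharpens the fit and yields the soft transition towards decision-directed operation.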

This paper presents a new algorithm for the independent components analysis (ICA) problem based on efficient spacings estimates of entropy. Like many previous methods, we minimize a standard measure of the departure from independence, the estimated Kullback-Leibler divergence between a joint distribution and the product of its marginals. To do this, we use a consistent and rapidly converging entropy estimator due to Vasicek. The resulting algorithm is simple, computationally efficient, intuitively appealing, and outperforms other well known algorithms. In addition, the estimator and the resulting algorithm exhibit excellent robustness to outliers. We present favorable comparisons to Kernel ICA, FAST-ICA, JADE, and extended Infomax in extensive simulations.
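Vasicek's m-spacing entropy estimator, on which the algorithm builds, is simple to state: sort the sample and average the logarithms of scaled m-spacings of the order statistics. A sketch, with the usual boundary-clamping convention:

```python
import numpy as np

def vasicek_entropy(x, m=None):
    """Vasicek m-spacing estimator of differential entropy."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    m = m or int(round(np.sqrt(n)))
    # clamp indices at the boundaries (the usual convention)
    lo = np.clip(np.arange(n) - m, 0, n - 1)
    hi = np.clip(np.arange(n) + m, 0, n - 1)
    spacings = np.maximum(x[hi] - x[lo], 1e-12)  # guard against ties
    return np.mean(np.log(n / (2 * m) * spacings))

rng = np.random.default_rng(0)
h_gauss = vasicek_entropy(rng.normal(size=10000))
# h_gauss is close to the true value 0.5 * log(2*pi*e) ≈ 1.419
```

The ICA algorithm then minimizes the sum of such marginal entropy estimates over rotations of the whitened data, which is equivalent to minimizing the estimated Kullback-Leibler divergence mentioned above.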

In this paper, we focus on the fourth order cumulant based adaptive methods for independent component analysis. We propose a novel method based on the Jacobi Optimization, available for a wide set of maximum entropy (ME) based contrasts. In this algorithm we adaptively compute a moment matrix, an estimate of some fourth order moments of the whitened inputs. Starting from this matrix, the solution to the n-dimensional ME ICA problem may be solved at any time by means of the Jacobi Optimization approach. In the experiments included, we compare this method to previous ones such as the adEML or the EASI, obtaining a better performance.

In this paper, we propose two versions of a correlation-based blind source separation (BSS) method. Whereas its basic version operates in the time domain, its extended form is based on the time-frequency (TF) representations of the observed signals and thus applies to much more general conditions. The latter approach consists in identifying the columns of the (permuted, scaled) mixing matrix in TF areas where this method detects that a single source occurs. Both the detection and identification stages of this approach use local correlation parameters of the TF transforms of the observed signals. This BSS method, called TIFCORR (for TIme-Frequency CORRelation-based BSS), is shown to yield very accurate separation for linear instantaneous mixtures of real speech signals (output SNRs are above 60 dB).
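A minimal time-domain sketch of the detection/identification idea (with a hypothetical frame size and threshold; TIFCORR itself uses local correlation parameters of TF transforms): frames where the two observations are almost perfectly correlated indicate a single active source, and the regression ratio there estimates a column of the mixing matrix up to scale.

```python
import numpy as np

def single_source_frames(x1, x2, frame=256, thresh=0.99):
    """Scan two mixture signals for frames where a single source is
    active (|correlation| close to 1) and return the estimated
    mixing-column ratios a2/a1 found in those frames."""
    ratios = []
    for k in range(0, len(x1) - frame, frame):
        u, v = x1[k:k + frame], x2[k:k + frame]
        if abs(np.corrcoef(u, v)[0, 1]) > thresh:
            ratios.append(np.dot(u, v) / np.dot(u, u))
    return np.array(ratios)

# Two sources active on disjoint time intervals, instantaneously mixed
rng = np.random.default_rng(1)
s1 = np.r_[rng.normal(size=2048), np.zeros(2048)]
s2 = np.r_[np.zeros(2048), rng.normal(size=2048)]
A = np.array([[1.0, 1.0], [0.4, -0.7]])
x1 = A[0, 0] * s1 + A[0, 1] * s2
x2 = A[1, 0] * s1 + A[1, 1] * s2
ratios = single_source_frames(x1, x2)  # clusters near 0.4 and -0.7
```

Clustering the detected ratios yields the mixing-matrix columns, after which the instantaneous mixture can be inverted.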

Overcomplete blind source separation (BSS) tries to recover more sources than there are sensor signals. We present a new approach based on an estimated histogram of the sensor data: we search for the points fulfilling the overcomplete Geometric Convergence Condition, which has been shown to be a limit condition of overcomplete geometric BSS. The paper concludes with an example and a comparison of various overcomplete BSS algorithms.

Algebraic Independent Component Analysis (AICA) is a new ICA algorithm that exploits algebraic operations and vector-distance measures to estimate the unknown mixing matrix in a scaled algebraic domain. AICA possesses stability and convergence properties similar to earlier proposed geometric ICA (geo-ICA) algorithms; however, the choice of the proposed algebraic measures in AICA has several advantages. Firstly, it leads to a considerable reduction in the computational complexity of the AICA algorithm compared to similar algorithms relying on geometric measures, making AICA more suitable for online implementations. Secondly, algebraic operations exhibit robustness against the inherent permutation and scaling issues in ICA, which simplifies the performance evaluation of ICA algorithms using algebraic measures. Thirdly, the algebraic framework extends directly to ICA problems of any dimension, exhibiting only a linear increase in computational cost as a function of the mixing-matrix dimension. The algorithm has been extensively tested for over-complete, under-complete and quadratic ICA using unimodal super-gaussian distributions; for other, less peaked distributions, the algorithm can be applied with slight modifications. In this paper we focus on the overcomplete case, i.e., more sources than sensors. The overcomplete ICA problem is solved in two stages: AICA is used to estimate the rectangular mixing matrix, followed by optimal source inference using an L1-norm-based interior-point LP technique. Further, some practical techniques are discussed to enhance the algebraic resolution of the AICA solution in cases where some columns of the mixing matrix are algebraically ``close'' to each other. Two illustrative simulation examples are also presented.
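The second stage's L1-norm source inference is a standard linear program; a minimal sketch for a single observation sample (using scipy's general-purpose linprog rather than a dedicated interior-point implementation):

```python
import numpy as np
from scipy.optimize import linprog

def l1_infer(A, x):
    """Infer sources for one sample of an overcomplete mixture x = A @ s
    by minimizing ||s||_1 subject to A @ s = x.
    Standard LP trick: split s = p - q with p, q >= 0."""
    n, m = A.shape
    res = linprog(c=np.ones(2 * m),
                  A_eq=np.hstack([A, -A]), b_eq=x,
                  bounds=[(0, None)] * (2 * m))
    return res.x[:m] - res.x[m:]

# 2 sensors, 3 sources: the sparsest consistent explanation is recovered
A = np.array([[1.0, 0.0, 0.7],
              [0.0, 1.0, 0.7]])
s_hat = l1_infer(A, np.array([0.0, 2.0]))
```

For super-gaussian (sparse) sources, this L1 solution coincides with the maximum a posteriori source estimate under a Laplacian prior, which motivates its use once the rectangular mixing matrix has been estimated.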

In this paper, a new approach for blind source separation is presented. This approach is based on minimization of the mutual information of the outputs using a nonparametric ``gradient'' of mutual information, followed by a projection on the parametric model of the separation structure. It is applicable to different mixing systems, linear as well as nonlinear, and the algorithms derived from this approach are very fast and efficient.

We have adapted the original Hérault-Jutten model for the segregation of two concurrent voices in realistic conditions. Firstly, with the new Daimler-Chrysler (DC) in-car database, we confirm our previous results obtained with a similar model and show how to make good use of the algorithm. Secondly, we create different mixture conditions from the DC database and test three kinds of improvements: (1) with the introduction of non-causal FIR filters, the H-J model is able to process asymmetric configurations of two speakers; (2) the benefit of the frequency decomposition is well retrieved with four subbands; and (3) two compatible methods for completing the segregation by post-processing are described: amplification of the input/output relative level gain by joint enhancement, and beamforming. We show a large overall reduction of the cross-talk, even in noise, which is promising for speech recognition applications.

The stability and sensitivity of joint diagonalization criteria are analyzed using the Hessian of the criteria at their critical points. The sensitivity of some known joint diagonalization criteria is shown to be weak when they are applied to matrices with closely spaced eigenvalues. In such a situation, a large deviation in the diagonalizer causes only a slight change in the criterion, which makes estimation of the diagonalizer difficult. To overcome this problem, a new joint diagonalization criterion is introduced. The criterion is linear with respect to the target matrices and is able to cancel the effect of closely spaced eigenvalues.
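The classical criterion under discussion can be sketched as the summed squared off-diagonal energy after applying a candidate diagonalizer (a generic illustration of the standard criterion, not the new linear criterion proposed here):

```python
import numpy as np

def off_criterion(V, matrices):
    """Classical joint-diagonalization criterion: the summed squared
    off-diagonal entries of V @ M @ V.T over all target matrices."""
    total = 0.0
    for M in matrices:
        D = V @ M @ V.T
        total += np.sum(D**2) - np.sum(np.diag(D)**2)
    return total

# Target matrices sharing the diagonalizer inv(A): M_k = A diag(d_k) A^T
rng = np.random.default_rng(0)
A = np.array([[1.0, 0.5], [0.3, 1.0]])
mats = [A @ np.diag(rng.uniform(0.5, 2.0, 2)) @ A.T for _ in range(5)]
V_true = np.linalg.inv(A)  # drives the criterion to (numerically) zero
```

When the diagonal entries d_k are nearly equal across matrices, the criterion becomes almost flat around its minimum, which is exactly the weak-sensitivity situation the paper analyzes.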

This work presents a new method for blind separation of sources based on geometrical considerations concerning the observation space. The method is applied to a mixture of several sources; it estimates the coefficients of the unknown mixing matrix A and separates the unknown sources. The principles of the new method and a description of the algorithm, followed by some speed enhancements, are given. Finally, we illustrate with simulations of several source distributions how the algorithm performs.

In adaptive blind source separation, the order of the recovered source signals is unpredictable due to the order indeterminacy. However, it is sometimes desired that the reconstructed source signals be arranged in a specified order according to their stochastic properties. To this end, some asymmetry is introduced at the equilibrium points, and a new unsymmetrical natural gradient algorithm is proposed which can separate the source signals in the desired order. The validity of the proposed algorithm is confirmed through extensive computer simulations.

This paper presents an application of ICA to astronomical imaging. A first section describes the astrophysical context and motivates the use of source separation ideas. A second section describes our approach to the problem: the use of a noisy Gaussian stationary model. This technique uses spectral diversity and explicitly takes into account contamination by additive noise. Preliminary and extremely encouraging results on realistic synthetic signals and on real data will be presented at the conference.