Feature columns have been found in the inferior temporal cortex, an area closely related to object recognition. This discovery leads us to infer that the brain extracts complex features from images and recognizes objects from combinations of those features. In addition, other studies show that the brain has a hierarchical structure: as the layer gets higher, receptive field sizes become larger and the features to which cells are sensitive become more complex.
To reveal what mechanism enables feature columns to self-organize, we constructed a computational model based on artificial neural networks and performed unsupervised classification tasks with it.
Constructing such a model requires solving two problems: classifying objects without any labels, and making feature columns self-organize.
We implemented unsupervised learning with a competitive modular scheme using a mixture of auto-encoders [1]. Self-organization of feature columns is achieved by building a hierarchical structure of the competitive modules [2,3]. In this model, the receptive field sizes and the complexity of the features in the columns also vary according to their layer.
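As a rough illustration of the competitive modular idea (a minimal sketch, not the implementation in [1]), the following Python code trains a small mixture of auto-encoder modules: for each input, only the module with the smallest reconstruction error is updated, so each module gradually specializes on one class of inputs. The class and function names, the linear-plus-tanh auto-encoder, and the hard winner-take-all rule are illustrative assumptions.

```python
import numpy as np

class AutoEncoderModule:
    """A small auto-encoder: x -> h -> x_hat (illustrative architecture)."""
    def __init__(self, n_in, n_hidden, rng):
        self.W_enc = rng.normal(0.0, 0.1, (n_hidden, n_in))
        self.W_dec = rng.normal(0.0, 0.1, (n_in, n_hidden))

    def reconstruct(self, x):
        h = np.tanh(self.W_enc @ x)
        return self.W_dec @ h, h

    def update(self, x, lr):
        # One gradient step on the squared reconstruction error.
        h = np.tanh(self.W_enc @ x)
        x_hat = self.W_dec @ h
        err = x_hat - x
        grad_dec = np.outer(err, h)
        dh = (self.W_dec.T @ err) * (1.0 - h ** 2)
        grad_enc = np.outer(dh, x)
        self.W_dec -= lr * grad_dec
        self.W_enc -= lr * grad_enc

def train_mixture(data, n_modules=2, n_hidden=4, lr=0.05, epochs=50, seed=0):
    """Competitive learning: only the best-reconstructing module (the
    'winner') is updated for each sample, so modules specialize."""
    rng = np.random.default_rng(seed)
    modules = [AutoEncoderModule(data.shape[1], n_hidden, rng)
               for _ in range(n_modules)]
    for _ in range(epochs):
        for x in rng.permutation(data):
            errors = [np.sum((m.reconstruct(x)[0] - x) ** 2) for m in modules]
            modules[int(np.argmin(errors))].update(x, lr)
    return modules

def classify(modules, x):
    """Assign x to the module that reconstructs it best (unsupervised label)."""
    errors = [np.sum((m.reconstruct(x)[0] - x) ** 2) for m in modules]
    return int(np.argmin(errors))
```

Stacking such competitive layers, with lower layers restricted to small patches of the input and higher layers receiving the lower layers' outputs, would correspond to the hierarchical structure described above.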
The figures below show simulation results from an example of object classification with a two-layer model. Figure 1 shows the classification of two objects, circles and squares. Figure 2 shows the feature columns constructed in the lower layer, whose receptive field size is a quarter of the whole input field; each column responds to either corners or arcs.
The simulation results show that feature columns can self-organize when both competitive modular learning and a hierarchical structure are assumed. We are also investigating applications of the network's ability to perform unsupervised classification and feature extraction simultaneously.
Contact: Satoshi Suzuki, satoshi@cslab.kecl.ntt.co.jp