Research


Self-Organising Neural Network Hierarchy [Springer] [ResearchGate]

Published on 27 November 2020

Abstract

Mammalian brains exhibit functional self-organisation between different neocortical regions, forming virtual hierarchies from a physical 2D sheet. We propose a biologically inspired, self-organising neural network architecture that emulates this behaviour. The network is composed of autoencoder units and driven by a meta-learning rule based on maximising the Shannon entropy of the latent representations of the input, which optimises the receptive field placement of each unit within a feature map. Unlike Neural Architecture Search, both the network parameters and the architecture are learned simultaneously. In a case study on image datasets, we observe that the meta-learning rule causes a functional hierarchy to form and leads to progressively better topological configurations and higher classification performance overall, starting from randomly initialised architectures. In particular, our approach achieves classification accuracy competitive with optimal handcrafted architectures that have desirable topological features for this network type on both MNIST and CIFAR-10, although the effect is less pronounced on the latter.
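To make the meta-learning objective concrete, below is a minimal sketch of how the Shannon entropy of a unit's latent representations might be estimated and used to compare candidate receptive field placements. The histogram-based estimator, the `n_bins` parameter, and the `latent_entropy` / `best_placement` helpers are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def latent_entropy(latents, n_bins=32):
    """Estimate the Shannon entropy (in bits) of a batch of latent activations.

    `latents` is a (batch, latent_dim) array from an autoencoder bottleneck.
    Activations are discretised into `n_bins` histogram bins so that a
    probability distribution, and hence an entropy, can be estimated.
    This binning scheme is an assumption for illustration only.
    """
    hist, _ = np.histogram(latents, bins=n_bins)
    p = hist / hist.sum()
    p = p[p > 0]                       # drop empty bins (0 * log 0 = 0)
    return -np.sum(p * np.log2(p))

def best_placement(candidate_latents):
    """Pick the candidate receptive field placement whose latent codes
    have the highest estimated entropy (hypothetical selection step)."""
    scores = [latent_entropy(z) for z in candidate_latents]
    return int(np.argmax(scores))
```

In this sketch, each candidate placement of an autoencoder unit yields a batch of latent codes; the placement whose codes carry the most information (highest entropy) is retained, which is one plausible reading of how entropy maximisation could drive the architecture search described in the abstract.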
