Monash University

Restricted Access

Reason: Access restricted by the author. A copy can be requested for private research and study by contacting your institution's library service. This copy cannot be republished

Brain-inspired self-organizing model for incremental knowledge discovery

thesis
posted on 2017-02-21, 05:03 authored by Gunawardana, Panditha Vidana Kasun Gayanath
Machine learning has been a vital research discipline that has contributed to the success of modern commercial organizations. Highly competitive business environments, aided by contemporary technological advances, generate large amounts of data within milliseconds. These organizations rely heavily on the data they collect, and internal processes such as strategic planning benefit from the inferences drawn from that data. In this era, data has become invaluable, and these circumstances create an ideal platform for machine learning research to uncover hidden knowledge in data. In this context, the self-organizing map (SOM) is a well-known learning algorithm that extracts the topological organization of a stationary data space. As an artificial neural network model, SOM represents the cognitive models in machine learning. The abundance of published work on SOM applications exemplifies its effectiveness and popularity. However, SOM can only be employed on stationary data spaces. This limitation has become detrimental to its popularity, since contemporary data spaces have grown increasingly dynamic: databases are growing at a rapid rate, so learning models such as SOM that operate on stationary data spaces are becoming inefficient. The literature contains a number of learning models that attempt to fill this gap, yet none has gained recognition in the machine learning community comparable to SOM's. Simplicity and coherence are the key characteristics that propelled SOM to its immense popularity, whereas those two attributes are missing from most modern solutions. The complexities these algorithms introduce in order to control the learning process in a highly dynamic data space hinder their usability and, in turn, their popularity.
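As background, the standard SOM training step described above can be sketched as follows. This is a minimal illustration of Kohonen's classic update rule only, not code from the thesis; the grid size, decay schedule, learning rate, and neighborhood width are arbitrary illustrative choices.

```python
import numpy as np

def som_step(weights, x, t, n_steps, lr0=0.5, sigma0=2.0):
    """One SOM update: find the best-matching unit (BMU) and pull
    neighboring units toward the input x, with decaying rates."""
    rows, cols, dim = weights.shape
    # Best-matching unit: the grid node whose weight vector is closest to x.
    dists = np.linalg.norm(weights - x, axis=2)
    bmu = np.unravel_index(np.argmin(dists), (rows, cols))
    # Exponentially decaying learning rate and neighborhood width
    # (illustrative schedule).
    lr = lr0 * np.exp(-t / n_steps)
    sigma = sigma0 * np.exp(-t / n_steps)
    # Gaussian neighborhood on the grid, centered on the BMU.
    gy, gx = np.mgrid[0:rows, 0:cols]
    grid_dist2 = (gy - bmu[0]) ** 2 + (gx - bmu[1]) ** 2
    h = np.exp(-grid_dist2 / (2 * sigma ** 2))
    # Kohonen update: w += lr * h * (x - w)
    weights += lr * h[:, :, None] * (x - weights)
    return weights

# Usage: train a 5x5 map on random 2-D data.
rng = np.random.default_rng(0)
w = rng.random((5, 5, 2))
data = rng.random((200, 2))
for t, x in enumerate(data):
    w = som_step(w, x, t, len(data))
```

Note that the schedule depends on a fixed `n_steps`, which is exactly why this formulation presupposes a stationary, finite data space.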
In this respect, a novel artificial neural network model is proposed that covers all these aspects, serving non-stationary data spaces in the same manner that SOM serves stationary ones. The proposed model is named the Brain-inspired self-organizing map (BiSOM), since its conceptual basis follows distinct characteristics of the human brain and it extends the service of SOM to non-stationary data spaces. Simplicity is a key ingredient of BiSOM: it loosens the rigid control over the learning process and lets the learning system self-determine its behavior. Unlike other contemporary continuous learning models in this context, BiSOM adopts more biological inspirations. First, the excitability of a biological neuron is modeled on an artificial neuron, so that the artificial neuron self-determines its activation and no external involvement is needed for its adaptation. Then, over a network of excitable artificial neurons, BiSOM achieves neural signal propagation similar to that of biological neural networks. Furthermore, the layered architecture and columnar organization of the neocortex of the human brain have inspired the multi-layered formation of BiSOM, which extracts the hierarchical knowledge facet of a non-stationary data space. As a novel artificial neural network model, BiSOM can cater for highly dynamic non-stationary data spaces. In a learning environment, it generates multiple layers such that each layer extracts a different level of topological organization, and the set of layers as a whole provides a hierarchical organization of the same data space. Thus, BiSOM provides multiple delineations of a single data space. Further, it consumes data in a one-pass manner and accumulates knowledge incrementally; the proposed learning model can therefore process an unlimited amount of data.
Moreover, the simplicity of the model has resulted in an efficient algorithm that can cater for rapid data streams. The proposed learning model therefore possesses the ability to meet the requirements of modern dynamic data spaces.
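The excitability idea described above can be illustrated with a toy neuron in the spirit of a leaky integrate-and-fire unit. This is a hypothetical sketch: the similarity measure, threshold, leak factor, and reset behavior below are illustrative assumptions, not the actual BiSOM equations from the thesis.

```python
import numpy as np

class ExcitableNeuron:
    """Toy neuron that self-determines its activation: its internal
    potential rises with input similarity, leaks over time, and the
    neuron fires (and adapts) only when the potential crosses a
    threshold -- no external scheduler decides when it learns."""

    def __init__(self, weight, threshold=1.5, decay=0.9, lr=0.1):
        self.w = np.asarray(weight, dtype=float)
        self.potential = 0.0
        self.threshold = threshold   # illustrative value
        self.decay = decay           # leak factor per step
        self.lr = lr

    def step(self, x):
        x = np.asarray(x, dtype=float)
        # Similarity to the stored weight drives excitation;
        # distant inputs barely excite the neuron.
        excitation = np.exp(-np.linalg.norm(x - self.w))
        self.potential = self.decay * self.potential + excitation
        if self.potential >= self.threshold:
            # Self-determined adaptation: the weight moves toward the
            # input only when the neuron is sufficiently excited.
            self.w += self.lr * (x - self.w)
            self.potential = 0.0  # reset after firing
            return True
        return False

# Usage: one-pass stream of data; the neuron adapts as it goes,
# which makes unbounded incremental learning possible in principle.
rng = np.random.default_rng(1)
neuron = ExcitableNeuron([0.0, 0.0])
stream = rng.normal(0.5, 0.1, size=(100, 2))
fires = sum(neuron.step(x) for x in stream)
```

Each sample is seen exactly once and then discarded, mirroring the one-pass, incremental consumption attributed to BiSOM in the abstract.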

History

Campus location

Australia

Principal supervisor

Jayantha Rajapakse

Additional supervisor 1

Damminda Alahakoon

Year of Award

2015

Department, School or Centre

Information Technology (Monash University Malaysia)

Course

Doctor of Philosophy

Degree Type

DOCTORATE

Faculty

Faculty of Information Technology
