Splitting with confidence in decision trees with application to stream mining
Pub Date: 2015-07-12 · DOI: 10.1109/IJCNN.2015.7280392 · Pages: 1-8
R. D. Rosa, N. Cesa-Bianchi
Decision tree classifiers are a widely used tool in data stream mining. The use of confidence intervals to estimate the gain associated with each split leads to very effective methods, like the popular Hoeffding tree algorithm. From a statistical viewpoint, the analysis of decision tree classifiers in a streaming setting requires knowing when enough new information has been collected to justify splitting a leaf. Although some of the issues in the statistical analysis of Hoeffding trees have already been clarified, a general and rigorous study of confidence intervals for splitting criteria is still missing. We fill this gap by deriving accurate confidence intervals to estimate the splitting gain in decision tree learning with respect to three criteria: entropy, Gini index, and a third index proposed by Kearns and Mansour. Our confidence intervals exhibit a more detailed dependence on the tree parameters. Experiments on real and synthetic data in a streaming setting show that our trees are indeed more accurate than trees with the same number of leaves generated by other techniques.
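The split test that the Hoeffding tree algorithm performs at each leaf can be written down compactly. The sketch below shows that standard test with the usual Hoeffding bound, which is the construct the paper's refined per-criterion intervals are designed to replace; the default delta and the parameter names are illustrative, not the paper's.

```python
import math

def hoeffding_bound(value_range: float, delta: float, n: int) -> float:
    # With probability at least 1 - delta, the empirical mean of n i.i.d.
    # observations in a range of width value_range is within this radius
    # of the true mean.
    return math.sqrt(value_range ** 2 * math.log(1.0 / delta) / (2.0 * n))

def should_split(gain_best: float, gain_second: float, n: int,
                 delta: float = 1e-7, value_range: float = 1.0) -> bool:
    # Split the leaf once the best attribute's observed gain beats the
    # runner-up's by more than the confidence radius. For entropy over
    # k classes, value_range would be log2(k).
    return (gain_best - gain_second) > hoeffding_bound(value_range, delta, n)
```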
{"title":"Splitting with confidence in decision trees with application to stream mining","authors":"R. D. Rosa, N. Cesa-Bianchi","doi":"10.1109/IJCNN.2015.7280392","DOIUrl":"https://doi.org/10.1109/IJCNN.2015.7280392","url":null,"abstract":"Decision tree classifiers are a widely used tool in data stream mining. The use of confidence intervals to estimate the gain associated with each split leads to very effective methods, like the popular Hoeffding tree algorithm. From a statistical viewpoint, the analysis of decision tree classifiers in a streaming setting requires knowing when enough new information has been collected to justify splitting a leaf. Although some of the issues in the statistical analysis of Hoeffding trees have been already clarified, a general and rigorous study of confidence intervals for splitting criteria is missing. We fill this gap by deriving accurate confidence intervals to estimate the splitting gain in decision tree learning with respect to three criteria: entropy, Gini index, and a third index proposed by Kearns and Mansour. Our confidence intervals depend in a more detailed way on the tree parameters. Experiments on real and synthetic data in a streaming setting show that our trees are indeed more accurate than trees with the same number of leaves generated by other techniques.","PeriodicalId":6539,"journal":{"name":"2015 International Joint Conference on Neural Networks (IJCNN)","volume":"108 1","pages":"1-8"},"PeriodicalIF":0.0,"publicationDate":"2015-07-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76330867","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Real-time video object recognition using convolutional neural network
Pub Date: 2015-07-12 · DOI: 10.1109/IJCNN.2015.7280718 · Pages: 1-7
Byungik Ahn
A convolutional neural network (CNN) is implemented on a field-programmable gate array (FPGA) and used for recognizing objects in real-time video streams. In this system, an image pyramid is constructed by successively down-scaling the input video stream. Image blocks are extracted from the image pyramid and classified by the CNN core. The detected parts are then marked on the output video frames. The CNN core is composed of six hardware neurons and two receptor units. The hardware neurons are designed as fully pipelined digital circuits synchronized with the system clock, and compute the model neurons in a time-sharing manner. The receptor units scan the input image for local receptive fields and continuously supply data to the hardware neurons as inputs. The CNN core module is controlled according to the contents of a table that describes the sequence of computational stages and contains the system parameters required to control each stage. This table makes the hardware system more flexible: various CNN configurations can be accommodated without re-designing the system. Implemented on a mid-range FPGA, the system achieves a computational speed greater than 170,000 classifications per second and performs scale-invariant object recognition on a 720×480 video stream at 60 fps. This work is part of a commercial project; the system is targeted at recognizing arbitrary pre-trained objects within a small physical volume and at low power consumption.
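As a software analogue of the hardware pipeline described above, the sketch below builds an image pyramid by repeated down-scaling and scans each level for fixed-size blocks, as the receptor units do. The 2x decimation step, block size, and stride are assumptions; the abstract does not state these values.

```python
import numpy as np

def build_pyramid(frame: np.ndarray, min_size: int = 32) -> list:
    # Successively down-scale the frame; a cheap 2x decimation step is
    # used here for simplicity (the paper's scale factor is not stated).
    levels = [frame]
    while min(frame.shape[0] // 2, frame.shape[1] // 2) >= min_size:
        frame = frame[::2, ::2]
        levels.append(frame)
    return levels

def scan_blocks(level: np.ndarray, block: int = 32, stride: int = 8):
    # Extract fixed-size local receptive fields from one pyramid level,
    # yielding (position, block) pairs that a classifier core would consume.
    h, w = level.shape[:2]
    for y in range(0, h - block + 1, stride):
        for x in range(0, w - block + 1, stride):
            yield (y, x), level[y:y + block, x:x + block]
```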
{"title":"Real-time video object recognition using convolutional neural network","authors":"Byungik Ahn","doi":"10.1109/IJCNN.2015.7280718","DOIUrl":"https://doi.org/10.1109/IJCNN.2015.7280718","url":null,"abstract":"A convolutional neural network (CNN) is implemented on a field-programmable gate array (FPGA) and used for recognizing objects in real-time video streams. In this system, an image pyramid is constructed by successively down-scaling the input video stream. Image blocks are extracted from the image pyramid and classified by the CNN core. The detected parts are then marked on the output video frames. The CNN core is composed of six hardware neurons and two receptor units. The hardware neurons are designed as fully-pipelined digital circuits synchronized with the system clock, and are used to compute the model neurons in a time-sharing manner. The receptor units scan the input image for local receptive fields and continuously supply data to the hardware neurons as inputs. The CNN core module is controlled according to the contents of a table describing the sequence of computational stages and containing the system parameters required to control each stage. The use of this table makes the hardware system more flexible, and various CNN configurations can be accommodated without re-designing the system. The system implemented on a mid-range FPGA achieves a computational speed greater than 170,000 classifications per second, and performs scale-invariant object recognition from a 720×480 video stream at a speed of 60 fps. This work is a part of a commercial project, and the system is targeted for recognizing any pre-trained objects with a small physical volume and low power consumption.","PeriodicalId":6539,"journal":{"name":"2015 International Joint Conference on Neural Networks (IJCNN)","volume":"92 1","pages":"1-7"},"PeriodicalIF":0.0,"publicationDate":"2015-07-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86435503","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A minimal architecture for general cognition
Pub Date: 2015-07-12 · DOI: 10.1109/IJCNN.2015.7280749 · Pages: 1-8
Michael S. Gashler, Zachariah Kindle, Michael R. Smith
A minimalistic cognitive architecture called MANIC is presented. The MANIC architecture requires only three function-approximating models and one state machine. Even with so few major components, it is theoretically sufficient to achieve functional equivalence with all other cognitive architectures, and it can be practically trained. Instead of seeking to transfer architectural inspiration from biology into artificial intelligence, MANIC seeks to minimize novelty and to follow the most well-established constructs that have evolved within various subfields of data science. From this perspective, MANIC offers an alternative approach to a long-standing objective of artificial intelligence. This paper provides a theoretical analysis of the MANIC architecture.
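One plausible reading of the three-model-plus-state-machine design is sketched below as a schematic agent loop built from an encoder, a transition model, and a utility model. This decomposition is an assumption made for illustration, not the paper's exact one.

```python
class ManicLikeAgent:
    # Schematic three-model agent; the encoder/transition/utility split is
    # a hypothetical reading of the abstract, not the paper's decomposition.
    def __init__(self, encoder, transition, utility):
        self.encoder = encoder        # observation -> internal state
        self.transition = transition  # (state, action) -> predicted next state
        self.utility = utility        # state -> scalar desirability

    def act(self, observation, candidate_actions):
        state = self.encoder(observation)
        # Greedy one-step planning: choose the action whose predicted
        # successor state the utility model scores highest.
        return max(candidate_actions,
                   key=lambda a: self.utility(self.transition(state, a)))
```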
{"title":"A minimal architecture for general cognition","authors":"Michael S. Gashler, Zachariah Kindle, Michael R. Smith","doi":"10.1109/IJCNN.2015.7280749","DOIUrl":"https://doi.org/10.1109/IJCNN.2015.7280749","url":null,"abstract":"A minimalistic cognitive architecture called MANIC is presented. The MANIC architecture requires only three function approximating models, and one state machine. Even with so few major components, it is theoretically sufficient to achieve functional equivalence with all other cognitive architectures, and can be practically trained. Instead of seeking to trasfer architectural inspiration from biology into artificial intelligence, MANIC seeks to minimize novelty and follow the most well-established constructs that have evolved within various subfields of data science. From this perspective, MANIC offers an alternate approach to a long-standing objective of artificial intelligence. This paper provides a theoretical analysis of the MANIC architecture.","PeriodicalId":6539,"journal":{"name":"2015 International Joint Conference on Neural Networks (IJCNN)","volume":"52 1","pages":"1-8"},"PeriodicalIF":0.0,"publicationDate":"2015-07-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86470877","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Image segmentation using fast linking SCM
Pub Date: 2015-07-12 · DOI: 10.1109/IJCNN.2015.7280579 · Pages: 1-8
K. Zhan, Jinhui Shi, Qiaoqiao Li, Jicai Teng, Mingying Wang
The spiking cortical model (SCM) is applied to image segmentation. The SCM processes a natural image to produce a series of spike images, and the segmented result is obtained by integrating this series. An appropriate maximum number of iterations is selected to achieve an optimal SCM threshold. In each iteration, the neurons that fire correspond approximately to pixels of similar intensity in the input image. The SCM synchronizes the output spikes via fast linking synaptic modulation, which makes objects in the image as homogeneous as possible. Experimental results show that the output image not only separates objects from the background well, but also keeps the pixels within each object homogeneous. The proposed method outperforms other methods, and the quantitative metrics are consistent with the visual results.
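A minimal SCM iteration of the kind described here can be sketched as follows. The decay coefficients f and g, the threshold gain h, and the linking kernel are illustrative defaults, not the paper's tuned values, and the fast linking modulation is reduced here to a single linking convolution per step.

```python
import numpy as np
from scipy.ndimage import convolve

def scm_spike_series(S: np.ndarray, iters: int = 20,
                     f: float = 0.8, g: float = 0.7, h: float = 20.0):
    # One run of a basic spiking cortical model on an image S normalised
    # to [0, 1]; parameter values are illustrative, not the paper's.
    w = np.array([[0.5, 1.0, 0.5],
                  [1.0, 0.0, 1.0],
                  [0.5, 1.0, 0.5]])
    U = np.zeros_like(S)   # internal activity
    E = np.ones_like(S)    # dynamic threshold
    Y = np.zeros_like(S)   # binary spike output
    series = []
    for _ in range(iters):
        U = f * U + S * convolve(Y, w, mode='constant') + S  # linking input
        Y = (U > E).astype(S.dtype)    # neurons fire where activity exceeds E
        E = g * E + h * Y              # firing raises the threshold sharply
        series.append(Y)
    return series   # the segmentation integrates these spike images
```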
{"title":"Image segmentation using fast linking SCM","authors":"K. Zhan, Jinhui Shi, Qiaoqiao Li, Jicai Teng, Mingying Wang","doi":"10.1109/IJCNN.2015.7280579","DOIUrl":"https://doi.org/10.1109/IJCNN.2015.7280579","url":null,"abstract":"Spiking cortical model (SCM) is applied to image segmentation. A natural image is processed to produce a series of spike images by SCM, and the segmented result is obtained by the integration of the series of spike images. An appropriate maximum iterative times is selected to achieve an optimal threshold of SCM. In each iteration, neurons that produced spikes correspond to pixels with an intensity of the input natural image approximately. SCM synchronizes the output spikes via the fast linking synaptic modulation, which makes objects in the image as homogeneous as possible. Experimental results show that the output image not only separates objects and background well, but also pixels in each object are homogeneous. The proposed method performs well over other methods and the quantitative metrics are consistent with the visual performance.","PeriodicalId":6539,"journal":{"name":"2015 International Joint Conference on Neural Networks (IJCNN)","volume":"24 1","pages":"1-8"},"PeriodicalIF":0.0,"publicationDate":"2015-07-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83979014","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Spatio-temporal Map Formation based on a Potential Function
Pub Date: 2015-07-12 · DOI: 10.1109/IJCNN.2015.7280399 · Pages: 1-8
Prayag Gowgi, S. G. Srinivasa
We revisit the problem of temporal self-organization using activity diffusion based on the neural gas (NGAS) algorithm. Using a potential function formulation motivated by a spatio-temporal metric, we derive an adaptation rule for dynamic vector quantization of data. Simulation results show that our algorithm learns the input distribution and time correlation much faster than the static neural gas method over the same data sequence under similar training conditions.
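For contrast with the proposed rule, the sketch below shows one adaptation step of the static neural gas baseline that the paper compares against: every codebook vector moves toward the sample with a step that decays exponentially in its distance rank. The learning rate and neighbourhood range are illustrative values.

```python
import numpy as np

def neural_gas_step(W: np.ndarray, x: np.ndarray,
                    eps: float = 0.05, lam: float = 2.0) -> np.ndarray:
    # One static neural-gas update of the codebook W (units x dims) for a
    # sample x; the paper's spatio-temporal rule adds activity diffusion.
    dists = np.linalg.norm(W - x, axis=1)
    ranks = np.argsort(np.argsort(dists))    # 0 for the closest unit
    W += eps * np.exp(-ranks / lam)[:, None] * (x - W)
    return W
```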
{"title":"Spatio-temporal Map Formation based on a Potential Function","authors":"Prayag Gowgi, S. G. Srinivasa","doi":"10.1109/IJCNN.2015.7280399","DOIUrl":"https://doi.org/10.1109/IJCNN.2015.7280399","url":null,"abstract":"We revisit the problem of temporal self organization using activity diffusion based on the neural gas (NGAS) algorithm. Using a potential function formulation motivated by a spatio-temporal metric, we derive an adaptation rule for dynamic vector quantization of data. Simulations results show that our algorithm learns the input distribution and time correlation much faster compared to the static neural gas method over the same data sequence under similar training conditions.","PeriodicalId":6539,"journal":{"name":"2015 International Joint Conference on Neural Networks (IJCNN)","volume":"449 1","pages":"1-8"},"PeriodicalIF":0.0,"publicationDate":"2015-07-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82951418","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Recognizing visual composite in real images
Pub Date: 2015-07-12 · DOI: 10.1109/IJCNN.2015.7280523 · Pages: 1-8
Lin Bai, Kan Li, Shuai Jiang
Automatically discovering and recognizing the main structured visual pattern of an image is a challenging problem. The main difficulties are finding the component objects and recognizing the interactions among them. The component objects of the structured visual pattern have a consistent 3D spatial co-occurrence layout across images, which manifests itself as a predictable pattern called a visual composite. In this paper, we propose a visual composite recognition model to automatically discover and recognize the visual composite of an image. Our model first learns 3D spatial co-occurrence statistics among objects to discover the potential structured visual pattern of an image, so that it captures the component objects of the visual composite. Second, we construct a feedforward architecture using the proposed factored three-way interaction machine to recognize the visual composite, casting the recognition problem as a structured prediction task. It predicts the visual composite by maximizing the probability of the correct structured label given the component objects and their 3D spatial context. Experiments conducted on a six-class sports dataset and a phrasal recognition dataset demonstrate the encouraging performance of our model in discovery precision and recognition accuracy compared with competing approaches.
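The structured-prediction step can be pictured as an argmax over candidate composite labels, each scored from the component objects and their 3D spatial context. The sketch below is schematic: score_fn is a stand-in for the factored three-way interaction machine, whose actual form the abstract does not give.

```python
import numpy as np

def recognize_composite(objects, context, composite_labels, score_fn):
    # Score every candidate composite label from the detected objects and
    # their 3D spatial context, normalise with a softmax, and return the
    # most probable label. score_fn is a hypothetical stand-in for the
    # factored three-way interaction machine.
    scores = np.array([score_fn(y, objects, context)
                       for y in composite_labels], dtype=float)
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()                 # softmax over structured labels
    return composite_labels[int(np.argmax(probs))], probs
```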
{"title":"Recognizing visual composite in real images","authors":"Lin Bai, Kan Li, Shuai Jiang","doi":"10.1109/IJCNN.2015.7280523","DOIUrl":"https://doi.org/10.1109/IJCNN.2015.7280523","url":null,"abstract":"Automatically discovering and recognizing the main structured visual pattern of an image is a challenging problem. The most difficulties are how to find the component objects and how to recognize the interaction among these objects. The component objects of the structured visual pattern have consistent 3D spatial co-occurrence layout across images, which manifest themselves as a predictable pattern called visual composite. In this paper, we propose a visual composite recognition model to automatically discover and recognize the visual composite of an image. Our model firstly learns 3D spatial co-occurrence statistics among objects to discover the potential structured visual pattern of an image so that it captures the component objects of visual composite. Secondly, we construct a feedforward architecture using the proposed factored three-way interaction machine to recognize the visual composite, which casts the recognition problem as a structured prediction task. It predicts the visual composite by maximizing the probability of the correct structured label given the component objects and their 3D spatial context. Experiments conducted on a six-class sports dataset and a phrasal recognition dataset respectively demonstrate the encouraging performance of our model in discovery precision and recognition accuracy compared with competing approaches.","PeriodicalId":6539,"journal":{"name":"2015 International Joint Conference on Neural Networks (IJCNN)","volume":"1 1","pages":"1-8"},"PeriodicalIF":0.0,"publicationDate":"2015-07-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82959361","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Polyphonic sound event detection using multi label deep neural networks
Pub Date: 2015-07-12 · DOI: 10.1109/IJCNN.2015.7280624 · Pages: 1-7
Emre Çakir, T. Heittola, H. Huttunen, T. Virtanen
In this paper, the use of multi-label deep neural networks is proposed for the detection of temporally overlapping sound events in realistic environments. Real-life sound recordings typically contain many overlapping sound events, making it hard to recognize each event with standard sound event detection methods. Frame-wise spectral-domain features are used as inputs to train a deep neural network for multi-label classification. The model is evaluated on recordings from realistic everyday environments, and the obtained overall accuracy is 63.8%. The method is compared against a state-of-the-art method that uses non-negative matrix factorization as a pre-processing stage and hidden Markov models as a classifier. The proposed method improves the overall accuracy by 19 percentage points.
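The multi-label output stage works by attaching an independent sigmoid to each event class, so several events can be active in the same frame, unlike a softmax which forces one class per frame. The sketch below shows that detection step for a trained output layer; the 0.5 decision threshold is an illustrative choice.

```python
import numpy as np

def detect_events(features: np.ndarray, W: np.ndarray, b: np.ndarray,
                  threshold: float = 0.5) -> np.ndarray:
    # Multi-label detection head: one sigmoid per event class applied to
    # frame-wise features (frames x dims). W and b are assumed trained
    # output-layer parameters; returns a boolean (frames x classes) mask.
    probs = 1.0 / (1.0 + np.exp(-(features @ W + b)))
    return probs > threshold
```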
{"title":"Polyphonic sound event detection using multi label deep neural networks","authors":"Emre Çakir, T. Heittola, H. Huttunen, T. Virtanen","doi":"10.1109/IJCNN.2015.7280624","DOIUrl":"https://doi.org/10.1109/IJCNN.2015.7280624","url":null,"abstract":"In this paper, the use of multi label neural networks are proposed for detection of temporally overlapping sound events in realistic environments. Real-life sound recordings typically have many overlapping sound events, making it hard to recognize each event with the standard sound event detection methods. Frame-wise spectral-domain features are used as inputs to train a deep neural network for multi label classification in this work. The model is evaluated with recordings from realistic everyday environments and the obtained overall accuracy is 63.8%. The method is compared against a state-of-the-art method using non-negative matrix factorization as a pre-processing stage and hidden Markov models as a classifier. The proposed method improves the accuracy by 19% percentage points overall.","PeriodicalId":6539,"journal":{"name":"2015 International Joint Conference on Neural Networks (IJCNN)","volume":"6 1","pages":"1-7"},"PeriodicalIF":0.0,"publicationDate":"2015-07-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80707065","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Joint adaptive loss and l2/l0-norm minimization for unsupervised feature selection
Pub Date: 2015-07-12 · DOI: 10.1109/IJCNN.2015.7280307 · Pages: 1-8
Mingjie Qian, ChengXiang Zhai
Unsupervised feature selection is a useful tool for reducing the complexity and improving the generalization performance of data mining tasks. In this paper, we propose an Adaptive Unsupervised Feature Selection (AUFS) algorithm with explicit l2/l0-norm minimization. We use a joint adaptive loss for data fitting and an l2/l0-norm minimization for feature selection. We solve the optimization problem with an efficient iterative algorithm and prove that all the expected properties of unsupervised feature selection are preserved. We also show that the computational complexity and memory use are only linear in the number of instances and quadratic in the number of clusters. Experiments show that our algorithm outperforms state-of-the-art methods on seven different benchmark data sets.
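An l2/l0 constraint amounts to row sparsity: at most k feature rows of the weight matrix may be non-zero. The projection below, which keeps the k rows with the largest l2 norms, is the kind of step an l2/l0-constrained solver alternates with loss minimization; it is a sketch of that one operation, not the paper's full algorithm.

```python
import numpy as np

def project_l20(W: np.ndarray, k: int):
    # Project W (features x clusters) onto the l2/l0 constraint set by
    # keeping the k rows with the largest l2 norms and zeroing the rest.
    row_norms = np.linalg.norm(W, axis=1)
    keep = np.argsort(row_norms)[-k:]        # indices of selected features
    mask = np.zeros(W.shape[0], dtype=bool)
    mask[keep] = True
    return np.where(mask[:, None], W, 0.0), keep
```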
{"title":"Joint adaptive loss and l2/l0-norm minimization for unsupervised feature selection","authors":"Mingjie Qian, ChengXiang Zhai","doi":"10.1109/IJCNN.2015.7280307","DOIUrl":"https://doi.org/10.1109/IJCNN.2015.7280307","url":null,"abstract":"Unsupervised feature selection is a useful tool for reducing the complexity and improving the generalization performance of data mining tasks. In this paper, we propose an Adaptive Unsupervised Feature Selection (AUFS) algorithm with explicit l2/l0-norm minimization. We use a joint adaptive loss for data fitting and a l2/l0 minimization for feature selection. We solve the optimization problem with an efficient iterative algorithm and prove that all the expected properties of unsupervised feature selection can be preserved. We also show that the computational complexity and memory use is only linear to the number of instances and square to the number of clusters. Experiments show that our algorithm outperforms the state-of-the-arts on seven different benchmark data sets.","PeriodicalId":6539,"journal":{"name":"2015 International Joint Conference on Neural Networks (IJCNN)","volume":"52 1","pages":"1-8"},"PeriodicalIF":0.0,"publicationDate":"2015-07-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80769416","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
DeepSign: Deep learning for automatic malware signature generation and classification
Pub Date: 2015-07-12 · DOI: 10.1109/IJCNN.2015.7280815 · Pages: 1-8
Omid David, N. Netanyahu
This paper presents a novel deep-learning-based method for automatic malware signature generation and classification. The method uses a deep belief network (DBN), implemented as a deep stack of denoising autoencoders, to generate an invariant compact representation of malware behavior. While conventional signature- and token-based methods for malware detection fail to detect a majority of new variants of existing malware, the results presented in this paper show that signatures generated by the DBN allow accurate classification of new malware variants. Using a dataset containing hundreds of variants of several major malware families, our method achieves 98.6% classification accuracy with the signatures generated by the DBN. The presented method is completely agnostic to the type of malware behavior that is logged (e.g., API calls and their parameters, registry entries, websites and ports accessed, etc.), and can use any raw input from a sandbox to successfully train the deep neural network used to generate malware signatures.
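The signature-generation path can be sketched in two steps: turn a raw sandbox log into a fixed-length vector, then push it through the trained encoder stack and read off the top-layer activation as the signature. The tokenization, vocabulary construction, and sigmoid nonlinearity below are assumptions made for illustration.

```python
import numpy as np

def log_to_vector(tokens, vocab):
    # Turn a raw sandbox log (a token sequence) into a fixed-length binary
    # presence vector; this tokenization and vocabulary are hypothetical.
    v = np.zeros(len(vocab), dtype=np.float32)
    for t in tokens:
        if t in vocab:
            v[vocab[t]] = 1.0
    return v

def signature(v, weights, biases):
    # Pass the vector through the trained encoder stack; the top-layer
    # activation serves as the compact malware signature. Sigmoid layers
    # are an assumption about the stack's nonlinearity.
    h = v
    for W, b in zip(weights, biases):
        h = 1.0 / (1.0 + np.exp(-(h @ W + b)))
    return h
```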
{"title":"DeepSign: Deep learning for automatic malware signature generation and classification","authors":"Omid David, N. Netanyahu","doi":"10.1109/IJCNN.2015.7280815","DOIUrl":"https://doi.org/10.1109/IJCNN.2015.7280815","url":null,"abstract":"This paper presents a novel deep learning based method for automatic malware signature generation and classification. The method uses a deep belief network (DBN), implemented with a deep stack of denoising autoencoders, generating an invariant compact representation of the malware behavior. While conventional signature and token based methods for malware detection do not detect a majority of new variants for existing malware, the results presented in this paper show that signatures generated by the DBN allow for an accurate classification of new malware variants. Using a dataset containing hundreds of variants for several major malware families, our method achieves 98.6% classification accuracy using the signatures generated by the DBN. The presented method is completely agnostic to the type of malware behavior that is logged (e.g., API calls and their parameters, registry entries, websites and ports accessed, etc.), and can use any raw input from a sandbox to successfully train the deep neural network which is used to generate malware signatures.","PeriodicalId":6539,"journal":{"name":"2015 International Joint Conference on Neural Networks (IJCNN)","volume":"61 1","pages":"1-8"},"PeriodicalIF":0.0,"publicationDate":"2015-07-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79602876","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Si elegans: Hardware architecture and communications protocol
Pub Date: 2015-07-12 · DOI: 10.1109/IJCNN.2015.7280771 · Pages: 1-8
Pedro Machado, Kofi Appiah, T. McGinnity, J. Wade
The hardware layer of the Si elegans EU FP7 project is a massively parallel architecture designed to accurately emulate the C. elegans nematode in biological real-time. The C. elegans nematode is one of the simplest and best-characterized Biological Nervous Systems (BNS), yet many questions related to basic functions such as movement and learning remain unanswered. The hardware layer includes a Hardware Neural Network (HNN) composed of 302 FPGAs (one per neuron), a Hardware Muscle Network (HMN) composed of 27 FPGAs (one per 5 muscles), and one Interface Manager FPGA, physically connected through 2 Local Area Networks (LANs) and through an innovative 3D optical connectome. Neuron structures (gap junctions and synapses) and muscles are modelled in the design environment of the software layer, and their simulation data (spikes, variable values and parameters) generate data packets sent across the LANs. Furthermore, the software layer gives the user a set of design tools providing the flexibility and high-level hardware abstraction required to design custom neuronal models. In this paper the authors present an overview of the hardware layer, its connection infrastructure, and its communication protocol.
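The abstract does not specify the packet layout used on the LANs, so the sketch below shows one hypothetical encoding of a spike event with fixed-width network-order fields; the field choices and sizes are assumptions, not the project's protocol.

```python
import struct

# Hypothetical spike-packet layout (not the project's actual format):
# network byte order, neuron id (uint16), simulation timestep (uint32),
# a value such as a membrane variable (float32).
SPIKE_FMT = "!HIf"

def pack_spike(neuron_id: int, timestep: int, value: float) -> bytes:
    return struct.pack(SPIKE_FMT, neuron_id, timestep, value)

def unpack_spike(payload: bytes):
    return struct.unpack(SPIKE_FMT, payload)
```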
{"title":"Si elegans: Hardware architecture and communications protocol","authors":"Pedro Machado, Kofi Appiah, T. McGinnity, J. Wade","doi":"10.1109/IJCNN.2015.7280771","DOIUrl":"https://doi.org/10.1109/IJCNN.2015.7280771","url":null,"abstract":"The hardware layer of the Si elegans EU FP7 project is a massively parallel architecture designed to accurately emulate the C. elegans nematode in biological real-time. The C. elegans nematode is one of the simplest and well characterized Biological Nervous Systems (BNS) yet many questions related to basic functions such as movement and learning remain unanswered. The hardware layer includes a Hardware Neural Network (HNN) composed of 302 FPGAs (one per neuron), a Hardware Muscle Network (HMN) composed of 27 FPGAs (one per 5 muscles) and one Interface Manager FPGA, which is physically connected through 2 Local Area Networks (LANs) and through an innovative 3D optical connectome. Neuron structures (gap junctions and synapses) and muscles are modelled in the design environment of the software layer and their simulation data (spikes, variable values and parameters) generate data packets sent across the Local Area Networks (LAN). Furthermore, a software layer gives the user a set of design tools giving the required flexibility and high level hardware abstraction to design custom neuronal models. In this paper the authors present an overview of the hardware layer, connections infrastructure and communication protocol.","PeriodicalId":6539,"journal":{"name":"2015 International Joint Conference on Neural Networks (IJCNN)","volume":"27 1","pages":"1-8"},"PeriodicalIF":0.0,"publicationDate":"2015-07-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83389000","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}