Adaptive array antenna system in cancellation of jammer and noise of wireless link
Pub Date: 2009-12-01 | DOI: 10.1109/ICCIT.2009.5407257
M. Islam, Md. Golam Gaus, A. Das, Mushlah Uddin Sarkar, M. Amin
Single-element antennas offer very little control over the antenna gain pattern. To obtain a desired directivity, beam shape, and steerable beam, array antennas are widely used in wireless networks. The relative magnitudes of the feed currents, the relative phases, the separation between antenna elements, and the geometrical configuration of the array together determine the overall radiation pattern. The weighting factor of each antenna element is governed by an adaptive algorithm, driven by the input signal and the desired signal, to achieve dynamic shaping of the antenna beam. In this paper, both single-element and multiple-element adaptive array antenna systems are used to tune the gain so that it is enhanced in the direction of the desired signal and reduced in the directions of interference or jamming signals.
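The adaptive weighting described here is commonly realized with a least-mean-squares (LMS) style update. Below is a minimal, illustrative sketch for a uniform linear array; the array geometry, step size `mu`, and signal model are assumptions made for the example, not the authors' implementation.

```python
import numpy as np

def steering_vector(theta_deg, n_elements, spacing=0.5):
    """Array response of a uniform linear array (element spacing in wavelengths)."""
    theta = np.deg2rad(theta_deg)
    k = np.arange(n_elements)
    return np.exp(1j * 2 * np.pi * spacing * k * np.sin(theta))

def lms_beamformer(X, desired, mu=0.005):
    """Complex LMS: adapt element weights so the array output tracks the
    desired signal, steering gain toward it and nulls toward jammers.
    X is (n_snapshots, n_elements); desired has length n_snapshots."""
    w = np.zeros(X.shape[1], dtype=complex)
    for t in range(X.shape[0]):
        y = np.vdot(w, X[t])            # array output w^H x(t)
        e = desired[t] - y              # error against the desired signal
        w += mu * np.conj(e) * X[t]     # LMS weight update
    return w

# Toy scenario: desired signal from 0 degrees, jammer from 40 degrees.
rng = np.random.default_rng(0)
n, m = 2000, 8
s = np.exp(1j * 2 * np.pi * 0.01 * np.arange(n))           # desired waveform
j = rng.standard_normal(n) + 1j * rng.standard_normal(n)   # jammer waveform
X = (np.outer(s, steering_vector(0, m))
     + np.outer(j, steering_vector(40, m))
     + 0.1 * (rng.standard_normal((n, m)) + 1j * rng.standard_normal((n, m))))
w = lms_beamformer(X, s)
print(abs(np.vdot(w, steering_vector(0, m))),   # gain toward desired: high
      abs(np.vdot(w, steering_vector(40, m))))  # gain toward jammer: near zero
```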
{"title":"Adaptive array antenna system in cancellation of jammer and noise of wireless link","authors":"M. Islam, Md. Golam Gaus, A. Das, Mushlah Uddin Sarkar, M. Amin","doi":"10.1109/ICCIT.2009.5407257","DOIUrl":"https://doi.org/10.1109/ICCIT.2009.5407257","url":null,"abstract":"Single element antennas have very little capability of variation of antenna gain pattern. For a desired directivity, shape of beam and steer able beam, array antenna is widely used in wireless network. Relative magnitude of feed currents, relative phases or separation between antenna elements, geometrical configuration of array are responsible for the overall radiation pattern. The weighting factor of each antenna element is governed by an adaptive algorithm based on input signal and desired signal to achieve dynamic shaping of antenna beam. In this paper, both single and multiple elements adaptive array antenna system is used to tune the gain in such a way that the gain is enhanced in the direction of desired signal and reduced in the direction of interference or jamming signals.","PeriodicalId":443258,"journal":{"name":"2009 12th International Conference on Computers and Information Technology","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131585454","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Impact of optical fiber dispersion and self phase modulation on the performance of DS-OCDMA
Pub Date: 2009-12-01 | DOI: 10.1109/ICCIT.2009.5407162
Md. Jahedul Islam, M. R. Talukdar, Md. Rafiqul Islam
The performance of direct sequence optical code division multiple access (DS-OCDMA) with cascaded optical amplifiers is analytically investigated in the presence of fiber group velocity dispersion (GVD) and self-phase modulation (SPM). In our analysis, Gaussian-shaped optical orthogonal codes are employed as address sequences, and an avalanche photodiode is used in an optical correlator receiver. The signal-to-noise power ratio of the proposed system is evaluated, accounting for receiver noise, optical amplifier noise, and multi-user access interference noise. The system performance is determined as a function of optical signal power, code length, code weight, number of simultaneous users, and fiber length. The power penalty suffered by the system is evaluated at a bit error rate (BER) of 10⁻⁹. The numerical results show that the BER performance of the system is highly dependent on the signal input power, bit rate, and fiber length. It is found that the performance of the proposed system can be improved by the combined effect of GVD and SPM.
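The power penalty is read off at a BER of 10⁻⁹. Under the usual Gaussian approximation, that target corresponds to a Q-factor of about 6; the sketch below shows only this standard Q-to-BER mapping with illustrative inputs, not the paper's full SNR model.

```python
from math import erfc, sqrt

def q_factor(i1, i0, sigma1, sigma0):
    """Q-factor from mark/space photocurrents and their noise standard
    deviations (receiver, amplifier, and interference noise lumped together)."""
    return (i1 - i0) / (sigma1 + sigma0)

def ber_from_q(q):
    """Gaussian-approximation BER for on-off keying."""
    return 0.5 * erfc(q / sqrt(2))

print(ber_from_q(6.0))  # ~1e-9, the BER target at which the penalty is evaluated
```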
{"title":"Impact of optical fiber dispersion and self phase modulation on the performance of DS-OCDMA","authors":"Md. Jahedul Islam, M. R. Talukdar, Md. Rafiqul Islam","doi":"10.1109/ICCIT.2009.5407162","DOIUrl":"https://doi.org/10.1109/ICCIT.2009.5407162","url":null,"abstract":"The performance of direct sequence optical code division multiple access with cascaded optical amplifiers is analytically investigated in presence of fiber group velocity dispersion (GVD) and self phase modulation (SPM). In our analysis, Gaussian-shaped optical orthogonal codes are employed as address sequence and avalanche photodiode is used in an optical correlator receiver. The signal to noise power for the proposed system is evaluated on account of receiver, optical amplifier and multiuser access interference noises. The system performance is determined as a function of optical signal power, code length, code weight, number of simultaneous users, and fiber length. The power penalty suffered by the system is evaluated at bit error rate (BER) of 10−9. The numerical results show that the BER performance of the system is highly dependent on the signal input power, bit rate, and fiber length. It is found that the performance of the proposed system can be improved by the combined effect of GVD and SPM.","PeriodicalId":443258,"journal":{"name":"2009 12th International Conference on Computers and Information Technology","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130588416","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A reduced complexity message passing algorithm with improved performance for LDPC decoding
Pub Date: 2009-12-01 | DOI: 10.1109/ICCIT.2009.5407173
Vikram Arkalgud Chandrasetty, S. M. Aziz
In this paper, a simplified message passing algorithm for decoding Low-Density Parity-Check (LDPC) codes is proposed with a view to reducing implementation complexity. The algorithm is based on simple hard-decision decoding techniques while exploiting soft channel information to improve decoder performance. The algorithm has been validated through simulation using an LDPC code compliant with the Wireless Local Area Network (WLAN, IEEE 802.11n) standard. The results show that the proposed algorithm achieves significant improvements in bit error rate (BER) performance and in the average number of decoding iterations compared with fully hard-decision based decoding algorithms.
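For context, a plain hard-decision bit-flipping decoder seeded from soft channel values is the kind of baseline such algorithms refine. The sketch below is an illustrative Gallager-style flipping loop, not the paper's exact message passing rule.

```python
import numpy as np

def bit_flip_decode(H, llr, max_iter=50):
    """Hard-decision bit-flipping LDPC decoding, seeded from soft LLRs.
    H is an (m, n) binary parity-check matrix; a negative LLR maps to bit 1.
    Returns the codeword estimate and the number of iterations used."""
    x = (llr < 0).astype(int)            # hard decision from the soft input
    for it in range(max_iter):
        syndrome = (H @ x) % 2           # unsatisfied parity checks
        if not syndrome.any():
            return x, it                 # valid codeword: stop early
        votes = H.T @ syndrome           # failed-check count per bit
        x[votes == votes.max()] ^= 1     # flip the most-suspect bit(s)
    return x, max_iter
```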
{"title":"A reduced complexity message passing algorithm with improved performance for LDPC decoding","authors":"Vikram Arkalgud Chandrasetty, S. M. Aziz","doi":"10.1109/ICCIT.2009.5407173","DOIUrl":"https://doi.org/10.1109/ICCIT.2009.5407173","url":null,"abstract":"In this paper, a simplified message passing algorithm for decoding Low-Density Parity-Check (LDPC) codes is proposed with a view to reduce the implementation complexity. The algorithm is based on simple hard-decision decoding techniques while utilizing the advantages of soft channel information for improvement in decoder performance. The algorithm has been validated through simulation using LDPC code compliant with Wireless Local Area Network (WLAN -IEEE 802.11n) standard. The results show that the proposed algorithm can achieve significant improvement in bit error rate (BER) performance and average decoding iterations compared to fully hard-decision based decoding algorithms.","PeriodicalId":443258,"journal":{"name":"2009 12th International Conference on Computers and Information Technology","volume":"168 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124681553","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Neural network ensembles based on Artificial Training Examples
Pub Date: 2009-12-01 | DOI: 10.1109/ICCIT.2009.5407262
M. Akhand, P. C. Shill, K. Murase
Ensembles of several neural networks are widely used to improve generalization performance over a single network. Proper diversity among the component networks is considered an important requirement for ensemble construction, so that the failure of one network may be compensated for by the others. Data sampling, i.e., training different networks on different training sets, is the most investigated technique for achieving diversity. This paper presents a data-sampling-based neural network ensemble method in which each individual network is trained on the union of the original training set and a set of artificially generated examples. The generated examples differ from network to network and are what produces diversity among the networks. The effectiveness of the method is evaluated on a suite of 20 benchmark classification problems. The experimental results show that this ensemble method performs better than, or competitively with, existing popular methods.
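A minimal sketch of the idea, assuming the artificial examples are label-preserving Gaussian perturbations of random training points (the paper's generator may differ) and integer-coded class labels:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def train_ensemble(X, y, n_members=5, n_artificial=100, noise=0.1, seed=0):
    """Each member trains on the original data plus its own set of artificial
    examples (here: label-preserving Gaussian perturbations of random
    training points; an illustrative generator, not necessarily the paper's)."""
    rng = np.random.default_rng(seed)
    members = []
    for _ in range(n_members):
        idx = rng.integers(0, len(X), n_artificial)
        X_art = X[idx] + noise * rng.standard_normal((n_artificial, X.shape[1]))
        net = MLPClassifier(hidden_layer_sizes=(10,), max_iter=500)
        net.fit(np.vstack([X, X_art]), np.concatenate([y, y[idx]]))
        members.append(net)
    return members

def ensemble_predict(members, X):
    """Majority vote over the member networks (integer class labels assumed)."""
    votes = np.stack([m.predict(X) for m in members]).astype(int)
    return np.apply_along_axis(lambda c: np.bincount(c).argmax(), 0, votes)
```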
{"title":"Neural network ensembles based on Artificial Training Examples","authors":"M. Akhand, P. C. Shill, K. Murase","doi":"10.1109/ICCIT.2009.5407262","DOIUrl":"https://doi.org/10.1109/ICCIT.2009.5407262","url":null,"abstract":"Ensembles with several neural networks are widely used to improve the generalization performance over a single network. Proper diversity among component networks is considered an important parameter for ensemble construction so that failure of one may be compensated by others. Data sampling, i.e., different training sets for different networks, is the most investigated technique for diversity than other approaches. This paper presents a data sampling based neural network ensemble method where individual networks are trained on the union of original training set and a set of some artificially generated examples. Generated examples are different for different networks and are the element to produce diversity among the networks. The effectiveness of the method is evaluated on a suite of 20 benchmark classification problems. The experimental results show that the performance of this ensemble method is better or competitive with respect to the existing popular methods.","PeriodicalId":443258,"journal":{"name":"2009 12th International Conference on Computers and Information Technology","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116473297","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Raman gain spectrum equalization by using Polarization-diversity Loop Filter
Pub Date: 2009-12-01 | DOI: 10.1109/ICCIT.2009.5407160
M. A. Hossain, Md. Fahad Chowdhury, F. Ahmed
Raman amplifiers play an important role in wavelength division multiplexing (WDM) systems. Their drawback is that the Raman gain spectrum is flat only over a very narrow band, so the gain spectrum must be equalized to achieve the desired performance in WDM systems. This paper proposes an extrinsic gain equalization technique using a Polarization-diversity Loop Filter (PDLF). It is found that, by using the PDLF, an equalized bandwidth of 90 nm (1550 nm to 1640 nm) can be achieved with a gain ripple of 0.5 dB.
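The reported figure of merit is the peak-to-peak gain ripple over the equalized band. A toy computation of that quantity (the PDLF physics itself is not modeled here, and the example spectrum is an assumption):

```python
import numpy as np

def gain_ripple_db(gain_db):
    """Peak-to-peak ripple of a gain spectrum sampled in dB."""
    return float(np.max(gain_db) - np.min(gain_db))

# A spectrum equalized to within +/-0.25 dB across 1550-1640 nm exhibits
# the reported 0.5 dB ripple over the 90 nm band.
wavelength_nm = np.linspace(1550, 1640, 901)
gain_db = 10 + 0.25 * np.sin(2 * np.pi * (wavelength_nm - 1550) / 30)
print(gain_ripple_db(gain_db))  # ~0.5
```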
{"title":"Raman gain spectrum equalization by using Polarization-diversity Loop Filter","authors":"M. A. Hossain, Md. Fahad Chowdhury, F. Ahmed","doi":"10.1109/ICCIT.2009.5407160","DOIUrl":"https://doi.org/10.1109/ICCIT.2009.5407160","url":null,"abstract":"Raman amplifier plays an important role in wavelength division multiplexing (WDM) systems. The drawback of Raman amplifier is that the Raman gain spectrum is flat for very small narrowband. So, it is necessary to equalize the Raman gain spectrum to achieve the desire performance for the WDM systems. This paper has proposed an extrinsic gain equalization technique by using Polarization-diversity Loop Filter. It is found that by using PDLF, an equalize bandwidth of 90nm (1550nm - 1640nm) can be achieved with a gain ripple of 0.5dB.","PeriodicalId":443258,"journal":{"name":"2009 12th International Conference on Computers and Information Technology","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131079866","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Central Base-Station Controlled Density Aware Clustering Protocol for wireless sensor networks
Pub Date: 2009-12-01 | DOI: 10.1109/ICCIT.2009.5407170
M. Ferdous, J. Ferdous, T. Dey
As wireless sensor networks are built from sensor nodes with limited energy and sensing capabilities, a good routing protocol must be designed to make the network energy efficient. In this paper, we propose a centralized routing protocol called the Central Base Station Controlled Density Aware Clustering Protocol (CBCDACP), in which the base station centrally performs the cluster formation task. In this protocol, an optimum set of cluster heads is selected by a new cluster head selection algorithm that considers both the density of the sensor nodes and the minimum distances between each cluster head and its neighbouring nodes. The performance of CBCDACP is then compared with prevalent clustering-based schemes, namely Low Energy Adaptive Clustering Hierarchy (LEACH) and Centralized LEACH (LEACH-C). Simulation results show that CBCDACP improves system lifetime and energy efficiency over these schemes in terms of several simulation performance metrics.
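A sketch of a base-station-side selection step in the spirit of the protocol; the scoring by node density and inter-head spacing (and the use of residual energy) is illustrative, not the exact CBCDACP rule.

```python
import numpy as np

def select_cluster_heads(positions, energies, n_heads, radius=10.0):
    """Greedy base-station-side selection: favour dense, high-energy nodes
    and keep chosen heads at least `radius` apart. positions is (n, 2)."""
    d = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=2)
    density = (d < radius).sum(axis=1) - 1       # neighbours within radius
    order = np.argsort(-(density * energies))    # best candidates first
    heads = []
    for i in order:
        if all(d[i, h] >= radius for h in heads):
            heads.append(int(i))
        if len(heads) == n_heads:
            break
    return heads

rng = np.random.default_rng(1)
print(select_cluster_heads(rng.uniform(0, 100, (50, 2)),
                           rng.uniform(0.5, 1.0, 50), n_heads=5))
```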
{"title":"Central Base-Station Controlled Density Aware Clustering Protocol for wireless sensor networks","authors":"M. Ferdous, J. Ferdous, T. Dey","doi":"10.1109/ICCIT.2009.5407170","DOIUrl":"https://doi.org/10.1109/ICCIT.2009.5407170","url":null,"abstract":"As wireless sensor networks are equipped with sensor nodes which have a limited energy and sensing capabilities, a good routing protocol must be designed to make the network energy efficient. In this paper, we propose a centralized routing protocol called Central Base Station Controlled Density Aware Clustering Protocol (CBCDACP) where the base station centrally performs the cluster formation task. In this protocol, an optimum set of cluster heads are selected by using a new cluster head selection algorithm focusing on both the density of the sensor nodes and the minimum distances among the cluster head and its neighbor nodes. The performance of CBCDACP is then compared with some prevalent clustering-based schemes such as Low Energy Adaptive Clustering Hierarchy (LEACH), Centralized LEACH (LEACH-C). Simulation results show that CBCDACP can improve system life time and energy efficiency in terms of different simulation performance metrics over its comparatives.","PeriodicalId":443258,"journal":{"name":"2009 12th International Conference on Computers and Information Technology","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132321442","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A recommended market research based approach for small software companies for improving systematic reuse capability in delivering customized software solutions
Pub Date: 2009-12-01 | DOI: 10.1109/ICCIT.2009.5407289
Kiriti Prasad Choudhury, M. Rokonuzzaman
Small software companies can neither be typical software product companies like Microsoft, nor can they afford to develop each customized application for an individual customer from a clean slate without considering reuse. Systematic reuse is an opportunity for continued cost reduction and quality improvement in software delivery. Small software companies in Bangladesh and the rest of the world need to exploit this opportunity to deal with ever-increasing competitive market forces. Systematic reuse largely depends on the scope for delivering customized software applications in the same market segment repeatedly to multiple customers. Understanding the market well enough to define a generic software application concept, which will be customized to meet individual customers' demands and expectations profitably, is difficult. Thorough market analysis provides the basic inputs for defining a generic product concept around which customized software applications can be launched as services targeting suitable market segments. Choosing an appropriate market research methodology is challenging in a context where the market forces for software applications are changing rapidly. A successful new service launch around customized application delivery, targeting an attractive market segment, is thought by many to be the key to business growth and profitability. The problem of establishing a successful new business around a generic software product concept is challenging not because of a shortage of ideas, but because of the difficulty of properly analyzing the market, studying different market segments, targeting the attractive segment, minimizing development expenses, pricing the product appropriately, adopting reuse capability for continued price reduction and quality improvement in the face of evolving market forces, and marketing the new product. This paper therefore suggests applying market research methodology to screen new software application ideas based on market analysis, and shows how a software company can combine market research with new software product development to provide exciting customized software applications that better meet consumer requirements and make the company profitable. Both a state-of-the-art review and a field investigation were performed to assess global as well as local practices of market research methodology in different industries, including the software industry. From the review outputs and the field-level investigation findings, a set of recommendations has been derived for small software companies on practicing market research methodology, aimed at improving systematic-reuse-based, sustained capability in delivering customized software applications to attractive market segments in an increasingly profitable manner.
{"title":"A recommended market research based approach for small software companies for improving systematic reuse capability in delivering customized software solutions","authors":"Kiriti Prasad Choudhury, M. Rokonuzzaman","doi":"10.1109/ICCIT.2009.5407289","DOIUrl":"https://doi.org/10.1109/ICCIT.2009.5407289","url":null,"abstract":"Small software companies can neither be typical software product company like Microsoft, nor afford to develop each customized application for individual customer from clean slate without taking into consideration of reuse. Systematic reuse is an opportunity of continued cost reduction and quality improvement in software delivery. Small software companies in Bangladesh and the rest of the world need to exploit this opportunity to deal with ever increasing competitive market forces. Systematic reuse largely depends on the scope of delivering customized software applications in the same market segment repeatedly to multiple customers. Understanding the market for defining the generic software application concept which will be customized for meeting individual customer's demand and expectation in a profitable manner is difficult. Thorough market analysis provides basic inputs for defining generic product concept for launching customized software application for delivering services targeting suitable market segments. Choosing an appropriate market research methodology is challenging within the context where the market forces for software application are rapidly changing. And a successful new service launch around customized application delivery by targeting an attractive market segment is thought by many to be the key to business growth and profitability. The problem of establishing a successful new business around a generic software product concept is not challenging because of shortage of ideas, but rather problems exist in proper analysis of the market, studying different market segments, targeting the attractive segment, minimizing development expenses, pricing product appropriately, adopting reuse capability for continued price reduction and quality improvement to deal with evolving market forces and marketing the new product. This paper, therefore, suggests the application of market research methodology to screen new software application ideas based on market analysis and shows how a software company can combine market research with new software product development to provide exciting customized software applications that better meet consumer requirements and make the company profitable. Both state-of-art-review and filed investigation have been performed to assess global as well as local practices of market research methodology in different industries including the software industry. 
Upon analysis of review outputs and field level investigation findings, a set of recommendations for practicing market research methodology for small software companies have been derived for improving systematic reuse based sustained capability improvement in delivering customized software applications in attractive market segments in an increasing profitable manner.","PeriodicalId":443258,"journal":{"name":"2009 12th International Conference on Computers and Information Technology","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114912771","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Integrated data warehousing for telecommunication industries
Pub Date: 2009-12-01 | DOI: 10.1109/ICCIT.2009.5407317
M. Hossain, T. Azim, Md. Yasser Karim, A. Hoque
Data warehousing provides an excellent opportunity to transform operational data into useful, reliable information that supports decision making in any organization. A Data Warehouse (DW) generalizes and consolidates multidimensional (MD) data, and has therefore become an important platform for Online Analytical Processing (OLAP), which is based on an MD data model. In this paper, we propose an integrated data warehouse system for the telecommunication companies in Bangladesh. Our integrated DW provides a common framework based on temporal data from the different telecommunication operators rendering their services. We develop a dimensional model architecture together with data extraction, transformation, and loading techniques that provide analytical options through the implementation of relational OLAP.
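As a toy illustration of the consolidation step, the following sketch loads call records from multiple operators into a small star-schema-like structure; the schema, field names, and measures are assumptions for the example, not the paper's dimensional model.

```python
from collections import defaultdict

def load_star_schema(call_records):
    """Consolidate call records from several operators into one fact table
    keyed by shared operator and time dimensions, with call-count and
    total-minutes measures."""
    dim_operator, dim_time = {}, {}
    fact = defaultdict(lambda: [0, 0.0])
    for rec in call_records:
        op_key = dim_operator.setdefault(rec["operator"], len(dim_operator))
        t_key = dim_time.setdefault(rec["date"], len(dim_time))
        fact[(op_key, t_key)][0] += 1                    # call count
        fact[(op_key, t_key)][1] += rec["duration_min"]  # total minutes
    return dim_operator, dim_time, dict(fact)

records = [
    {"operator": "OpA", "date": "2009-12-01", "duration_min": 3.5},
    {"operator": "OpB", "date": "2009-12-01", "duration_min": 1.0},
    {"operator": "OpA", "date": "2009-12-01", "duration_min": 2.0},
]
print(load_star_schema(records))
```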
{"title":"Integrated data warehousing for telecommunication industries","authors":"M. Hossain, T. Azim, Md. Yasser Karim, A. Hoque","doi":"10.1109/ICCIT.2009.5407317","DOIUrl":"https://doi.org/10.1109/ICCIT.2009.5407317","url":null,"abstract":"Data warehousing provides an excellent opportunity in transforming operational data into useful and reliable information to support the decision making process in any organization. Data Warehouse (DW) generalizes and consolidates multidimensional (MD) data. Hence, DW has become an important platform for Online Analytical Processing (OLAP) which is based on a MD data model. In this paper, we propose an integrated data warehouse system for the telecommunication companies in Bangladesh .Our integrated DW provides a common framework based on temporal data from different telecommunication operators rendering their services. We develop a dimensional model architecture, data extraction methodology, transformation and loading techniques which provides analytical options with the implementation of relational OLAP.","PeriodicalId":443258,"journal":{"name":"2009 12th International Conference on Computers and Information Technology","volume":"58 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134493365","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Geometric model for minimizing node discovery load in network simulators for large ad hoc networks
Pub Date: 2009-12-01 | DOI: 10.1109/ICCIT.2009.5407151
Roksana Akter, M. L. Rahman
The area of wireless ad hoc networking has been receiving increasing attention among researchers in recent years. NS2 is the most commonly used simulator for wireless ad hoc networks, but it does not scale well when a simulation area contains a large number of nodes. In this paper, a geometric approach is proposed that targets the region lying between the approximated processing area and the area outside a transmitter's coverage. The proposed algorithm thereby reduces the number of unaffected nodes (those outside the transmission signal range) that the NS2 simulator currently checks to determine whether each node resides inside the transmission area. The current version of NS2 uses a block-based optimization; the algorithm proposed here shrinks the coverage area of the blocks near the boundary of the transmission range, improving NS2's performance for simulations of large ad hoc networks. These theoretical assumptions were followed by extensive, realistic test conditions that generated a sensible result set for obtaining the optimum from the proposed physical propagation model, giving NS2 faster simulation performance for large wireless ad hoc mobile networks. The proposed approach reduces the scanned coverage area by at least 12.5% and by up to 78.15% compared with the existing solution.
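The saving quantified here comes from not scanning whole grid blocks that only graze the transmission circle. The sketch below computes how much extra area a purely block-based lookup scans relative to the circle itself; it is an illustrative calculation under assumed radius and block sizes, not NS2 code.

```python
from math import pi

def block_scan_overhead(radius, block):
    """Total area of all grid blocks touching a transmission circle, expressed
    as the fractional excess over the circle's own area. This excess is the
    boundary waste a block-based node lookup incurs."""
    n = int(radius // block) + 2
    area = 0.0
    for i in range(-n, n):
        for j in range(-n, n):
            x0, y0 = i * block, j * block
            nx = max(x0, min(0.0, x0 + block))  # point of the block nearest
            ny = max(y0, min(0.0, y0 + block))  # to the transmitter at (0, 0)
            if nx * nx + ny * ny <= radius * radius:
                area += block * block
    return area / (pi * radius ** 2) - 1.0

print(f"{block_scan_overhead(250.0, 50.0):.1%} extra area scanned")
```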
{"title":"Geometric model for minimizing node discovery load in network simulators for large ad hoc networks","authors":"Roksana Akter, M. L. Rahman","doi":"10.1109/ICCIT.2009.5407151","DOIUrl":"https://doi.org/10.1109/ICCIT.2009.5407151","url":null,"abstract":"The area of wireless ad hoc networking has been receiving increasing attention among researchers in recent years. NS2 is the most common simulator which is used to simulate wireless ad hoc networks. But, NS2 does not scale the simulation well, where there are a large number of nodes in a simulation area. In this paper a geometric approach is proposed targeting the optimization of the area that exists between the approximated area for processing and the area outside the coverage area of a transmitter. Thus, this proposed algorithm reduces the number of unaffected nodes (considering the transmission signal range), which are currently considered by the NS2 simulator for checking, whether the node resides inside the transmission area or not. The current version of NS2 uses a block based optimization but the algorithm, proposed here, reduces the coverage area of the blocks near the boundary of the transmission range, targeting the improvement of the performance of NS2 for the simulation of large ad hoc networks. These theoretic assumptions have been followed by extensive realistic test conditions for generating a set of sensible result-set for achieving the optimum from the proposed physical propagation model to facilitate NS2 with a faster simulation performance for larger wireless ad hoc mobile network. The proposed approach saves the coverage area from at least 12.5% upto 78.15% than in existing solution.","PeriodicalId":443258,"journal":{"name":"2009 12th International Conference on Computers and Information Technology","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133421583","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multicore cluster implementations of hierarchical Bayesian cortical models
Pub Date: 2009-12-01 | DOI: 10.1109/ICCIT.2009.5407276
Pavan Yalamanchili, T. Taha
We examine the parallelization of two recent biologically inspired hierarchical Bayesian cortical models onto two clusters built from multicore processors. The models examined have been developed recently based on new insights from neuroscience and have several advantages over traditional neural network models. In particular, they need far fewer network nodes to simulate a biological-scale cortical system than traditional neural network models, making them computationally more efficient. The two architectures examined are the Sony/Toshiba/IBM Cell BE and the Intel quad-core Xeon processors. Our results indicate that optimized implementations of the models on clusters of multicore processors can provide significant speedups, and that such clusters are a promising approach for developing large-scale simulations of the models. We show that for small-scale implementations of the models, multicore clusters can provide speedups of about 850 times over serial implementations on the Cell Power Processor Unit.
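A coarse-grained sketch of the partitioning strategy, distributing per-node updates of one model layer across worker processes; the node update itself is a placeholder, not the Bayesian cortical models' actual computation.

```python
from multiprocessing import Pool
import numpy as np

def update_node(args):
    """Placeholder per-node update: a weighted belief aggregation followed by
    normalization (the real cortical-model node maths is more involved)."""
    weights, child_beliefs = args
    b = weights @ child_beliefs
    return b / b.sum()

def parallel_layer_update(layer, n_workers=4):
    """Distribute one layer's node updates across worker processes, mirroring
    the coarse-grained node partitioning a multicore cluster exploits."""
    with Pool(n_workers) as pool:
        return pool.map(update_node, layer)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    layer = [(rng.random((8, 32)), rng.random(32)) for _ in range(1000)]
    print(len(parallel_layer_update(layer)))
```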