Pub Date: 2014-02-01. DOI: 10.1109/ICICES.2014.7033900
N. Partheeban, N. Sankarram
E-learning satisfies the thirst for knowledge and, compared with traditional learning, offers online content that can be delivered to learners anywhere, at any time, and at any age through a wide range of e-learning solutions. It also provides rapid access to specific knowledge and information. With the rapid growth of information sources and increasing time constraints, learning methodology has changed: learners obtain knowledge through e-learning systems rather than through conventional teaching. This paper proposes an e-learning management system built on a web-services-oriented framework and service-oriented architecture (SOA). The system supports cross-browser operation and integrates fully with different databases. It is organized around several features, namely content management, content protection, learning management, delivery management, evaluation management, and access control, and focuses on the integrated platform needed for e-learning and its management.
{"title":"e-Learning management system using web services","authors":"N. Partheeban, N. Sankarram","doi":"10.1109/ICICES.2014.7033900","DOIUrl":"https://doi.org/10.1109/ICICES.2014.7033900","url":null,"abstract":"E-learning fulfils the thirst of knowledge and offers online content that can be delivered for the learner at anywhere, anytime and any age through a wide range of e-learning solution while compared with traditional learning system. It also provides the rapid access to specific knowledge and information. With the rapid growth of voluminous information sources and the time constraint the learning methodology has changed. Learners obtain knowledge through e-Learning systems rather than manually teaching and learning. In this research paper proposes the e-learning management system with web services oriented frame work and SOA. This system supports the cross browser and fully integrated with different databases. This system focused around the several features namely Content Management, Content Protection, Learning Management, Delivery Management, Evaluation management, Access Control, etc., and mainly focused on integrated platform needed for e-learning and managements.","PeriodicalId":13713,"journal":{"name":"International Conference on Information Communication and Embedded Systems (ICICES2014)","volume":"71 1","pages":"1-7"},"PeriodicalIF":0.0,"publicationDate":"2014-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89280987","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2014-02-01. DOI: 10.1109/ICICES.2014.7034192
Nisha M. Katre, M. Madankar
Wireless sensor and actor networks (WSANs) employ actor nodes within a wireless sensor network (WSN) that can process the sensed data and perform appropriate actions. For the best response, inter-actor coordination is required, and the deployed actors should form and maintain a connected inter-actor network at all times. WSANs often operate in harsh environments where actors can easily fail or be damaged. Such failures can partition the inter-actor network and eventually render the network useless. To handle such failures, one effective recovery methodology is to autonomously reposition a subset of the nodes to restore connectivity. Contemporary recovery schemes, however, generally either impose high node-relocation overhead or lengthen some of the inter-actor data paths. This paper provides an overview of such fault-tolerance algorithms.
{"title":"Review on fault tolerance with minimal topology changes in WSAN","authors":"Nisha M. Katre, M. Madankar","doi":"10.1109/ICICES.2014.7034192","DOIUrl":"https://doi.org/10.1109/ICICES.2014.7034192","url":null,"abstract":"Wireless sensor and actor networks employ actor nodes within the wireless sensor network (WSN) which can process the sensed data and perform certain actions. For best response Inter-actor coordination is required. The employed actors should form and maintain a connected inter-actor network at the times. WSANs often operate in harsh environments where actors can easily fail or may get damaged. This kind of failures can partition the inter-actor network and thus eventually make the network useless. In order to handle such failures, one of the effective recovery methodologies is to autonomously reposition a subset of the sensor nodes to restore the connectivity. Generally the Contemporary recovery schemes either impose high node relocation overhead or extend some of the inter-actor data paths. Here an overview of such kinds of different fault tolerance algorithms is provided.","PeriodicalId":13713,"journal":{"name":"International Conference on Information Communication and Embedded Systems (ICICES2014)","volume":"14 1","pages":"1-4"},"PeriodicalIF":0.0,"publicationDate":"2014-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89357907","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2014-02-01. DOI: 10.1109/ICICES.2014.7033804
S. Suguna, A. Suhasini
Today, every organization generates large volumes of data in electronic format that require safe storage services. Data backup and disaster recovery / business continuity are becoming fundamental concerns in networks, since the importance and societal value of digital data are continuously increasing. Every organization requires a business continuity plan (BCP) or disaster recovery plan (DRP), together with data backup, that falls within its cost constraints while achieving the target recovery requirements in terms of recovery time objective (RTO) and recovery point objective (RPO). Organizations must identify the probable events that can cause disasters and evaluate their impact. There is an obvious need to keep data resilient against major failures; in many situations, storing backup data is also required by law. The aim of this paper is to give an overview of data backup and disaster recovery techniques in the cloud environment.
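The RTO/RPO targets mentioned in this abstract can be made concrete with a small sketch. All names and numbers below are illustrative assumptions, not figures from the paper:

```python
# Hypothetical check of a backup plan against RPO/RTO targets.
# Worst-case data loss equals the backup interval; the plan meets its
# RPO if that interval is within the RPO, and meets its RTO if a full
# restore finishes within the RTO.

def meets_objectives(backup_interval_h, restore_time_h, rpo_h, rto_h):
    return backup_interval_h <= rpo_h and restore_time_h <= rto_h

# Nightly backups (24 h interval) with a 4 h restore window:
print(meets_objectives(24, 4, rpo_h=24, rto_h=8))   # True
print(meets_objectives(24, 4, rpo_h=12, rto_h=8))   # False: too much data at risk
```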
{"title":"Overview of data backup and disaster recovery in cloud","authors":"S. Suguna, A. Suhasini","doi":"10.1109/ICICES.2014.7033804","DOIUrl":"https://doi.org/10.1109/ICICES.2014.7033804","url":null,"abstract":"Today, in every organization are generated in large volume of data in electronic format that required the safety storage services. Data backup and Disaster Recovery / Business Continuity issues are becoming fundamental in networks since the importance and societal value of digital data is continuously increasing. Every organization requires a business continuity plan (BCP) or disaster recovery plan (DRP) and data backup which falls within the cost constraints while achieving the target recovery requirements in terms of recovery time objective (RTO) and recovery point objective (RPO). The organizations must identify the probable consequences that can cause disasters and evaluate their impact. There is an obvious need of supporting data for resilience against major failures; in many situations the process of storing backup data is also enforced by the law. The aim of this paper is to overview of various techniques in data backup and disaster recovery systems in the cloud environment.","PeriodicalId":13713,"journal":{"name":"International Conference on Information Communication and Embedded Systems (ICICES2014)","volume":"46 1","pages":"1-7"},"PeriodicalIF":0.0,"publicationDate":"2014-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87334091","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2014-02-01. DOI: 10.1109/ICICES.2014.7033980
N. S. Reddy, Ganesh Chokkakula, Bhumarapu Devendra, K. Sivasankaran
Modern real-time embedded systems must support multiple concurrently running applications. Double Data Rate Synchronous DRAM (DDR SDRAM) has become the mainstream choice for memory design due to its burst access, speed, and pipelining features. Synchronous DRAM is designed to support DDR transfers. To guarantee the correctness of the different applications and ensure the system works as intended, the memory controller must be configured with a pipelined design that handles multiple operations without delay. The main function of DDR SDRAM is to double memory bandwidth by transferring data (for both read and write operations) twice per cycle, on both the falling and rising edges of the clock signal. The designed DDR controller generates the control signals as a synchronous command interface between the DRAM memory and other modules. The DDR SDRAM controller supports a data width of 64 bits, a burst length of 4, and a CAS (column address strobe) latency of 2; with this pipelined SDRAM controller design, a 28.57% improvement in memory-access performance is achieved. The architecture is designed in ModelSim ALTERA STARTER EDITION 6.5b and Cadence (RTL Compiler and Encounter).
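The "double data rate" claim above is simple arithmetic: transferring on both clock edges doubles peak bandwidth relative to single-data-rate SDRAM. A quick sketch, using an assumed 200 MHz clock (the paper does not state a clock frequency) and the 64-bit bus width it does state:

```python
# Peak bandwidth = clock rate x transfers per cycle x bus width in bytes.

def peak_bandwidth_mb_s(clock_mhz, bus_width_bits, transfers_per_cycle):
    return clock_mhz * transfers_per_cycle * (bus_width_bits // 8)

sdr = peak_bandwidth_mb_s(200, 64, 1)  # single data rate: one transfer per cycle
ddr = peak_bandwidth_mb_s(200, 64, 2)  # DDR: both rising and falling edges
print(sdr, ddr)  # 1600 3200 -> DDR doubles the peak bandwidth
```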
{"title":"ASIC implementation of high speed pipelined DDR SDRAM controller","authors":"N. S. Reddy, Ganesh Chokkakula, Bhumarapu Devendra, K. Sivasankaran","doi":"10.1109/ICICES.2014.7033980","DOIUrl":"https://doi.org/10.1109/ICICES.2014.7033980","url":null,"abstract":"Modern real-time embedded system must support multiple concurrently running applications. Double Data Rate Synchronous DRAM (DDR SDRAM) became mainstream choice in designing memories due to its burst access, speed and pipeline features. Synchronous dynamic access memory is designed to support DDR transferring. To achieve the correctness of different applications and system work as to be intended, the memory controller must be configured with pipelined design for multiple operations without delay. The main function of DDR SDRAM is to double the bandwidth of the memory by transferring data (either read operation or write operation) twice per cycle on both the falling and raising edges of the clock signal. The designed DDR Controller generates the control signals as synchronous command interface between the DRAM Memory and other modules. The DDR SDRAM controller supports data width of 64 bits and Burst Length of 4 and CAS (Column Address Strobe) latency of 2 and in this pipelined SRAM controller design, improvement of 28.57% is achieved in performance of memory accessing. 
The architecture is designed in Modelsim AlTERA STARTER EDITION 6.5b and Cadence (RTL complier and encounter).","PeriodicalId":13713,"journal":{"name":"International Conference on Information Communication and Embedded Systems (ICICES2014)","volume":"51 1","pages":"1-5"},"PeriodicalIF":0.0,"publicationDate":"2014-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86967933","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2014-02-01. DOI: 10.1109/ICICES.2014.7033883
Navnath Shete, Avinash Jadhav
Software is a set of instructions executed by a computer, designed to perform a particular task, and it is developed through the Software Development Life Cycle (SDLC). Software testing is an important phase of the SDLC. Testing verifies that the software fulfills the customer's requirements; in addition, the testing process finds and removes bugs in the software. The developing organization tries to show that the software it develops is quality software and that the processes used to develop and test it are quality processes. To test any software, testers write test cases based on the Software Requirements Specification (SRS), which contains all the functional and non-functional requirements of the software; individual component (unit) requirement specifications are written in detail. Test engineers and testers use the SRS to write test cases, which are then used to test the software thoroughly in manual testing. Even small loopholes in the software can be identified through test cases. This paper focuses on the significance of test cases and their role in testing software used in the IT industry. The authors conclude that testing would not be possible without test cases.
{"title":"An empirical study of test cases in software testing","authors":"Navnath Shete, Avinash Jadhav","doi":"10.1109/ICICES.2014.7033883","DOIUrl":"https://doi.org/10.1109/ICICES.2014.7033883","url":null,"abstract":"Software is a set of instructions executed by a computer which are designed to perform a particular task. Software Development Life Cycle is used to develop the software. Software Testing is the important phase of software development life cycle (SDLC). Software testing is a part of SDLC. Testing fulfills the customer's requirement. In addition to that testing process finds and removes the bugs of the software. Developing organization tries to show that the software they developing is quality software and process used to develop as well as to test software are quality processes. To test any software, tester writes test cases based on Software Requirement Specification (SRS). SRS contain all the functional and non-functional requirements of the software. Individual component (Unit) requirement specifications are written in detail. Test engineer and / or Tester used SRS to write test cases. Test cases are used to test the software thoroughly in manual testing. All small loop holes of the software could be identified by test cases. This paper focus on the significance of test cases and their role to test software used in IT industries. 
The researcher has concluded that without test cases testing would not be possible.","PeriodicalId":13713,"journal":{"name":"International Conference on Information Communication and Embedded Systems (ICICES2014)","volume":"7 1","pages":"1-5"},"PeriodicalIF":0.0,"publicationDate":"2014-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90761894","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2014-02-01. DOI: 10.1109/ICICES.2014.7034108
M. Arulmozhi, M. Anbuselvi
Non-binary low-density parity-check (NB-LDPC) codes, a category of LDPC codes, have better decoding performance over high-order Galois fields. This paper proposes a construction method called the hierarchically diagonal matrix (HDM). The constructed HDM is analyzed for the IEEE 802.11n specification with code length 648 and rate 1/2 over GF(4). Codes constructed from the hierarchical matrix perform well over the AWGN channel with the FFT-based sum-product iterative decoding (FFT-SPA) algorithm. The computational complexity of the HDM is analyzed: the average numbers of multiplications and additions involved in the check-node unit and variable-node unit are reduced to 62% and 48%, respectively, compared with a random matrix.
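The multiplications counted in this abstract are arithmetic in GF(4), the field over which the code is defined. As background, not as the paper's method, here is the GF(4) multiply, taking GF(4) = GF(2)[x]/(x^2 + x + 1) with elements 0, 1, 2, 3 encoding the polynomials 0, 1, x, x + 1 (addition is plain XOR):

```python
IRRED = 0b111  # the irreducible polynomial x^2 + x + 1

def gf4_mul(a, b):
    """Carry-less polynomial multiply, then reduce modulo x^2 + x + 1."""
    p = 0
    for i in range(2):            # b has at most two bits
        if (b >> i) & 1:
            p ^= a << i
    for shift in (1, 0):          # clear any degree-3 then degree-2 term
        if (p >> (2 + shift)) & 1:
            p ^= IRRED << shift
    return p

# x * (x + 1) = x^2 + x = 1 in GF(4), so 2 * 3 == 1:
print(gf4_mul(2, 3))
```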
{"title":"Improvements on construction of quasi cyclic irregular non binary LDPC codes","authors":"M. Arulmozhi, M. Anbuselvi","doi":"10.1109/ICICES.2014.7034108","DOIUrl":"https://doi.org/10.1109/ICICES.2014.7034108","url":null,"abstract":"Non Binary Low Density Parity Check (NB-LDPC) codes, a category of LDPC codes have better decoding performance in high order Galois field. A construction method called hierarchically diagonal matrix (HDM) is proposed in this paper. The constructed HDM is analyzed for IEEE 802.11 n specification of code length 648, rate 1/2 over GF (4). Codes constructed based on the hierarchical matrix perform well over the AWGN channel with FFT based sum product iterative decoding (FFT-SPA) algorithm. The computation complexity of the HDM is analyzed. The average number of multiplications and additions involved in the HDM of check node unit and variable node unit has reduced to 62% and 48% when compared with random matrix.","PeriodicalId":13713,"journal":{"name":"International Conference on Information Communication and Embedded Systems (ICICES2014)","volume":"1 1","pages":"1-5"},"PeriodicalIF":0.0,"publicationDate":"2014-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89996289","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2014-02-01. DOI: 10.1109/ICICES.2014.7033745
S. Usmin, M. A. Irudayaraja, U. Muthaiah
Cloud data centers provide a range of solutions for systems deployment and operation. They host applications for third parties, who offer services to their customers by multiplexing server resources. Virtualization creates virtual versions of resources by dividing execution units into one or more smaller units. In this paper, we present an approach that uses virtualization technology to allocate data center resources dynamically based on application demands and to support green computing by optimizing the number of servers in active use. The resource allocation problem can be modeled as the bin packing problem, where each server is a bin and each virtual machine (VM) is an item to be packed. Virtualization technology makes it easy to move a running application across physical machines without interruption. We abstract the problem as a variant of the relaxed classical online bin packing problem and develop a practical, efficient algorithm that works well in a real system while respecting service level agreements. We adjust the resources available to each VM both within and across physical servers using memory de-duplication technologies, which can be used to adjust the VM layout for load balancing and energy saving. Extensive simulation and experimental results demonstrate that our system achieves good performance compared with existing work.
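The server/VM bin-packing view described in this abstract can be sketched with the generic first-fit-decreasing heuristic below. This is a textbook heuristic chosen for illustration, not the authors' algorithm; capacities and demands are made-up integers on a 0-100 scale:

```python
# Each server (bin) has a fixed capacity; each VM (item) has a demand.
# First-fit decreasing: sort VMs by demand, place each in the first
# server with room, opening a new server only when none fits.

def first_fit_decreasing(vm_demands, capacity=100):
    """Return a list of servers, each a list of the VM demands placed on it."""
    servers = []
    for demand in sorted(vm_demands, reverse=True):
        for server in servers:
            if sum(server) + demand <= capacity:
                server.append(demand)
                break
        else:
            servers.append([demand])  # open a new physical server
    return servers

placement = first_fit_decreasing([50, 70, 30, 20, 40, 10])
print(len(placement))  # number of servers kept active
```

Minimizing the number of opened bins corresponds directly to the green-computing goal of minimizing the number of actively used servers.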
{"title":"Dynamic placement of virtualized resources for data centers in cloud","authors":"S. Usmin, M. A. Irudayaraja, U. Muthaiah","doi":"10.1109/ICICES.2014.7033745","DOIUrl":"https://doi.org/10.1109/ICICES.2014.7033745","url":null,"abstract":"Cloud Data Centers provides a range of solutions for systems deployment and operation. It is used to provide hosted applications for a third party to offer services to their customers by multiplexing server resources. Virtualization creates a virtual version of the resources by dividing the execution units into one or more execution units. In this paper, we present an approach that uses virtualization technology to allocate data center resources dynamically based on application demands and support green computing by optimizing the number of servers actively used. The resource allocation problem can be modeled as the bin packing problem where each server is a bin and each virtual machine is the item to be packed. Virtualization technology makes it easy to move running application across physical machines without any interruption. We abstract this as a variant of the relaxed classical on-line bin packing problem and develop a practical, efficient algorithm that works well in a real system according to Service Level Agreements. We adjust the resources available to each VM both within and across physical servers with memory de-duplication technologies. This can be used to adjust the VM layout for load balancing and energy saving purpose. 
Extensive simulation and experiment results demonstrate that our system achieves good performance compared to the existing work.","PeriodicalId":13713,"journal":{"name":"International Conference on Information Communication and Embedded Systems (ICICES2014)","volume":"18 1","pages":"1-7"},"PeriodicalIF":0.0,"publicationDate":"2014-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90068955","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2014-02-01. DOI: 10.1109/ICICES.2014.7033961
G. Vinodhini, R. Chandrasekaran
The rapid growth of online social media provides a medium where people contribute their opinions and emotions as text messages, including reviews and opinions on topics such as movies, books, products, and politics. Opinion mining refers to the application of natural language processing, computational linguistics, and text mining to identify or classify whether the opinion expressed in a text message is positive or negative. A back-propagation neural network is a supervised machine learning method that analyzes data and recognizes patterns for classification. This work focuses on binary classification of text sentiment into positive and negative reviews. In this study, principal component analysis (PCA) is used to extract principal components to serve as predictors, and a back-propagation neural network (BPN) is employed as the classifier. The performance of PCA + BPN versus BPN without PCA is compared using receiver operating characteristic (ROC) analysis, and the classifier is validated using 10-fold cross-validation. The results show the effectiveness of BPN with PCA as a feature-reduction method for text sentiment classification.
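The PCA feature-reduction step described in this abstract amounts to centering the feature matrix and projecting onto its top principal axes. A minimal sketch on toy random data (the neural-network classifier the paper pairs with PCA is omitted; shapes and data are illustrative):

```python
import numpy as np

def pca_reduce(X, n_components):
    """Center X and project onto the top n_components principal axes via SVD."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T   # rows of Vt are the principal axes

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))          # 20 "documents", 5 raw features
Z = pca_reduce(X, 2)                  # reduced to 2 principal components
print(Z.shape)                        # (20, 2)
```

The reduced matrix Z would then be fed to the classifier in place of the raw features, shrinking the network's input layer.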
{"title":"Sentiment classification using principal component analysis based neural network model","authors":"G. Vinodhini, R. Chandrasekaran","doi":"10.1109/ICICES.2014.7033961","DOIUrl":"https://doi.org/10.1109/ICICES.2014.7033961","url":null,"abstract":"The rapid growth of online social media acts as a medium where people contribute their opinion and emotions as text messages. The messages include reviews and opinions on certain topics such as movie, book, product, politics and so on. Opinion mining refers to the application of natural language processing, computational linguistics, and text mining to identify or classify whether the opinion expressed in text message is positive or negative. Back Propagation Neural Networks is supervised machine learning methods that analyze data and recognize the patterns that are used for classification. This work focuses on binary classification to classify the text sentiment into positive and negative reviews. In this study Principal Component Analysis (PCA) is used to extract the principal components, to be used as predictors and back propagation neural network (BPN) have been employed as a classifier. The performance of PCA+ BPN and BPN without PCA has been compared using Receiver Operating Characteristics (ROC) analysis. The classifier is validated using 10-Fold cross validation. 
The result shows the effectiveness of BPN with PCA used as a feature reduction method for text sentiment classification.","PeriodicalId":13713,"journal":{"name":"International Conference on Information Communication and Embedded Systems (ICICES2014)","volume":"19 1","pages":"1-6"},"PeriodicalIF":0.0,"publicationDate":"2014-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90304065","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2014-02-01. DOI: 10.1109/ICICES.2014.7033871
Prashant Singh, Mayank Mishra, P. N. Barwal
This paper begins by introducing the concept of the wireless LAN (WLAN). The introductory section gives brief information on WLAN components and architecture. To examine WLAN security threats, the paper looks at both active and passive attacks. It then explains the security flaws of legacy IEEE 802.11 WLAN standards, a situation that has led to further research into practical solutions for implementing a more secure WLAN. The paper also covers new standards that improve WLAN security, such as IEEE 802.1x, which comprises three separate components: the Point-to-Point Protocol (PPP), the Extensible Authentication Protocol (EAP), and 802.1x itself. It then looks at a newly proposed standard, 802.11i, for key distribution and encryption that plays a big role in improving the overall security capabilities of current and future WLANs. Finally, the paper concludes by highlighting the issues and solutions discussed.
{"title":"Analysis of security issues and their solutions in wireless LAN","authors":"Prashant Singh, Mayank Mishra, P. N. Barwal","doi":"10.1109/ICICES.2014.7033871","DOIUrl":"https://doi.org/10.1109/ICICES.2014.7033871","url":null,"abstract":"This paper begins by introducing the concept of wireless LAN (WLAN). The introductory section gives brief information on the WLAN components and its architecture. In order to examine the WLAN security threats, this paper will look at both active & passive attacks. The paper will then explain the security & flaws of legacy IEEE802.11 WLAN standards. This situation leads to further research regarding practical solutions in implementing a more secured WLAN. This paper will also cover the new standards to improve the security of WLAN such as the IEEE 802. lx standard, which comprises of three separated sections: Point-to-Point Protocol (PPP), Extensible Authentication Protocol (EAP) and 802. lx itself. Then the paper look for a newly proposed standard i.e. 802.11Î for key distribution and encryption that will play a big role in improving the overall security capabilities of current and future WLAN networks. Finally, this paper ends with the conclusion of highlighted issues and solutions.","PeriodicalId":13713,"journal":{"name":"International Conference on Information Communication and Embedded Systems (ICICES2014)","volume":"6 1","pages":"1-6"},"PeriodicalIF":0.0,"publicationDate":"2014-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76963064","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2014-02-01. DOI: 10.1109/ICICES.2014.7034030
G. Devi Prasanna, P. Abinaya, J. Poornimasre
This paper presents a non-intrusive built-in self-test (BIST) system in which the test pattern generator (TPG) and output response analyzer (ORA) are used for testing a field-programmable gate array (FPGA). The system consists of software and hardware parts with communication channels between them. Test generation and response analysis are done in the software part, while the hardware part is the circuit under test; another FPGA performs the interfacing. Compared with the embedded BIST technique, the number of configurations is greatly reduced. By using a bit-swapping linear feedback shift register (BS-LFSR) as the TPG instead of a conventional LFSR, the number of transitions is reduced effectively; the overall switching activity during test operation is therefore reduced, minimizing power.
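The BS-LFSR idea summarized above can be sketched in software: generate patterns with a conventional LFSR, then swap two neighbouring output bits whenever a third cell is 0, which tends to cut the number of 0-to-1/1-to-0 transitions a scan chain sees. The width, taps, and bit choices below are illustrative, not the paper's configuration:

```python
def lfsr_states(width=4, taps=(3, 2), seed=1):
    """All states of a Fibonacci LFSR (maximal-length for these taps)."""
    state, states = seed, []
    for _ in range(2 ** width - 1):
        states.append(state)
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1
        state = ((state << 1) | fb) & (2 ** width - 1)
    return states

def bit_swap(state, i=0, j=1, sel=2):
    """Swap bits i and j of a pattern when selector bit `sel` is 0."""
    if (state >> sel) & 1 == 0:
        bi, bj = (state >> i) & 1, (state >> j) & 1
        if bi != bj:
            state ^= (1 << i) | (1 << j)
    return state

def transitions(patterns, bit):
    """Count 0->1 / 1->0 transitions on one output bit across the sequence."""
    bits = [(p >> bit) & 1 for p in patterns]
    return sum(a != b for a, b in zip(bits, bits[1:]))

plain = lfsr_states()
swapped = [bit_swap(s) for s in plain]
# Same pattern set, but fewer transitions (less switching activity) on the
# swapped output bit:
print(len(set(plain)), transitions(plain, 0), transitions(swapped, 0))
```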
{"title":"Non-intrusive bit swapping pattern generator for BIST testing of LUTs","authors":"G. Devi Prasanna, P. Abinaya, J. Poornimasre","doi":"10.1109/ICICES.2014.7034030","DOIUrl":"https://doi.org/10.1109/ICICES.2014.7034030","url":null,"abstract":"This paper presents the non-intrusive built-in self-test system (BIST) for the test pattern generator (TPG) and output response analyzer (ORA) for testing of the field programmable gate array (FPGA). It consists of software and hardware parts with channels in between them to establish communication. The test generation and the response analysis are done in the software part whereas the hardware part is the circuit under test. Another FPGA is used to perform the interfacing operation. The configuration numbers are greatly reduced in this technique when compared with the embedded BIST technique. By incorporating bit-swapping linear feedback shift register (BS-LFSR) as the TPG instead of the conventional LFSR, transition numbers are reduced effectively. Hence the overall switching activity is reduced during the test operation, minimizing the power.","PeriodicalId":13713,"journal":{"name":"International Conference on Information Communication and Embedded Systems (ICICES2014)","volume":"21 1","pages":"1-4"},"PeriodicalIF":0.0,"publicationDate":"2014-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75031729","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}