Pub Date : 2016-12-01 DOI: 10.1109/IC3I.2016.7917953
Mandeep Kaur, M. Agnihotri
Cloud computing is a relatively new computing paradigm, and load balancing of resources across virtual machines is one of its fundamental problems. An effective job-scheduling mechanism must meet users' requirements and increase resource utilization in order to improve the overall efficiency of the cloud computing environment. Task scheduling is an optimization problem, and the Genetic Algorithm (GA) and Ant Colony Optimization (ACO) have been identified as effective solution methods for it: GA is modelled on the process of natural evolution, while ACO is inspired by the foraging behavior of ant species. This paper evaluates a hybridization of ACO and GA with a multi-objective function to improve the global optimization solution.
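The GA/ACO hybrid the abstract describes can be sketched roughly as follows. This is a hypothetical Python illustration, not the authors' implementation: a GA evolves task-to-VM assignments while an ACO-style pheromone matrix, reinforced by the best individual each generation, biases mutation. The fitness weights (makespan plus a 0.5-weighted imbalance term) and all parameters are assumptions for illustration.

```python
import random

def hybrid_ga_aco(task_costs, n_vms, pop_size=20, generations=50, seed=0):
    """Sketch of a GA/ACO hybrid mapping tasks to VMs.

    task_costs[i] is the runtime of task i on a unit-speed VM; the
    multi-objective fitness here is makespan plus load imbalance.
    """
    rng = random.Random(seed)
    n = len(task_costs)
    # Pheromone tau[i][v]: desirability of placing task i on VM v (ACO part).
    tau = [[1.0] * n_vms for _ in range(n)]

    def fitness(assign):
        loads = [0.0] * n_vms
        for i, v in enumerate(assign):
            loads[v] += task_costs[i]
        makespan = max(loads)
        imbalance = makespan - min(loads)
        return makespan + 0.5 * imbalance  # weighted multi-objective (assumed)

    pop = [[rng.randrange(n_vms) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        best = pop[0]
        # ACO update: reinforce pheromone along the best assignment.
        for i, v in enumerate(best):
            tau[i][v] += 1.0 / (1.0 + fitness(best))
        # GA part: keep elites, breed the rest with pheromone-biased mutation.
        nxt = pop[: pop_size // 4]
        while len(nxt) < pop_size:
            a, b = rng.sample(pop[: pop_size // 2], 2)
            cut = rng.randrange(1, n)
            child = a[:cut] + b[cut:]           # one-point crossover
            i = rng.randrange(n)
            child[i] = rng.choices(range(n_vms), weights=tau[i])[0]
            nxt.append(child)
        pop = nxt
    return min(pop, key=fitness)
```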
{"title":"Performance evaluation of hybrid GAACO for task scheduling in cloud computing","authors":"Mandeep Kaur, M. Agnihotri","doi":"10.1109/IC3I.2016.7917953","DOIUrl":"https://doi.org/10.1109/IC3I.2016.7917953","url":null,"abstract":"Cloud computing is really a new computing mode. Load balancing of resources across virtual machines is the fundamental problem of Cloud Computing. Effective job scheduling device must meet people 'requirements and increase the source usage, to be able to increase the entire efficiency of the cloud processing environment. In optimization issue. Genetic Algorithm and Ant Colony Optimization Algorithm have already been referred to as excellent option method. GA is created by adopting the organic progress process, while ACO is encouraged by the foraging behavior of ant species. This paper evaluated hybridization of ACO and GA adopt with multi-objective function to improve the global optimization solution.","PeriodicalId":305971,"journal":{"name":"2016 2nd International Conference on Contemporary Computing and Informatics (IC3I)","volume":"140 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131779472","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2016-12-01 DOI: 10.1109/IC3I.2016.7918049
Yang Li, Da Xu
Traditional game AI is designed for standard players at the time of construction. Once the AI is designed, the game's decision-making system is not dynamically adjusted; because its behavior is strongly regular, users easily discover the weaknesses of traditional game AI, making it difficult to maintain rational decisions. This paper presents a design scheme for game AI based on the ID3 decision-tree algorithm. By mining player-NPC interaction data for clues that are significantly correlated with a set target, it controls the decision-making system in real time to keep decisions rational. The experimental results show that the proposed scheme adapts well to the player's operation and maintains rational decision-making in an actual operating environment.
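ID3 selects, at each tree node, the attribute with the highest information gain over the interaction data. A minimal sketch of that splitting criterion (a generic illustration, not code from the paper):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    total = len(labels)
    return -sum((c / total) * math.log2(c / total)
                for c in Counter(labels).values())

def information_gain(rows, labels, attr):
    """ID3's criterion: entropy reduction from splitting on attr.

    rows is a list of dicts (one per example), labels the class labels.
    """
    total = len(rows)
    by_value = {}
    for row, lab in zip(rows, labels):
        by_value.setdefault(row[attr], []).append(lab)
    remainder = sum(len(subset) / total * entropy(subset)
                    for subset in by_value.values())
    return entropy(labels) - remainder
```

ID3 recurses: it splits on the attribute maximizing this gain, then repeats on each subset until the labels are pure.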
{"title":"A game AI based on ID3 algorithm","authors":"Yang Li, Da Xu","doi":"10.1109/IC3I.2016.7918049","DOIUrl":"https://doi.org/10.1109/IC3I.2016.7918049","url":null,"abstract":"Traditional game AI is aimed at the standard players in the beginning of the construction. After designing the AI, The whole game decision-making system will not be dynamically adjusted, Because of the strong regularity, the traditional game AI is easy to be found the weakness by users, which is not easy to maintain the rationality of the decision. This paper presents a design scheme of game AI based on decision tree algorithm ID3. It through the mining of the player with the NPC interactive data on the set target has a significant correlation of data clues, real-time control of the decision-making system, to maintain the rationality of the decision. The experimental results show that the proposed scheme can adapt to the player's operation well, and it can keep the rationality of the decision in the actual operating environment.","PeriodicalId":305971,"journal":{"name":"2016 2nd International Conference on Contemporary Computing and Informatics (IC3I)","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131789038","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2016-12-01 DOI: 10.1109/IC3I.2016.7917982
Mukesh Taneja
Wireless networks for IoT applications support different types, or classes, of end devices, and each class results in different uplink and downlink traffic behavior. It is important to identify a suitable class for each end device, and we first propose a generic framework for this purpose. We propose an element, called the Software Controller, which learns a device's profile using a variety of means such as information provided by the device itself, information provided by the associated IoT operator, and contextual information from other sources. It can also use machine learning techniques to learn how a device might behave during certain periods. Suitable resource management methods need to be associated with such classification schemes, and we propose one such method for 802.11ah-type networks. Next, we look at traffic scenarios that may not be supported well by the existing device classes in some of these networks. Some IoT devices may always communicate small amounts of data sporadically, but others may need to communicate large amounts of uplink or downlink (or bi-directional) data during certain time intervals. For example, an IoT device may need to measure (and report) certain parameters more frequently on detection of certain events, or a network server may want to set certain parameters or upgrade software at an IoT device during some time interval. It therefore becomes important to control the uplink/downlink communication opportunities and sleep intervals of IoT devices in the network effectively. We propose a new device class and a dynamic switching mechanism to handle such traffic scenarios effectively. We also include a software-defined controller that dynamically manages these communication opportunities at IoT devices and Access Points in the network.
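As a hedged illustration only — the class names and thresholds below are invented, not 802.11ah terminology or the paper's taxonomy — a traffic-profile-based classification rule of the kind such a Software Controller might apply could look like:

```python
def classify_device(avg_uplink_bps, avg_downlink_bps, duty_cycle):
    """Toy rule-based classifier mapping an observed traffic profile
    to one of three illustrative device classes (names assumed)."""
    if duty_cycle < 0.01 and avg_uplink_bps < 1_000:
        return "sporadic-sensor"    # long sleep intervals, rare small reports
    if avg_downlink_bps > avg_uplink_bps:
        return "downlink-heavy"     # e.g. parameter push / software upgrade
    return "bulk-uplink"            # event-triggered high-rate reporting
```

A learned classifier (as the abstract suggests) would replace these fixed thresholds with a model trained on device, operator, and contextual data.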
{"title":"A framework for traffic management in IoT networks","authors":"Mukesh Taneja","doi":"10.1109/IC3I.2016.7917982","DOIUrl":"https://doi.org/10.1109/IC3I.2016.7917982","url":null,"abstract":"Wireless networks for IoT applications support different types or classes of end devices. Each such class results in different uplink and downlink traffic behavior. It is important to identify suitable class for each end device. We first propose a generic framework for this purpose. We propose an element, called Software Controller, which learns device profile using variety of means such as information provided by the device itself, information provided by the associated IoT operator and contextual information using other sources. It can also use machine learning techniques to learn how a device might behave during certain period. Suitable resource management methods are to be associated with such classification schemes. We propose one such resource management method for 802.11ah type of networks. Next, we look at some traffic scenarios that may not be supported well by the existing device classes in some of these networks. Some IoT devices may always communicate low amount of data sporadically but some may need to communicate large amount of uplink or downlink (or bi-directional) data during certain time intervals. For example, an IoT device may need to measure (and report) certain parameters more frequently on detection of certain events, or a network server may want to set certain parameters or upgrade software at an IoT device during some time interval. It becomes important to control uplink / downlink communication opportunities and sleep interval at IoT devices in the network effectively. We propose a new device class and dynamic switching mechanism to handle such traffic scenarios effectively. 
We also include a software defined controller that provides for dynamic management of these communication opportunities at IoT devices and Access Points in the network.","PeriodicalId":305971,"journal":{"name":"2016 2nd International Conference on Contemporary Computing and Informatics (IC3I)","volume":"54 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133373271","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2016-12-01 DOI: 10.1109/IC3I.2016.7918046
A. K. Jaiswal, Udai Shanker
In a mobile environment, the major issue in committing transactions is disconnection, caused by handoff or device shutdown. Disconnection is a frequent problem in mobile environments and results in transaction failure. The traditional two-phase commit protocol works well in a distributed environment, but where devices are connected wirelessly the issues of mobility must also be addressed. This paper presents a comparative study of three major commit protocols for the mobile environment. It surveys different approaches proposed for mobile transactions and summarizes how conventional commit protocols have been revisited to fit the needs of the mobile environment.
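For reference, the baseline two-phase commit that the mobile variants (M2PC, UCM, TCOT) adapt can be sketched as follows. The participant interface (prepare/commit/abort) is assumed for illustration; in a mobile setting, prepare() timing out on disconnection is exactly the failure mode the mobile protocols try to tolerate.

```python
def two_phase_commit(participants):
    """Minimal sketch of the 2PC coordinator.

    Each participant exposes prepare() -> bool, commit(), abort().
    """
    # Phase 1: voting.
    votes = []
    for p in participants:
        try:
            votes.append(p.prepare())
        except Exception:   # disconnection / timeout counts as a NO vote
            votes.append(False)
    # Phase 2: decision — commit only on a unanimous YES.
    if all(votes):
        for p in participants:
            p.commit()
        return "commit"
    for p in participants:
        p.abort()
    return "abort"
```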
{"title":"Comparative study of commit protocols in mobile environment: M2PC, UCM and TCOT","authors":"A. K. Jaiswal, Udai Shanker","doi":"10.1109/IC3I.2016.7918046","DOIUrl":"https://doi.org/10.1109/IC3I.2016.7918046","url":null,"abstract":"In the mobile environment the major issues for commit the transactions are disconnection due to mobile handoff or shut down of devices. Disconnection is frequent problem in mobile environment. This results transaction failure. A traditional two-phase commit protocol is best for the distributed environment. In case of mobile environment where devices are connected wirelessly we should deal with the issues of mobility. The paper presents comparison study of three major commit protocols in the mobile environment. The paper surveys different approaches proposed for mobile transaction and summarize how the conventional commitment are revisited to fit the needs of mobile environment.","PeriodicalId":305971,"journal":{"name":"2016 2nd International Conference on Contemporary Computing and Informatics (IC3I)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131531665","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2016-12-01 DOI: 10.1109/IC3I.2016.7917996
P. Pushpa
The number of sensors deployed everywhere is growing rapidly to meet the needs of modern society. The uninterrupted, huge amount of data generated by sensors and other devices in the Internet of Things (IoT) has to be captured and turned into meaningful information. Context-aware computing is one of the key research issues in the IoT paradigm and has proved successful in interpreting individual sensor data. As the physical world is highly dynamic, the sensed context information is inherently imperfect or imprecise. The challenge is therefore to design context information modelling and reasoning techniques that extract meaningful information from raw sensor data. In this paper, we propose a rich and unique classification of context information from the perspective of IoT. Our research findings indicate the importance of the proposed context information classification and modelling process.
{"title":"Context information modelling for Internet of Things (Invited paper)","authors":"P. Pushpa","doi":"10.1109/IC3I.2016.7917996","DOIUrl":"https://doi.org/10.1109/IC3I.2016.7917996","url":null,"abstract":"The number of sensors deployed at each and every place is growing at a faster rate to meet the current needs of modern society. The uninterrupted huge amount of data generated from sensors and other devices in Internet of Things (IoT) is to be captured and inferred into meaningful information. Context-aware computing is one of the key research issues in IoT paradigm and it is evident that it is successful in understanding each sensor data. As the physical world is highly dynamic the sensed context information is inherently imperfect or imprecise. Therefore the challenge here is to design a context information modelling and reasoning technique so as to extract meaningful information from raw sensor data. In this paper, we propose a rich and unique classification of context information from the perspective of IoT. Our research findings indicate the importance of the proposed context information classification and modelling process.","PeriodicalId":305971,"journal":{"name":"2016 2nd International Conference on Contemporary Computing and Informatics (IC3I)","volume":"266 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132776361","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2016-12-01 DOI: 10.1109/IC3I.2016.7918007
P. K. A. Kumar, R. Uthirasamy, G. Saravanan, A. Ibrahim
This article proposes the application of an ANN controller to improve AGC performance in a three-area system. A conventional PI controller is used initially to minimize the Area Control Error (ACE) and stabilize frequency oscillations along with tie-line power deviations; the ANN is then applied to tune the PI gain values. A multilayer feed-forward network is used for the simulation. The AGC performance obtained with the ANN controller was compared with that of the PI controller, and the results show that the ANN controller performs better than the PI controller.
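The underlying PI control law can be sketched in discrete time. Here kp and ki stand in for the ANN-supplied gains; the sign convention and step form are assumptions for illustration, not the paper's exact formulation.

```python
def pi_step(ace, integral_state, kp, ki, dt):
    """One step of a discrete PI controller acting on the Area Control
    Error (ACE).  Returns the control action and the updated integral."""
    integral_state = integral_state + ace * dt
    u = -(kp * ace + ki * integral_state)  # control action opposing the error
    return u, integral_state
```

In the paper's scheme, the ANN would adjust kp and ki online as operating conditions change, rather than leaving them fixed.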
{"title":"AGC performance enhancement using ANN","authors":"P. K. A. Kumar, R. Uthirasamy, G. Saravanan, A. Ibrahim","doi":"10.1109/IC3I.2016.7918007","DOIUrl":"https://doi.org/10.1109/IC3I.2016.7918007","url":null,"abstract":"Application of ANN Controller towards the improvement of AGC performance issue in three area system is proposed in this article. To minimize the Area Control Error (ACE) and stabilize the frequency oscillations along with tie line power deviations with conventional PI controller is used initially. The application of ANN is to tune PI gain values. Here multilayer feed forward technique is intended to simulation purpose. The performance enhancement of AGC using of ANN controller results were compared with the PI. From the analogy point the results gained from ANN controller seems to be better than performance of PI.","PeriodicalId":305971,"journal":{"name":"2016 2nd International Conference on Contemporary Computing and Informatics (IC3I)","volume":"340 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116451623","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2016-12-01 DOI: 10.1109/IC3I.2016.7917936
Tribid Debbarma, K. Chandrasekaran
Current computing systems come with various power-management and profiling tools to run the system in its optimal state. Though hardware has advanced considerably in terms of energy efficiency and computing power, software energy and resource efficiency still lags behind. In many cases, poorly designed software cannot utilize the hardware efficiently and results in a system with high energy consumption. To address this issue, software needs careful profiling during development to make it efficient and less resource-hungry. In this paper we compare some of the profiling tools available as Free and Open Source Software (FOSS) for the Linux environment. These tools use different strategies and have different accuracies in characterizing the behavior and resource requirements of a system and its software programs. Their performance and resource overheads, such as memory, CPU, and disk consumption, were compared, and the results are summarized to make tool selection easier for researchers and developers alike. Another issue with these tools is that their reporting formats are not always easy to understand, which makes them less user-friendly.
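In the same spirit as the FOSS profilers compared here, a minimal measurement of a call's wall-clock time and peak Python heap allocation can be done with the standard library alone (this is a generic sketch, not one of the tools from the paper):

```python
import time
import tracemalloc

def profile_call(fn, *args):
    """Measure wall-clock time and peak Python heap allocation of fn(*args)."""
    tracemalloc.start()
    t0 = time.perf_counter()
    result = fn(*args)
    elapsed = time.perf_counter() - t0
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return result, elapsed, peak

# Example: profile sorting a reversed list of 100,000 integers.
result, secs, peak_bytes = profile_call(sorted, list(range(100_000, 0, -1)))
```

System-level profilers add exactly the kind of overhead this function itself introduces, which is why the paper measures each tool's own memory, CPU, and disk footprint.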
{"title":"Comparison of FOSS based profiling tools in Linux operating system environment","authors":"Tribid Debbarma, K. Chandrasekaran","doi":"10.1109/IC3I.2016.7917936","DOIUrl":"https://doi.org/10.1109/IC3I.2016.7917936","url":null,"abstract":"Current computing systems comes with different power management and profiling tools to run the system in its optimal state. Though the hardware systems have advanced a lot in-terms of energy efficiency and computing power, the software's energy and resources efficiency is still lacking behind. In many cases due to poor/bad designing of software it cannot utilize the hardware efficiently and end up a system with high energy consumption. To address this issues software' s need a careful profiling in its development process to make the software efficient and less resource hungry. In this paper we compared some of the profiling tools available as Free and Open Source Software (FOSS) which are used under Linux environment. These software tools uses different strategies and have different accuracies in finding a system and software programs behavior and its resource requirements. Their performance and resource overheads such as memory, CPU, disk consumption were compared and results are summarized for making the tools selection easier to researchers and developers alike. 
Another important issue with these tools are that, their reporting formats are not always easy to understand and it makes them less user friendly.","PeriodicalId":305971,"journal":{"name":"2016 2nd International Conference on Contemporary Computing and Informatics (IC3I)","volume":"59 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114871226","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2016-12-01 DOI: 10.1109/IC3I.2016.7917938
Odunayo Fadahunsi, Mithileysh Sathiyanarayanan
Business processes are modelled, visualized and analyzed using process maps, and implemented using productivity suites such as MS Visio, the Business Process Execution Language (BPEL), and others. However, the problem with such current process models is that they are static. Business process tasks modelled with a non-interactive visualisation tool do not allow the analysts or users developing the process maps to engage actively with them to explore what-if scenarios. Constant changes to existing business processes are enforced by several external and internal factors, with the objective of improving process efficiency and effectiveness. So, we developed a Petri net process model to visualise business process tasks and allow interaction with the model, enabling the user or analyst to move from an ‘as-is’ business process model to a ‘to-be’ business process model that optimises both the time it takes to complete the process (process duration) and the cost of executing the business process tasks (process cost). Petri nets, while a form of task visualisation, also enable mathematical modelling of the process. Methodology: We applied mathematical combination theory together with the Petri net formalism to generate alternative business process options with different process durations and cost factors, enabling the process modeller or analyst to make a better decision. We visualized and evaluated the different dynamic ‘transition’ sequences generated on the basis of best duration and cost, modelled the ‘design options’ in the Platform Independent Petri Net editor (PIPE2) tool, and further analysed the Petri net properties of the modelled processes. Finally, we discussed how to visualize and analyze ‘design options’ and provided certain views on business processes.
Results: We demonstrated through our research that Petri nets can be used to visualise different process tasks and that users can interact with the process model. This was implemented using a case study of a supermarket chain's business process. The research identified six of the eleven generated business process ‘design options’ as optimal, each with a process duration of 28 hours and a process cost of 35.57 dollars. However, further qualitative analysis of these optimised ‘design options’ left only one of the eleven as the most optimal process design, indicating the selected revised business process. The visualization helps in analyzing dynamic transitions for exploring and understanding complex behaviour in business processes. The results suggest that a mathematical approach combined with the analytical properties of Petri nets can generate unbiased alternative business process designs that the business process re-engineer or analyst can evaluate further in order to revise an existing business process.
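The combinatorial step — enumerating alternative ‘design options’ and ranking them by duration and cost — can be sketched as below. The parallel-vs-sequential model and the coordination-overhead cost are assumptions for illustration; a real Petri net model would also prune options that violate the net's firing rules.

```python
from itertools import combinations

def design_options(tasks):
    """Enumerate options where a chosen subset of independent tasks runs
    in parallel (stage duration = max of the subset) and the rest run
    sequentially.  tasks maps name -> (hours, cost)."""
    names = list(tasks)
    options = []
    for r in range(len(names) + 1):
        for par in combinations(names, r):
            seq = [t for t in names if t not in par]
            duration = (max((tasks[t][0] for t in par), default=0)
                        + sum(tasks[t][0] for t in seq))
            # Base cost is fixed; add an assumed coordination overhead
            # per extra parallel task.
            cost = (sum(tasks[t][1] for t in names)
                    + 2.0 * max(len(par) - 1, 0))
            options.append((duration, cost, set(par)))
    # Rank by duration, then cost, as the analyst would.
    return sorted(options, key=lambda o: (o[0], o[1]))
```

The analyst then evaluates the top-ranked options qualitatively, as the case study does when narrowing eleven options down to one.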
{"title":"Visualizing and analyzing dynamic business process using Petri nets","authors":"Odunayo Fadahunsi, Mithileysh Sathiyanarayanan","doi":"10.1109/IC3I.2016.7917938","DOIUrl":"https://doi.org/10.1109/IC3I.2016.7917938","url":null,"abstract":"Business processes are modelled, visualized and analyzed using process maps, and implemented using productivity suites such as MS Visio, Business Process Extraction Language (BPEL), and some other productivity suites. However, the problem with such current process models is that it is static. Business process tasks modelled using non-interactive visualisation tool does not allow active engagement of the analyst or users developing the process maps with the process maps to determine what if scenarios. The constant changes to existing business processes are enforced due to several external and internal factors with an objective of improving process efficiency and effectiveness. So, we developed a Petri net process model to visualise business process tasks and allow interaction with the process model to allow the user or analyst to compute from an ‘as-is’ business process model to a ‘to be’ business process model that would optimise both the time it takes to complete the process (process duration) and the cost of executing the business process tasks (process costs). Petri nets whilst a form of visualisation of tasks also enables mathematical modelling of the process model. Methodology: We applied mathematical combination theory, combined with Petri nets formalism to create design options in generating alternative business process options with different process duration and cost factors to enable the process modeller or analyst make a better decision. 
We visualized and evaluated different dynamic ‘transitions’ sequence generated based on best duration and cost, modelled the ‘design options’ in the Platform Independent Petri Net editor (PIPE2) tool and performed further analysis of the Petri net properties of the modelled processes. Finally, we discussed how to visualize and analyze ‘design options’ and provided certain views on business processes. Results: We demonstrated through our research that Petri nets could be used in visualising different process tasks as and users can interact with the process model. This was implemented using a case study of a supermarket chain business process. This research helped in identifying six out of the eleven business process ‘design options’ generated are best optimised each with process duration of 28 hours and process cost of 35.57 dollars. However, further qualitative analysis of these optimised ‘design options’ effectively resulted in only one out of the eleven as the most optimal process design indicating the selected revised business process. The visualization is helpful in analyzing dynamic transitions for exploring and understanding complex behaviour in the business processes. 
The results suggest that the combination of mathematical approach in conjunction with the analytical properties of Petri Nets could be used to generate unbiased alternative business process designs that can be further evaluated by the business process re-engineer or analyst in order to revise an existing business process.","PeriodicalId":305971,"journal":{"name":"2016 2nd International Conference on Contemporary Computing and Informatics (IC3I)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125123515","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2016-12-01 DOI: 10.1109/IC3I.2016.7917965
Yojna Arora, Dinesh Goyal
In the current scenario, data is considered the biggest asset: whoever has the most relevant data is considered rich in the information industry. But collecting data is not enough; it must be analyzed. This huge amount of data, termed Big Data, cannot be analyzed with traditional tools and techniques; instead, it requires more advanced techniques that make data retrieval, management, and storage much faster. In this paper an introduction to big data is given, along with a detailed comparative study of various Big Data techniques that have already been implemented. Finally, various outstanding issues are listed.
{"title":"Big data: A review of analytics methods & techniques","authors":"Yojna Arora, Dinesh Goyal","doi":"10.1109/IC3I.2016.7917965","DOIUrl":"https://doi.org/10.1109/IC3I.2016.7917965","url":null,"abstract":"In the current scenario, data is considered to be the biggest assets. One who has maximum relevant data is considered to be rich in the information industry. But only the collection of data is not enough, it needs to be analyzed. This huge amount of data which is termed ass Big Data cannot be analyzed by traditional tools and techniques, rather it requires more advanced Techniques which can make data retrieval, management and storage much faster are required. In this paper an introduction to big data is explained along with a detailed comparative study of various Big Data techniques which have already been implemented. At the end various issues which still exist are enlisted.","PeriodicalId":305971,"journal":{"name":"2016 2nd International Conference on Contemporary Computing and Informatics (IC3I)","volume":"351 7-8","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120931378","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2016-12-01 DOI: 10.1109/IC3I.2016.7918006
P. K. A. Kumar, S. Vivekanandan, C. K. Kumar, V. Chinnaiyan
Stability is a significant issue in power system operation. This article proposes the design of a Neural Network tuned Fuzzy Logic Power System Stabilizer (NNTFLPSS) for a single-machine infinite-bus (SMIB) system to damp low-frequency oscillations and thereby improve small-signal stability. The speed deviation of the synchronous generator rotor and its rate of change, obtained via the trained neural network, are fed back to the fuzzy logic power system stabilizer (FLPSS) to recover the power system from small-signal stability problems by improving the damping of oscillations. Rotor speed deviations and rotor angle deviations were compared for a conventional PSS (CPSS), a fuzzy logic based power system stabilizer (FLPSS), and the NNTFLPSS. The MATLAB simulation results indicate the improved performance of the NNTFLPSS over the CPSS and FLPSS.
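A toy version of the fuzzy inference at the core of an FLPSS — triangular memberships over speed deviation and its rate of change, with the output computed by a weighted average of rule consequents — can be sketched as follows. The label partitions and consequent centres are illustrative choices, not the paper's design.

```python
def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def flpss_output(dw, ddw):
    """Tiny fuzzy-PSS sketch: inputs are speed deviation dw and its
    rate of change ddw (both normalized to [-1, 1]); three fuzzy
    labels each; crisp output by centre-average defuzzification."""
    labels = {"neg": (-1.0, -0.5, 0.0),
              "zero": (-0.5, 0.0, 0.5),
              "pos": (0.0, 0.5, 1.0)}
    centres = {"neg": -1.0, "zero": 0.0, "pos": 1.0}
    num = den = 0.0
    for l1, p1 in labels.items():
        for l2, p2 in labels.items():
            w = min(tri(dw, *p1), tri(ddw, *p2))     # rule firing strength
            out = 0.5 * (centres[l1] + centres[l2])  # consequent centre
            num += w * out
            den += w
    return num / den if den else 0.0
```

In the paper's scheme, the neural network supplies the stabilizer's inputs; a production rule base would also be tuned rather than fixed like this one.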
{"title":"Neural network tuned fuzzy logic power system stabilizer design for SMIB","authors":"P. K. A. Kumar, S. Vivekanandan, C. K. Kumar, V. Chinnaiyan","doi":"10.1109/IC3I.2016.7918006","DOIUrl":"https://doi.org/10.1109/IC3I.2016.7918006","url":null,"abstract":"Steadiness of power system is a significant issue in power system operation. In this article design of Neural Network tuned Fuzzy logic power system stabilizer (NNTFLPSS) for single machine infinite bus (SMIB) system is proposed to settle down low frequency swinging that improves small signal stability in power system. The speed deviance and variation in speed deviance of the rotor of synchronous generator from the trained neural network were considered as the feedback to the fuzzy logic power system stabilizer (FLPSS) to recover the power system from small signal stability problem by refining damping oscillations. The comparative reading was noted for rotor speed deviances and rotor angle deviances using conventional PSS (CPSS), Fuzzy logic based power system stabilizer (FLPSS) and NNTFLPSS. The MATLAB simulation results obtained indicates the improved performance of NNTFLPSS over the CPSS and FLPSS.","PeriodicalId":305971,"journal":{"name":"2016 2nd International Conference on Contemporary Computing and Informatics (IC3I)","volume":"21 11","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120935860","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}