A meta search engine retrieves information using multiple independent search engines. The World Wide Web has grown into a distributed information space of more than 800 million workstations and several billion pages; although a huge amount of information is available on the web, users have great difficulty finding the information they need. The focus of this paper is the design and implementation of a meta search engine. The work develops a prioritizor-based, profile-assisted meta search engine that merges results extracted from two or more search engines. The results and analysis show that the approach improves search quality for a specific specialty.
"Meta Search Engine Based on Prioritizor," B. Chaurasia, S. K. Gupta, Rishi Soni. 2011 International Conference on Computational Intelligence and Communication Networks. DOI: 10.1109/CICN.2011.109.
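The abstract does not spell out the merging step; as an illustration only, the following sketch shows one plausible way a prioritizor-based merger could weight and combine ranked lists from two engines. The engine names, weights, and the Borda-style scoring rule are assumptions, not the paper's algorithm:

```python
def merge_results(result_lists, engine_weights):
    """Merge ranked result lists from several engines into one ranking.

    result_lists: {engine: [url, ...]} in rank order (rank 1 first).
    engine_weights: {engine: priority weight}, assumed to come from the
    user's profile / prioritizor -- an assumption, not the paper's rule.
    """
    scores = {}
    for engine, urls in result_lists.items():
        w = engine_weights.get(engine, 1.0)
        for rank, url in enumerate(urls, start=1):
            # Borda-style score: higher-ranked hits earn more, scaled
            # by the engine's priority weight.
            scores[url] = scores.get(url, 0.0) + w / rank
    return sorted(scores, key=scores.get, reverse=True)

merged = merge_results(
    {"engineA": ["u1", "u2", "u3"], "engineB": ["u2", "u1"]},
    {"engineA": 1.0, "engineB": 2.0},
)
```

Here "u2" wins because the higher-priority engineB ranks it first, even though engineA prefers "u1".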
B. Nagaria, Mohammad Farukh Hashmi, Vijay Patidar, N. Jain
In this paper we present a comparative study of the Fast Discrete Cosine Transform (FDCT). The proposed algorithm evaluates the performance of quantization-based fast DCT image compression with variable block sizes and different numbers of iterations. The aim is to improve image compression at lower iteration counts and higher pixel values. The numerical analysis of these algorithms is carried out by measuring Peak Signal-to-Noise Ratio (PSNR), Compression Ratio (CR), and CPU processing time. We examine how the compression ratio varies with the number of iterations: higher compression ratios are obtained at lower iterations and higher pixel values, at some cost to image quality. Image quality degrades at higher iterations, but the compression ratio is better than that of the other algorithms. Different numbers of iterations, quantization matrices, and block sizes are chosen using the FDCT to calculate MSE, PSNR, and compression ratio, seeking the highest image quality and compression ratio under the same algorithm. The proposed algorithm significantly raises the PSNR and minimizes the MSE at lower iterations; the compression ratio, however, increases at higher iterations, where image quality is not maintained. We also measured CPU processing time to gauge the complexity of the algorithm. We tested the algorithm on two test images: a fruit image with a 512x512 pixel frame and the Lena image with a 256x256 pixel frame. We conclude that, at the same compression ratio, the difference between the original and decompressed images decreases as image resolution increases.
"An Optimized Fast Discrete Cosine Transform Approach with Various Iterations and Optimum Numerical Factors for Image Quality Evaluation," B. Nagaria, Mohammad Farukh Hashmi, Vijay Patidar, N. Jain. 2011 International Conference on Computational Intelligence and Communication Networks. DOI: 10.1109/CICN.2011.31.
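The evaluation metrics named in the abstract (MSE, PSNR, CR) have standard definitions, which can be sketched as follows; the toy pixel values are illustrative only:

```python
import math

def mse(original, decompressed):
    """Mean squared error between two equal-size images (flat pixel lists)."""
    return sum((a - b) ** 2 for a, b in zip(original, decompressed)) / len(original)

def psnr(original, decompressed, max_pixel=255.0):
    """Peak signal-to-noise ratio in dB, for 8-bit images by default."""
    err = mse(original, decompressed)
    if err == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_pixel ** 2 / err)

def compression_ratio(original_bytes, compressed_bytes):
    """CR = uncompressed size / compressed size."""
    return original_bytes / compressed_bytes

# Toy 2x2 image whose reconstruction is off by one level per pixel (MSE = 1).
quality_db = psnr([10, 20, 30, 40], [11, 19, 31, 39])
```

With MSE = 1 and 8-bit pixels, PSNR is 20·log10(255) ≈ 48.13 dB, which is why small per-pixel errors still score well above the ~30 dB usually considered acceptable.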
One of the most important applications of adaptive filtering is interference or noise cancellation. The objective of adaptive interference cancellation is to obtain an estimate of the interfering signal and subtract it from the corrupted signal, thereby obtaining a noise-free signal. The tracking performances of the LMS and NLMS algorithms are compared when the input of the adaptive filter is non-stationary. For this purpose, the filter uses an adaptive algorithm to change the values of the filter coefficients, so that it acquires a better approximation of the signal after each iteration. The LMS (Least Mean Square) algorithm and its variant, the NLMS (Normalized LMS), are two of the most widely used adaptive algorithms. This paper presents a comparative analysis of the LMS and the NLMS for interference cancellation from speech signals. For each algorithm, the effects of two parameters, filter length and step size, have been analyzed. Finally, the performances of the two algorithms in different cases have been compared.
"Simulation and Performance Analysis of LMS and NLMS Adaptive Filters in Non-stationary Noisy Environment," K. Borisagar, B. Sedani, G. R. Kulkarni. 2011 International Conference on Computational Intelligence and Communication Networks. DOI: 10.1109/CICN.2011.148.
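The two update rules being compared are standard and can be sketched in a few lines. The 2-tap system-identification example below is an assumed toy scenario, not the paper's speech data; the normalization by input power is what makes NLMS less sensitive to input-level changes in a non-stationary signal:

```python
import random

def lms_step(w, x, d, mu):
    """One LMS update: w <- w + mu * e * x, with error e = d - w.x."""
    e = d - sum(wi * xi for wi, xi in zip(w, x))
    return [wi + mu * e * xi for wi, xi in zip(w, x)], e

def nlms_step(w, x, d, mu, eps=1e-8):
    """One NLMS update: the step is normalized by the input power ||x||^2."""
    e = d - sum(wi * xi for wi, xi in zip(w, x))
    power = sum(xi * xi for xi in x) + eps  # guard against divide-by-zero
    return [wi + (mu / power) * e * xi for wi, xi in zip(w, x)], e

# Identify an assumed 2-tap system h (noise-free, for illustration only).
random.seed(0)
h = [0.5, -0.3]
w = [0.0, 0.0]
for _ in range(200):
    x = [random.uniform(-1, 1), random.uniform(-1, 1)]
    d = h[0] * x[0] + h[1] * x[1]
    w, e = nlms_step(w, x, d, mu=1.0)
```

After a few hundred iterations the weight vector `w` converges to the unknown system `h`; the paper's analysis concerns how filter length and step size trade off convergence speed against steady-state error.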
The explosive growth of multimedia data and real-time applications has put unexpected load on networks and increased congestion, which occurs when packets flood intermediate nodes and aggregate demand exceeds the available capacity of the resources. In mobile ad hoc networks (MANETs), congestion leads to packet loss, transmission delay, and bandwidth degradation, and wastes time and energy on congestion recovery and network maintenance. Most existing routing algorithms are not designed to adapt congestion control to bursty traffic. In this paper, a load-balanced congestion-adaptive (LBACA) routing algorithm is proposed in which the traffic density of neighboring nodes is used to determine the congestion status of each route, and traffic is distributed across the routes according to traffic density. The proposed algorithm has been simulated with the QualNet 4.5 simulation tool.
"A Load-Balancing Approach for Congestion Adaptivity in MANET," L. Shrivastava, G. Tomar, S. Bhadoria. 2011 International Conference on Computational Intelligence and Communication Networks. DOI: 10.1109/CICN.2011.7.
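The abstract says traffic is distributed to routes according to traffic density; one plausible reading, splitting load in inverse proportion to each route's density, can be sketched as follows (the rule and the numbers are assumptions, not LBACA's exact formula):

```python
def distribute_traffic(route_densities, total_load):
    """Split total_load across routes in inverse proportion to their
    traffic density, so less congested routes carry more -- one plausible
    reading of LBACA's load-balancing rule, not its exact formula."""
    inv = {route: 1.0 / density for route, density in route_densities.items()}
    norm = sum(inv.values())
    return {route: total_load * v / norm for route, v in inv.items()}

# Route B is twice as congested as route A, so it gets half A's share.
shares = distribute_traffic({"routeA": 2.0, "routeB": 4.0}, 90.0)
```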
Wireless Sensor Networks (WSNs) have become one of the emerging trends in modern communication systems. Routing plays a vital role in the design of a WSN, as normal IP-based routing will not suffice. Design issues for a routing protocol involve various key parameters such as energy awareness, security, and QoS requirements. Energy awareness is one of the vital parameters, as the batteries used in sensor nodes cannot be recharged often. Many energy-aware protocols have been proposed in the literature. In this paper, we propose a new Energy Efficient Shortest Path (EESP) algorithm for WSNs, which maintains a uniform load distribution among the paths so as to improve network performance compared with the traditional shortest-path routing strategy.
"Energy Efficient Shortest Path Routing Protocol for Wireless Sensor Networks," K. S. Shivaprakasha, M. Kulkarni. 2011 International Conference on Computational Intelligence and Communication Networks. DOI: 10.1109/CICN.2011.70.
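The abstract does not give EESP's exact cost metric; as a hedged sketch of the general idea, a shortest-path search whose link cost penalizes relaying through low-residual-energy nodes looks like this (the cost form and `alpha` are assumptions):

```python
import heapq

def eesp_route(graph, energy, src, dst, alpha=0.5):
    """Dijkstra over a combined cost: link weight plus a penalty for
    relaying through low-residual-energy nodes.

    graph:  {node: {neighbor: link_cost}}
    energy: {node: residual energy in [0, 1]}
    The cost form and alpha are assumptions; the paper's metric may differ.
    """
    dist, prev = {src: 0.0}, {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph[u].items():
            cost = d + w + alpha * (1.0 - energy[v])  # penalize drained relays
            if cost < dist.get(v, float("inf")):
                dist[v], prev[v] = cost, u
                heapq.heappush(pq, (cost, v))
    path, node = [], dst
    while node != src:
        path.append(node)
        node = prev[node]
    return [src] + path[::-1]

graph = {"s": {"a": 1.0, "b": 1.0}, "a": {"t": 1.0}, "b": {"t": 1.0}, "t": {}}
energy = {"s": 1.0, "a": 0.1, "b": 0.9, "t": 1.0}
route = eesp_route(graph, energy, "s", "t")  # avoids nearly drained node "a"
```

Both candidate paths have the same hop cost; the energy penalty steers traffic away from the nearly drained relay, which is the load-distribution effect the abstract describes.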
Interest-based recommendation (IBR) is a kind of knowledge-based automated recommendation in which agents exchange (meta-)information about their underlying goals using argumentation. This helps improve the quantitative and qualitative utility of a recommendation. IBR combines a hybrid recommender system with automated argumentation between agents. IBR also improves recommendation-repair activity by discovering interesting alternatives based on the user's underlying mental attitude. This paper analyzes the role of interaction between agents' goals in improving recommendations. We give an experimental analysis showing that as knowledge transfer increases, the benefits of interest-based recommendation also increase compared with recommendation techniques without argumentation.
"Improving Recommendation by Exchanging Meta-Information," Punam Bedi, P. Vashisth. 2011 International Conference on Computational Intelligence and Communication Networks. DOI: 10.1109/CICN.2011.94.
With sophisticated vulnerability assessment tools publicly available on the Internet, information security breaches are on the rise every day. Existing techniques such as misuse detection identify packets that match a known pattern or signature. However, these methods fail to detect unknown anomalies. Hence, anomaly detection methods are used to identify traffic patterns that deviate from the modeled normal traffic behavior. An identified anomaly could be either an attack or normal traffic. The focus of this paper is to monitor the resources of a remote server and to detect malicious traffic. This leads to two contributions: first, the design and implementation of the Remote server monitoring (REONIT) tool, and second, the confirmation of attacks by a neural ensemble. Local and remote server resources are monitored through REONIT. REONIT has been implemented using existing ideas and comprises the following components: an authentication portlet to monitor the distributed resources; a web portlet, which processes requests and generates dynamic content; the RRD tool for data storage and visualization; XML for data representation in the form of graphs; and a Message Alert, which warns the victim server if any eccentric traffic pattern occurs. The REONIT tool was deployed in the SSE Test bed and the resources were monitored, with the results displayed as graphs. From the results, it is confirmed that the anomalous behavior and high resource utilization observed were due to attacks and not to legitimate traffic.
"Anomaly Detection Using REONIT and Attack Confirmation by Neural Ensemble," P. A. Kumar, S. Selvakumar. 2011 International Conference on Computational Intelligence and Communication Networks. DOI: 10.1109/CICN.2011.39.
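A generic instance of the anomaly-detection idea described, flagging traffic samples that deviate strongly from a trailing baseline of normal behavior, can be sketched as follows. This is a textbook threshold rule, not REONIT's actual detector or the neural ensemble:

```python
def detect_anomalies(samples, window=20, k=3.0):
    """Flag samples deviating more than k standard deviations from the
    mean of a trailing window of traffic -- a generic anomaly rule in the
    spirit of the paper, not REONIT's detector or the neural ensemble."""
    flags = []
    for i, x in enumerate(samples):
        hist = samples[i - window:i] if i >= window else []
        if len(hist) < window:
            flags.append(False)  # not enough baseline yet
            continue
        mean = sum(hist) / window
        std = (sum((h - mean) ** 2 for h in hist) / window) ** 0.5
        flags.append(std > 0 and abs(x - mean) > k * std)
    return flags

# Steady traffic around 10 requests/s, then a sudden spike to 95.
traffic = [10, 11, 9, 10, 10, 11, 9, 10, 11, 10,
           9, 10, 11, 10, 9, 11, 10, 10, 9, 10, 95]
flags = detect_anomalies(traffic)
```

As the paper notes, a rule like this only says the pattern is eccentric; deciding whether the flagged spike is an attack or legitimate traffic is the job of the confirmation stage.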
Recommender systems are intelligent applications that employ Information Filtering (IF) techniques to assist users by giving personalized product recommendations. IF techniques generally perform a progressive elimination of irrelevant content based on the information stored in a user profile: recommendation algorithms acquire information about user preferences, either explicitly (e.g., letting users express opinions about items) or implicitly (e.g., observing behavioral features), and then use these data to generate a list of recommended items. Although all filtering methods have their own strengths and weaknesses, preference learning is one of the core issues in the design of any recommender system, because these systems aim to guide users in a personalized way through an overwhelming set of possible options. The Aspect Oriented Recommender System (AORS) is a proposed multi-agent system (MAS) that builds a learning aspect using the concepts of Aspect Oriented Programming (AOP). With a conventional agent-oriented approach, implementing preference learning in a recommender system creates the problems of code scattering and code tangling. This paper presents a learning aspect that separates the learning crosscutting concern, which in turn improves the system's reusability and maintainability and removes the scattering and tangling problems in the recommender system. A prototype of AORS has been designed and developed for book recommendations.
"Preference Learning in Aspect-Oriented Recommender System," Punam Bedi, Sumit Agarwal. 2011 International Conference on Computational Intelligence and Communication Networks. DOI: 10.1109/CICN.2011.132.
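AspectJ-style languages are the natural setting for AOP; purely as an analogy, a Python decorator can play the role of an aspect that weaves preference-learning "advice" around a recommendation method without touching its code. The names and logic below are illustrative, not AORS's implementation:

```python
import functools

def learning_aspect(update_profile):
    """Decorator that weaves preference-learning 'advice' around any
    recommendation method without editing its code -- a rough Python
    analogue of the crosscutting-concern separation AORS describes."""
    def decorator(recommend):
        @functools.wraps(recommend)
        def wrapper(user, items):
            result = recommend(user, items)  # core recommendation concern
            update_profile(user, result)     # crosscutting learning concern
            return result
        return wrapper
    return decorator

profile_log = []  # stand-in for a learned user profile

@learning_aspect(lambda user, recs: profile_log.append((user, recs)))
def recommend_books(user, items):
    return sorted(items)[:2]  # stand-in recommendation logic

top = recommend_books("alice", ["c", "a", "b"])
```

The point of the separation is that `recommend_books` contains no learning code at all, so the learning concern is neither scattered across recommenders nor tangled into them.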
Extracting useful information from the web is one of the most significant concerns in realizing the semantic web. This may be achieved in several ways, among which web usage mining, web scraping, and semantic annotation play important roles. Web mining finds relevant results on the web and is used to extract meaningful information from the discovered patterns kept on servers. Web usage mining is a type of web mining that mines information about the access routes and behavior of users visiting web sites. Web scraping, another technique, is a process of extracting useful information from HTML pages, which may be implemented using a scripting language known as Prolog Server Pages (PSP), based on Prolog. Third, semantic annotation is a technique that makes it possible to add semantics and a formal structure to unstructured textual documents, an important aspect of semantic information extraction, which may be performed with a tool known as KIM (Knowledge Information Management). In this paper, we revisit, explore, and discuss these information extraction techniques, web usage mining, web scraping, and semantic annotation, for better and more efficient information extraction on the web, illustrated with examples.
"Information Extraction Using Web Usage Mining, Web Scrapping and Semantic Annotation," S. K. Malik, S. Rizvi. 2011 International Conference on Computational Intelligence and Communication Networks. DOI: 10.1109/CICN.2011.97.
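As a small illustration of the web-scraping step, extracting structured data (here, hyperlink targets) from an HTML page, using Python's standard library rather than the Prolog Server Pages the paper describes:

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Minimal scraper: collect hyperlink targets from an HTML page using
    only the standard library (the paper uses Prolog Server Pages; this
    Python version just illustrates the scraping step)."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

page = '<html><body><a href="/a.html">A</a> <a href="/b.html">B</a></body></html>'
parser = LinkExtractor()
parser.feed(page)
```

After `feed`, `parser.links` holds the two `href` values; a real scraper would add fetching, error handling, and site-specific extraction rules on top of this skeleton.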
K. Prajapat, A. Katariya, Ashok Kumar V, S. Shukla
In recent years power demand has been increasing steadily, and it can be met by conventional or non-conventional energy power plants. Renewable energy sources such as photovoltaic (PV) panels are therefore used today in many applications. With the rapid growth of photovoltaic installations and the increasing number of grid-connected power systems, it has become imperative to develop efficient grid-interfacing instrumentation for photovoltaic systems that ensures maximum power transfer. The losses in the power converter play an important role in the overall efficiency of a PV system. Grid-connected systems use a photovoltaic array to generate electricity, which is then fed to the main grid via a grid-interactive inverter. When the solar array generates more power than the building is using, the surplus is exported to the grid; when it generates less, the difference is imported from the grid. The system considered here includes photovoltaic solar panels, one inverter, one charge controller, and a battery bank. The results show that the PV system is suitable for supplying electricity to cover the load requirement without drawing energy from the grid. The overall efficiency of the system depends on the sunlight-to-DC and DC-to-AC conversion efficiencies. The first varies by up to 3% over a year; the second shows much greater variability. The output power of a photovoltaic (PV) module varies with module temperature, solar insolation, load changes, and other factors, so the output power of the single-phase grid-connected PV system is controlled according to the output power of the PV arrays. The experimental results of the MATLAB simulation show that the proposed method performs well.
"Simulation and Testing of Photovoltaic with Grid Connected System," K. Prajapat, A. Katariya, Ashok Kumar V, S. Shukla. 2011 International Conference on Computational Intelligence and Communication Networks. DOI: 10.1109/CICN.2011.150.
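The import/export rule the abstract describes for a grid-interactive inverter reduces to a sign convention on net power; a trivial sketch (the wattage figures are made up for illustration):

```python
def grid_exchange(pv_power_w, load_power_w):
    """Net power at the grid tie: positive means surplus exported to the
    grid, negative means shortfall imported from it, as the abstract
    describes for a grid-interactive inverter."""
    return pv_power_w - load_power_w

surplus = grid_exchange(4000, 1500)    # midday: 4 kW array, 1.5 kW load
shortfall = grid_exchange(500, 1500)   # evening: array output has fallen
```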