Sanjeeve Sharma, R. Tiwari, A. Shukla, Vikas Singh
Gait is a biometric identification technology that can identify people at a distance, without user cooperation. This paper proposes a simple and efficient automatic gait recognition method that takes the frontal-view silhouette of a walking person and applies principal component analysis (PCA). For each frame, background subtraction is applied to extract the moving silhouette. Features are then extracted from these silhouette images using PCA, which reduces the dimensionality of the feature vectors. The reduced feature vectors retain the most relevant information about the walking person and are able to distinguish one person from another. Our results show that using frontal-view images for recognition gives good performance: the recognition rate in our experiments is 97.50%.
{"title":"Frontal view gait based recognition using PCA","authors":"Sanjeeve Sharma, R. Tiwari, A. Shukla, Vikas Singh","doi":"10.1145/2007052.2007077","DOIUrl":"https://doi.org/10.1145/2007052.2007077","url":null,"abstract":"Gait is a biometric identification technology, which can identify people from distance, without user cooperation. This paper proposes a simple and efficient automatic gait recognition method taking frontal view silhouette of walking person using principal component analysis. Here for each image, background subtraction method is applied to extract the moving silhouette. These silhouette images are used to extract features using principal component analysis algorithm. Here principal component analysis method is applied to reduce the dimensionality of feature vectors. These reduce features vector represent the most relevant information of walking person, which are able to distinguish one people from others. Our result shows that taking frontal view image for recognition, gives good results. In our work the recognition rate is 97.50%.","PeriodicalId":348804,"journal":{"name":"International Conference on Advances in Computing and Artificial Intelligence","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-07-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134305347","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
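The dimensionality-reduction step the abstract describes can be illustrated with a minimal PCA sketch. This is not the authors' code: the silhouettes are assumed to arrive as flattened binary images, and the array sizes are purely illustrative.

```python
import numpy as np

def pca_features(silhouettes, k):
    """Project flattened silhouette images onto the top-k principal components.

    silhouettes: (n_samples, n_pixels) array of flattened binary silhouettes.
    Returns (reduced features, components, mean image).
    """
    mean = silhouettes.mean(axis=0)
    centered = silhouettes - mean
    # SVD of the centered data matrix gives the principal axes in Vt.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    components = vt[:k]                 # top-k eigen-directions
    features = centered @ components.T  # reduced feature vectors
    return features, components, mean

# Toy example: 6 "silhouettes" of 4x4 pixels reduced to 2 dimensions.
rng = np.random.default_rng(0)
X = (rng.random((6, 16)) > 0.5).astype(float)
feats, comps, mu = pca_features(X, k=2)
```

Recognition would then compare the reduced feature vectors (e.g. by nearest-neighbour distance) rather than the raw pixel images.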
Price forecasting has become one of the main focuses of electric power market research, since price is the key index for evaluating market competition efficiency and reflects the operating conditions underlying electricity market decision making. The work presented in this paper uses local linear wavelet neural networks (LLWNN) and ARMA models to forecast the market price for a given period with a certain confidence level. The results of the new method show a significant improvement in the price forecasting process.
{"title":"Forecasting the hourly Ontario energy price by local linear wavelet neural network and ARMA models","authors":"P. K. Pany, S. Ghoshal","doi":"10.1145/2007052.2007091","DOIUrl":"https://doi.org/10.1145/2007052.2007091","url":null,"abstract":"Price forecasting has become one of the main focus of electric power market research efforts since price is the key index to evaluate the market competition efficiency and reflects the operation condition of electricity market decision making. The work presented in this paper makes use of local linear wavelet neural networks (LLWNN) & ARMA to find the market price for a given period, with a certain confidence level. The results of the new method show significant improvement in the price forecasting process.","PeriodicalId":348804,"journal":{"name":"International Conference on Advances in Computing and Artificial Intelligence","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-07-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116717965","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
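The ARMA side of this hybrid can be sketched with a plain autoregressive fit; this is a minimal AR(p) model estimated by ordinary least squares, not the authors' full LLWNN+ARMA method, and the price series is invented.

```python
import numpy as np

def fit_ar(series, p):
    """Fit an AR(p) model y_t = c + a1*y_{t-1} + ... + ap*y_{t-p} by least squares."""
    y = np.asarray(series, dtype=float)
    rows = [y[t - p:t][::-1] for t in range(p, len(y))]  # lag vectors, newest first
    X = np.column_stack([np.ones(len(rows)), np.array(rows)])
    coef, *_ = np.linalg.lstsq(X, y[p:], rcond=None)
    return coef  # [c, a1, ..., ap]

def forecast_ar(series, coef, steps):
    """Roll the fitted AR model forward `steps` hours."""
    p = len(coef) - 1
    hist = list(series)
    out = []
    for _ in range(steps):
        lags = hist[-p:][::-1]
        nxt = coef[0] + float(np.dot(coef[1:], lags))
        hist.append(nxt)
        out.append(nxt)
    return out

# Toy hourly "price" series with a persistent component.
prices = [30, 32, 31, 33, 34, 33, 35, 36, 35, 37]
coef = fit_ar(prices, p=2)
pred = forecast_ar(prices, coef, steps=3)
```

A confidence level, as mentioned in the abstract, would come from the residual variance of the fit; that step is omitted here.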
Vehicular travel is increasing throughout the world, particularly in large urban areas. With the growing use of automobiles in cities, traffic congestion has become a serious problem. As transportation systems continue to grow, intelligent traffic controls must be employed to address road traffic congestion. Fuzzy controllers have been used successfully in many consumer products and industrial applications over the past two decades; for traffic control, however, they have not been widely applied. This paper proposes a fuzzy traffic-light controller for use at a complex traffic junction. Real-time parameters such as traffic density and queue length are obtained by image processing techniques, so the on and off timings of the green, red and orange lights are adjusted according to actual road conditions. Fuzzy logic is well suited to traffic signal control because it allows qualitative modeling of complex systems. This paper describes a fuzzy logic signal controller for a four-way intersection suitable for mixed traffic, including a high proportion of motorcycles, and discusses the traffic control strategy that dictates the design criteria for the controller. The components of the fuzzy logic controller are the fuzzifier, the fuzzy rule base formulated by human experts, the fuzzy inference engine, and the defuzzifier.
{"title":"Development of traffic light control system based on fuzzy logic","authors":"Sandeep Mehan, Vandana Sharma","doi":"10.1145/2007052.2007085","DOIUrl":"https://doi.org/10.1145/2007052.2007085","url":null,"abstract":"Vehicular travel is increasing throughout the world, particularly in large urban areas. With the increasing use of automobiles in cities traffic congestion occurred. So as the transportation system will continue to grow, intelligent traffic controls have to be employed to face road traffic congestion problems. Fuzzy controllers have been widely used in many consumer products and industrial applications with success over the past two decades. For traffic control, however, fuzzy controllers have not been widely applied. This paper proposes a fuzzy traffic lights controller to be used at a complex traffic junction. The real time parameters such as traffic density and queue length are obtained by image processing techniques. So the on and off timings for the green, red and orange lights are adjusted as per the actual road conditions. Fuzzy logic has been widely used to develop a traffic signal controller because it allows qualitative modeling of complex systems. This paper describes a fuzzy logic signal controller for a four--way intersection suitable for mixed traffic, including a high proportion of motorcycles. This paper discusses the traffic control strategy, which dictates the design criteria for the fuzzy logic controller. The components of fuzzy logic controller-the fuzzifier, the fuzzy rule base formulated by human experts, the fuzzy inference engine and the defuzzifier.","PeriodicalId":348804,"journal":{"name":"International Conference on Advances in Computing and Artificial Intelligence","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-07-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124904775","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
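The fuzzifier / rule base / inference / defuzzifier pipeline named in the abstract can be sketched as a tiny Mamdani-style controller. The membership functions, rules, and output values below are invented for illustration and assume the vision stage normalises density and queue length to a 0..1 scale; they are not the paper's actual rule base.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def green_extension(density, queue):
    """Fuzzify the two inputs, fire expert rules, and defuzzify to seconds."""
    # Fuzzification: "low" and "high" sets for each normalised input.
    low_d, high_d = tri(density, -0.5, 0.0, 0.6), tri(density, 0.4, 1.0, 1.5)
    low_q, high_q = tri(queue, -0.5, 0.0, 0.6), tri(queue, 0.4, 1.0, 1.5)
    # Rule base (hypothetical expert rules), min for AND, max for OR.
    short_s = min(low_d, low_q)                            # light traffic -> short green
    long_s = min(high_d, high_q)                           # heavy traffic -> long green
    med_s = max(min(low_d, high_q), min(high_d, low_q))    # mixed conditions -> medium
    # Defuzzification: weighted average of singleton outputs (in seconds).
    weights, outputs = [short_s, med_s, long_s], [5.0, 15.0, 30.0]
    total = sum(weights)
    return sum(w * o for w, o in zip(weights, outputs)) / total if total else 10.0

heavy = green_extension(0.9, 0.9)  # dense traffic, long queue
light = green_extension(0.1, 0.1)  # sparse traffic, short queue
```

The design choice worth noting is that the rules encode qualitative expert knowledge ("if traffic is heavy, extend the green phase") rather than an explicit traffic model.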
Grid computing infrastructure has emerged as a next generation of high-performance computing by making vast heterogeneous resources available. In the dynamic environment of the grid, scheduling decisions remain a challenging area, and a scheduler should consider the reliability of resources, in addition to other objectives, when generating a schedule. In this paper, we use an evolutionary approach to obtain multiple trade-off solutions that minimize makespan and cost while maximizing reliability under deadline and budget constraints. We apply the NSGA-II and ε-MOEA algorithms to explore solutions on the Pareto-optimal front. Simulation analysis shows that the multiple solutions obtained with the ε-MOEA approach give better convergence and uniform diversity in a small computation time.
{"title":"A robust multi-objective optimization to workflow scheduling for dynamic grid","authors":"Darshan Singh, R. Garg","doi":"10.1145/2007052.2007090","DOIUrl":"https://doi.org/10.1145/2007052.2007090","url":null,"abstract":"Grid computing infrastructure emerged as a next generation of high performance computing by providing availability of vast heterogenous resources. In the dynamic envirnment of grid, a schedling decision is still challenging area and it should consider reliability of reources while generating schedule in addition to other objectives. In this paper, we used evolutionary approach to obtain multiple trade-off soltions which minimizes makespan and cost along with the maximization of reliability under the deadline and budget constraints. We apply NSGA-II and ε - MOEA algorithms in order to explore solutions in the Pareto optimal front. Simulation analysis shows that multiple solutions obtained with ε -MOEA approach gives better convergence, uniform diversity in small computation time.","PeriodicalId":348804,"journal":{"name":"International Conference on Advances in Computing and Artificial Intelligence","volume":"57 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-07-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130458843","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
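The core notion behind both NSGA-II and ε-MOEA is Pareto dominance. A minimal sketch of extracting the non-dominated front follows; the schedule tuples are invented, and reliability is folded in as a minimisation objective (1 − reliability) for uniformity.

```python
def dominates(a, b):
    """a dominates b if a is no worse in every objective and strictly better in one.
    All objectives here are to be minimised (e.g. makespan, cost, 1-reliability)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset (the first front of NSGA-II's sorting)."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# Toy candidate schedules as (makespan, cost, 1 - reliability) triples.
schedules = [(10, 5, 0.1), (8, 7, 0.2), (12, 4, 0.1), (11, 6, 0.3)]
front = pareto_front(schedules)
```

NSGA-II repeats this sorting over successive fronts and adds crowding-distance selection; ε-MOEA instead maintains an archive on an ε-grid, which is what yields the uniform diversity the abstract reports.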
Face localization is always a challenging step for a good recognition scheme, and it is particularly difficult for 3D faces. In this paper we provide a face localization scheme based on the Average Threshold Value (AVT) and a Grayscale Pixel Value Band (GPVB), with the help of histogram equalization. We evaluate this approach on a synthetic 3D morphable face model of 20 individuals and obtain 89.51% accuracy across the 20 faces under the same lighting conditions, which is encouraging for further work along the same lines.
{"title":"A hybrid method for 3D face localization","authors":"Qaim Mehdi Rizvi, Qamar Abbas, Hasan Ahmad","doi":"10.1145/2007052.2007065","DOIUrl":"https://doi.org/10.1145/2007052.2007065","url":null,"abstract":"Face localization is always a challenging job for a good recognition scheme. If process has to be done for 3D faces, it is quit difficult to localize faces. In this paper we provide a face localization scheme which is based on Average Threshold Value (AVT) and Grayscale Pixel Value Band (GPVB) with the help of Histogram Equalization. We experiment this approach on a synthetic 3D morphable face model of 20 individuals and get 89.51% accuracy among 20 faces with same lighting condition, which is appreciative for further work on same stream.","PeriodicalId":348804,"journal":{"name":"International Conference on Advances in Computing and Artificial Intelligence","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-07-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129515777","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
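The histogram equalization preprocessing step mentioned above is standard and can be sketched directly; this is the classic CDF remapping for an 8-bit grayscale image, not the paper's AVT/GPVB stages, and the sample image is invented.

```python
import numpy as np

def histogram_equalize(img):
    """Spread an 8-bit grayscale image's intensities via its cumulative histogram."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]  # first occupied bin
    # Classic CDF remapping onto the full 0..255 range (assumes >= 2 distinct levels).
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255).astype(np.uint8)
    return lut[img]

img = np.array([[50, 50, 100],
                [100, 150, 200]], dtype=np.uint8)
eq = histogram_equalize(img)
```

Equalization makes the subsequent threshold-band computation less sensitive to global lighting, which is presumably why it is used before the AVT/GPVB steps.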
Radio Frequency Identification (RFID) is an emerging technology that enhances barcode technology. It enables the electronic and wireless labelling and identification of objects, humans and animals, and can be used for many powerful applications, including automatic item tracking and smart home appliances. The key idea is to attach an RFID tag to each item; the tag can then be read by RFID readers via radio communication. RFID technology uses wireless communication in radio frequency bands to transmit data from tags to readers. A reader scans the tag for data and sends the information to a database, which stores the data contained on the tag. RFID networks are being deployed at a rapid pace and have already entered the public space on a massive scale: public transport cards, the biometric passport, office ID tokens, customer loyalty cards, etc. Although RFID technology offers interesting services to customers and retailers, it can also endanger the privacy of end users: the lack of deployed protection mechanisms could result in leakage of personal data. RFID is not a new technology and has been in the public domain for at least 10 years.
{"title":"RFID enabled cards skimming: enhanced technology","authors":"J. Kaur, N. Kehar","doi":"10.1145/2007052.2007083","DOIUrl":"https://doi.org/10.1145/2007052.2007083","url":null,"abstract":"Radio Frequency Identification (RFID) is an emerging technology to enhance the Bar- code technology. This technology enables the electronic and wireless labelling and identification of objects, humans and animals. The technology can be used for many powerful applications including automatic item tracking, smart home appliances etc. The key idea is to attach each and every item with an RFID tag which can be read by RFID readers via radio communication. RFID technology uses wireless communication in radio frequency bands to transmit data from tags to readers. A reader scans the tag for data and sends the information to a database, which stores the data contained on the tag. RFID networks are getting deployed at a rapid pace and have already entered the public space on a massive scale: public transport cards, the biometric passport, office ID tokens, customer loyalty cards, etc. Although RFID technology offers interesting services to customers and retailers, it could also endanger the privacy of the end-users. The lack of protection mechanisms being deployed could potentially result in a privacy leakage of personal data. It is not a new technology and has been in the public domain for at least 10 years.","PeriodicalId":348804,"journal":{"name":"International Conference on Advances in Computing and Artificial Intelligence","volume":"196 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-07-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122583917","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In application domains such as multimedia, communication and network processing, where processing huge amounts of data at the desired performance and power consumption is a mandatory prerequisite for successful operation, system architects must find a design that fulfills the user's requirements on the optimization parameters while minimizing cost as much as possible. This paper proposes a novel FPGA-based comparative analysis of the cost-performance ratio (CPR) of an Application Specific Processor (ASP) versus the Microblaze soft-core RISC processor. It also presents an exclusive performance assessment between the FPGA-based ASP and the Microblaze soft-core RISC processor embedded in the FPGA, and highlights the design process of a performance-optimized, power-stringent ASP: a computation-intensive application is converted into an actual Register Transfer Level (RTL) hardware design, and the same application is also implemented on the Microblaze soft-core RISC processor. The experimental results of the FPGA-based speedup analysis indicate that, for N sets of processed data, the application specific processor performs faster than the RISC processor, and that the speedup of the ASP increases proportionally with the number of processed data sets. Moreover, the CPR comparison indicates that as the number of units of production increases, the CPR of the ASP becomes larger compared to that of the RISC processor.
{"title":"Application specific processor vs. microblaze soft core RISC processor: FPGA based performance and CPR analysis","authors":"Pallabi Sarkar, R. Sedaghat, A. Sengupta","doi":"10.1145/2007052.2007069","DOIUrl":"https://doi.org/10.1145/2007052.2007069","url":null,"abstract":"In all application domains of multimedia, communication and network processing where huge amount of data processing at desired performance and power consumption are a mandatory prerequisite for successful functioning; the system architects have to find a design that fulfills the user requirements of the optimization parameters, while minimizing the cost as much as possible. In this paper a novel FPGA based comparative analysis to compare the cost-performance ratio (CPR) of an Application Specific Processor (ASP) with Microblaze soft core RISC processor is proposed. The paper also proposes an exclusive performance assessment between the FPGA based ASP and Microblaze soft core RISC processor embedded in FPGA. The paper also highlights the design processes of a performance optimized power stringent ASP by converting a computation intensive application into an actual Register Transfer Level (RTL) hardware design as well as the Microblaze soft core RISC processor for a same given application. The experimental results of the FPGA based speedup analysis indicated that for 'N' sets of processed data, the application specific processor performs faster than RISC. Further, it was concluded that speedup of ASP increases proportionally with increase in number of processed data. Moreover, the results of CPR comparison indicated that as the number of units of production increases, the value of CPR for the ASP becomes larger compared to CPR of RISC processor.","PeriodicalId":348804,"journal":{"name":"International Conference on Advances in Computing and Artificial Intelligence","volume":"110 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-07-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127213617","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
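The volume-dependence of CPR can be illustrated with a simple amortisation calculation. This sketch reads CPR as performance per amortised unit cost (the paper's exact definition may differ), and every figure below is hypothetical: an ASP with high one-time (NRE) cost but high throughput versus a soft-core RISC with negligible NRE but lower throughput.

```python
def cpr(performance, nre_cost, unit_cost, units):
    """Performance per amortised unit cost; NRE is spread across all units produced.
    Definition and all inputs are illustrative assumptions, not the paper's data."""
    per_unit_total = unit_cost + nre_cost / units
    return performance / per_unit_total

# Hypothetical figures (arbitrary performance/cost units).
asp_low_vol  = cpr(performance=100.0, nre_cost=50_000.0, unit_cost=10.0, units=100)
asp_high_vol = cpr(performance=100.0, nre_cost=50_000.0, unit_cost=10.0, units=100_000)
risc         = cpr(performance=40.0,  nre_cost=1_000.0,  unit_cost=12.0, units=100_000)
```

Under these assumptions the ASP's CPR overtakes the RISC's only at volume, once the RTL design effort is amortised, which matches the trend the abstract reports.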
Semantic relatedness measures the degree to which words or concepts are related, considering the possible semantic relationships among them. It is of great interest in areas such as Natural Language Processing, Information Retrieval and the Semantic Web. This paper proposes policy-based ranking of documents, where the policy directs which path of the ontology tree is to be considered for semantic computation. An algorithm ranks documents according to their relevance to a query: the query is expanded based on the ontology concepts and the given policies, and as the policy changes, the concept set used by the ranking algorithm varies. Depending on the requirements, the same ontology can therefore rank documents differently under different policies, which in turn depend on the structure of the ontology. The ranking algorithm calculates the similarity degree of each document with respect to the user query.
{"title":"Ontology based semantic similarly ranking of documents","authors":"S. Thenmalar, T. Geetha, S. Devi","doi":"10.1145/2007052.2007074","DOIUrl":"https://doi.org/10.1145/2007052.2007074","url":null,"abstract":"Semantic relatedness measures the degree to which some words or concepts are related, considering possible semantic relationships among them. Semantic relatedness is of great interest in different areas, such as Natural Language Processing, Information Retrieval, or the Semantic Web. This paper proposes policy based ranking of documents, where policy directs which path of the ontology tree is to be considered for semantic computation. An algorithm for ranking documents according to their relevance to a query is used. The given query is expanded based on the ontology concepts and the given polices. As the policy changes, the concept set used for the ranking algorithm varies. Depending on the requirements, using the same ontology, documents can be ranked differently based on the policies which in turn depend on the structure of the ontology. A ranking algorithm calculates the similarity degree of each document with respect to the user query.","PeriodicalId":348804,"journal":{"name":"International Conference on Advances in Computing and Artificial Intelligence","volume":"163 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-07-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114544070","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
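The ranking step can be sketched as cosine similarity between a policy-expanded query concept set and bag-of-words document vectors. This is a generic illustration, not the paper's algorithm; the documents and the "vehicle-branch" expansion are invented.

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two bag-of-words concept vectors (Counters)."""
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def rank(query_concepts, documents):
    """Rank documents by similarity to the (policy-expanded) query concept set."""
    q = Counter(query_concepts)
    scored = [(cosine(q, Counter(doc.split())), doc) for doc in documents]
    return sorted(scored, key=lambda s: s[0], reverse=True)

docs = ["jaguar car engine speed", "jaguar cat habitat jungle", "engine fuel car"]
# A policy selecting the vehicle branch of the ontology expands the query
# with vehicle-related concepts rather than animal-related ones.
ranking = rank(["jaguar", "car", "engine"], docs)
```

Changing the policy changes the expansion set, which is exactly how the same ontology yields different rankings in the paper's scheme.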
Researchers have investigated Artificial Intelligence (AI) based classifiers for intrusion detection to cope with the weaknesses of knowledge-based systems. AI-based classifiers can be utilized in supervised and unsupervised modes. Here, we perform a blind set of experiments to compare and evaluate the performance of supervised classifiers, grouped by category, using a variety of metrics. The performance of the classifiers is analyzed using a subset of the benchmark KDD Cup 1999 dataset as the training and test data. A significant aspect of this work is the use of a variety of performance metrics to evaluate the supervised classifiers, because some classifiers are designed to optimize a specific metric. This empirical analysis not only compares the classifiers to identify the best classifier overall and the best classifiers for individual attack classes, but also provides guidelines for researchers applying AI-based classifiers to intrusion detection and directions for further research in this field.
{"title":"AI based supervised classifiers: an analysis for intrusion detection","authors":"G. Kumar, Krishan Kumar","doi":"10.1145/2007052.2007087","DOIUrl":"https://doi.org/10.1145/2007052.2007087","url":null,"abstract":"Researchers investigated Artificial Intelligence (AI) based classifiers for intrusion detection to cope the weaknesses of knowledge based systems. AI based classifiers can be utilized in supervised and unsupervised mode.\u0000 Here, we perform a blind set of experiments to compare & evaluate performance of the supervised classifiers by their categories using variety of metrics. The performance of the classifiers is analyzed using subset of benchmarked KDD cup 1999 dataset as training & Test dataset. This work has significant aspect of using variety of performance metrics to evaluate the supervised classifiers because some classifiers are designed to optimize some specific metric. This empirical analysis is not only a comparison of various classifiers to identify best classifier on the whole and best classifiers for individual attack classes, but also reveals guidelines for researchers to apply AI based classifiers to field of intrusion detection and directions for further research in this field.","PeriodicalId":348804,"journal":{"name":"International Conference on Advances in Computing and Artificial Intelligence","volume":"41 3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-07-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125883525","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
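Per-attack-class evaluation of the kind described above rests on metrics such as precision, recall and F1 computed one class at a time. A minimal sketch follows; the label lists are invented and merely mimic KDD-style classes (normal, dos, probe).

```python
def per_class_metrics(y_true, y_pred, label):
    """Precision, recall and F1 for one attack class, from raw label lists."""
    tp = sum(t == label and p == label for t, p in zip(y_true, y_pred))
    fp = sum(t != label and p == label for t, p in zip(y_true, y_pred))
    fn = sum(t == label and p != label for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Toy ground-truth and predictions over KDD-style classes.
truth = ["normal", "dos", "dos", "probe", "normal", "dos"]
preds = ["normal", "dos", "normal", "probe", "normal", "dos"]
p, r, f1 = per_class_metrics(truth, preds, "dos")
```

Reporting several such metrics per class, rather than overall accuracy alone, is what lets the study identify the best classifier per attack class as well as overall.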
Many tools have appeared on the market to support system building, so choosing the best tool requires knowing its most important features. Today UML is considered a de facto standard in software development and is used in many domains, ranging from scientific to business modeling. This paper describes the most widely used UML tools, defines the main features of each, and compares them according to those features. It also defines the criteria for choosing the best tool for building a system. Some tools aim at supporting specific life-cycle phases, but they often do not meet basic requirements arising in heterogeneous environments, UML education, early life-cycle phases, or agile processes: hassle-free tool deployment, support for fast model sketching, and flexible graphic export features. A comparative study, including a selection of these tools, shows the advantages and disadvantages of each tool.
{"title":"A comparative study of UML tools","authors":"Heena, Ranjna Garg","doi":"10.1145/2007052.2007053","DOIUrl":"https://doi.org/10.1145/2007052.2007053","url":null,"abstract":"In these days many tools appear in market to support building system. Thus choosing the best tools need to know the most important features of it. Today UML consider as a de facto standard in software development and is used in many domains ranging from scientific modeling to business modeling. Here this paper describes the most used UML tools. It defines the main features of each one and then compares between them according to their features. It also defines the criteria for choosing the best tool in building system. Some tools aim at supporting specific life-cycle phases, but they often do not meet basic requirements arising in heterogeneous environments, UML education, early life-cycle phases, or agile processes: hassle-free tool deployment, support for fast model sketching, and flexible graphic export features.. A comparative study, including a selection of these tools, show advantages and disadvantages for each tool.","PeriodicalId":348804,"journal":{"name":"International Conference on Advances in Computing and Artificial Intelligence","volume":"38 6","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-07-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114048057","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}