Pub Date: 2014-02-01 | DOI: 10.1109/ICICES.2014.7033763
S. Kajapriya, K. N. Vimal Shankar
Clustering is one of the most important techniques in machine learning and data mining tasks. Similar documents are grouped by applying clustering techniques, with a similarity measure used to determine associations between items. Hierarchical clustering methods produce tree-structured results, while partition-based clustering models produce results in a grid format. Text documents are unstructured data with high-dimensional attributes, and document clustering groups unlabeled text documents into meaningful clusters. Traditional clustering methods require the cluster count (K) before the document grouping process, and clustering accuracy decreases drastically when an unsuitable cluster count is chosen. Document word features are automatically partitioned into two groups: discriminative words and non-discriminative words. Only discriminative words are useful for grouping documents; the contribution of non-discriminative words confuses the clustering process and leads to poor cluster solutions. A variational inference algorithm is used to infer the document collection structure and the partition of document words at the same time. The Dirichlet Process Mixture (DPM) model is used to partition documents; it exploits both the data likelihood and the clustering property of the Dirichlet Process (DP). The Dirichlet Process Mixture Model for Feature Partition (DPMFP) is used to discover the latent cluster structure based on the DPM model, and DPMFP clustering is performed without requiring the number of clusters as input. The discriminative word identification process is enhanced with a labeled document analysis mechanism. Concept relationships are analyzed with ontology support, and semantic weight analysis is used for the document similarity measure. This method improves scalability by using labels and concept relations in the dimensionality reduction process.
Title: Document grouping with concept based discriminative analysis and feature partition. In: International Conference on Information Communication and Embedded Systems (ICICES2014), pp. 1-4.
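The key property claimed above, clustering without fixing the number of clusters in advance, can be sketched with scikit-learn's `BayesianGaussianMixture` under a Dirichlet-process prior, which prunes unused mixture components automatically. This is a minimal stand-in, not the authors' DPMFP model; the toy corpus and the reduction to two SVD components are invented for illustration.

```python
# Dirichlet-process mixture over TF-IDF document vectors: set only an UPPER
# BOUND on the number of components and let the DP prior decide how many
# are actually used. Illustrative stand-in for DPMFP, not the paper's model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.mixture import BayesianGaussianMixture

docs = [
    "stock market trading shares rise",
    "market shares fall in heavy trading",
    "investors trade stock shares on the market",
    "football match ends in a draw",
    "team wins the football championship",
    "the match was won by the home football team",
]

X = TfidfVectorizer().fit_transform(docs)                            # sparse TF-IDF
X = TruncatedSVD(n_components=2, random_state=0).fit_transform(X)    # densify/reduce

dpm = BayesianGaussianMixture(
    n_components=4,                                     # upper bound only
    weight_concentration_prior_type="dirichlet_process",
    random_state=0,
).fit(X)
labels = dpm.predict(X)
print(labels)   # documents on the same topic tend to share a label
```

The DP prior concentrates weight on as many components as the data supports, which is why no K needs to be supplied.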
Pub Date: 2014-02-01 | DOI: 10.1109/ICICES.2014.7034064
S. Govarthini, M. Vadivel
Digital images are used in emerging applications where their authenticity is of great importance. Verifying authenticity is problematic due to the widespread availability of digital image editing software, so there is a great need for reliable techniques for verifying the integrity of digital images. In this paper, we propose a novel blind forensic technique to attest image authenticity. The paper presents an efficient blind digital forensics method for the medical imaging field, with the objective of detecting whether an image has been modified by some processing. It compares two image features: the histogram statistics of reorganized block-based discrete cosine transform coefficients, originally proposed for steganalysis purposes, and the histogram statistics of reorganized block-based Tchebichef moments. Both features serve as input to a set of support vector machine classifiers built to discriminate tampered images from original ones and to identify the nature of the global modification.
Title: Integrity verification of medical images using blind forensic method. In: ICICES2014, pp. 1-6.
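The block-DCT histogram feature described above can be sketched as follows: split the image into 8x8 blocks, take the 2-D DCT of each block, and histogram the coefficients. A tampered region shifts those statistics. This is only the feature-extraction idea; the paper's exact coefficient reorganisation and the Tchebichef-moment feature are not reproduced, and the images are synthetic.

```python
# Block-based DCT coefficient histograms: a crude splice (constant patch)
# changes the histogram relative to the original image.
import numpy as np
from scipy.fft import dct

def block_dct_histogram(img, block=8, bins=32):
    """Histogram of 2-D DCT coefficients over all 8x8 blocks."""
    h, w = img.shape
    coeffs = []
    for i in range(0, h - h % block, block):
        for j in range(0, w - w % block, block):
            b = img[i:i + block, j:j + block].astype(float)
            c = dct(dct(b, axis=0, norm="ortho"), axis=1, norm="ortho")
            coeffs.append(c.ravel())
    coeffs = np.concatenate(coeffs)
    hist, _ = np.histogram(coeffs, bins=bins, range=(-50, 50), density=True)
    return hist

rng = np.random.default_rng(0)
original = rng.integers(0, 256, size=(64, 64))
tampered = original.copy()
tampered[16:32, 16:32] = 255                 # crude splice
f1 = block_dct_histogram(original)
f2 = block_dct_histogram(tampered)
print(np.abs(f1 - f2).sum() > 0)             # tampering shifts the statistics
```

In the paper's pipeline, such feature vectors (together with the moment-based ones) would be the inputs to the SVM classifiers.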
Pub Date: 2014-02-01 | DOI: 10.1109/ICICES.2014.7034111
S. Sudheep, B. Rebekka
Scheduling allots the frequency and time resources of the LTE network to users based on certain algorithms. The aim of efficient fair scheduling is to allocate radio resources such that all users are served almost equally over time without reducing average user throughput. In this paper, a fair scheduling algorithm named Blind Equal Throughput is modified and tested under different user conditions, and a new scheduling scheme named Proportional Equal Throughput (PET) is developed that offers better fairness among users, without reducing average user throughput, compared with other scheduling algorithms.
Title: Proportional equal throughput scheduler — A very fair scheduling approach in LTE downlink. In: ICICES2014, pp. 1-6.
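The Blind Equal Throughput (BET) baseline mentioned above can be simulated in a few lines: in each transmission interval the resource goes to the user with the lowest accumulated throughput, and Jain's index measures the resulting fairness. The channel model and per-user rates below are invented; this sketches BET only, not the paper's PET scheme.

```python
# Toy BET scheduler: serve the user with the lowest throughput so far,
# then score fairness with Jain's index (1.0 = perfectly equal).
import random

def jains_index(throughputs):
    s = sum(throughputs)
    return s * s / (len(throughputs) * sum(t * t for t in throughputs))

random.seed(0)
n_users, n_tti = 4, 1000
served = [0.0] * n_users
peak = [1.0, 2.0, 4.0, 8.0]        # hypothetical per-user channel quality

for _ in range(n_tti):
    u = min(range(n_users), key=lambda i: served[i])   # BET rule
    served[u] += peak[u] * random.uniform(0.5, 1.0)    # achieved rate this TTI

print(round(jains_index(served), 3))   # close to 1.0: near-equal throughput
```

Because BET always picks the worst-off user, it equalizes throughput even under very unequal channel quality, at the cost of spending many slots on the weakest users; schemes like PET aim to keep fairness while recovering some of that lost aggregate throughput.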
Pub Date: 2014-02-01 | DOI: 10.1109/ICICES.2014.7033996
Mr P Chandrasekar, Ms T Sangeetha
Contemporary embedded systems are typically based on microcontrollers, i.e., CPUs with integrated memory and peripheral interfaces, although ordinary microprocessors with external chips for memory and peripheral interface circuits are still common, especially in more complex systems. Radio frequency identification (RFID) technology is useful not only for streamlining inventory and supply chains; it can also transform the in-store shopping experience. ZigBee is based on an IEEE 802.15 standard. ZigBee devices often transmit data over longer distances by passing it through intermediate devices, creating a mesh network, i.e., a network with no centralized control or high-power transmitter/receiver able to reach all of the networked devices. This paper presents a centralized and automated billing system using RFID and ZigBee communication. Each product in a shopping mall or supermarket is provided with an RFID tag to identify its type. Each shopping cart is equipped with a Product Identification Device (PID) containing a microcontroller, an LCD, an RFID reader, an EEPROM, and a ZigBee module. Product information is read through the RFID reader on the shopping cart, stored in the attached EEPROM, and sent to the central billing system through the ZigBee module. The central billing system receives the cart information and EEPROM data, accesses the product database, and calculates the total purchase amount for that particular cart. The main aim of this paper is to provide automatic billing to avoid queues in malls and supermarkets.
Title: Smart shopping cart with automatic billing system through RFID and ZigBee. In: ICICES2014, pp. 1-4.
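The central billing step described above reduces to a simple lookup-and-sum: the cart's EEPROM log is a list of scanned RFID tag IDs, and the billing system resolves each tag against the product database. The tag IDs, product names, and prices below are made up for illustration.

```python
# Central billing: resolve each RFID tag from the cart's EEPROM log
# against the product database and total the purchase.
PRODUCT_DB = {                      # hypothetical product database
    "TAG-0001": ("Milk 1L", 42.0),
    "TAG-0002": ("Bread", 30.0),
    "TAG-0003": ("Eggs x12", 60.0),
}

def bill_cart(eeprom_log):
    """Return (line items, total) for one cart's scanned-tag log."""
    items, total = [], 0.0
    for tag in eeprom_log:
        name, price = PRODUCT_DB[tag]
        items.append((name, price))
        total += price
    return items, total

items, total = bill_cart(["TAG-0001", "TAG-0002", "TAG-0002"])
print(total)   # 102.0
```

In the real system this function would run on the central billing server, with the EEPROM log arriving over the ZigBee mesh rather than as an in-memory list.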
Pub Date: 2014-02-01 | DOI: 10.1109/ICICES.2014.7033851
G. Nalini Priya, K. Murugan, A. Sharmila
Recommender systems are evolving from novelties used by a small number of online sites into serious business tools that are reshaping the world of E-commerce. Numerous sites already use them to help customers find good products to purchase. This paper explains how recommender systems help E-commerce sites increase sales and analyzes business patterns for developers; based on such a system, a business developer can build products according to customer preferences. It examines patterns of recommender-system use, including the interfaces they present to customers, the technologies used to create the recommendations, and the inputs they need from customers. It concludes with ideas for new applications of recommender systems for business-to-customer interaction.
Title: Developing intellectual patterns in online business to customer interaction with dynamic recommender system. In: ICICES2014, pp. 1-5.
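One common technology behind the recommendations discussed above is item-based collaborative filtering: score a user's unrated items by their similarity to items the user has already rated. The rating matrix below is invented, and this is a generic sketch, not the paper's dynamic recommender.

```python
# Item-based collaborative filtering over a tiny user-item rating matrix.
import numpy as np

ratings = np.array([        # rows: users, cols: items (0 = unrated)
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
    [0, 1, 4, 5],
], dtype=float)

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

def recommend(user, k=1):
    """Score each unrated item by its similarity to the user's rated items."""
    scores = {}
    for j in range(ratings.shape[1]):
        if ratings[user, j] == 0:                       # candidate item
            scores[j] = sum(
                cosine(ratings[:, j], ratings[:, i]) * ratings[user, i]
                for i in range(ratings.shape[1]) if ratings[user, i] > 0
            )
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend(0))   # user 0's only unrated item is item 2
```

The customer-facing interface, the input collection, and the recommendation engine are exactly the three pattern components the abstract enumerates.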
Pub Date: 2014-02-01 | DOI: 10.1109/ICICES.2014.7034115
K. V. Reddy, P. Hari Deepak, Lvisweswararao, K. Srividhya, B. S. Srilatha, Indira Dutt
The Global Positioning System (GPS) is used to determine position and attitude in a wide range of applications. One topic of great interest in navigation and positioning is near point monitoring of the ionosphere using dual frequency GPS data. GPS satellites broadcast radio signals that enable receivers to determine location and regulate time on the earth's surface. These GPS signals carry the time and status of the entire satellite constellation, ranging signals, and navigation messages, and GPS provides positioning information twenty-four hours a day over any part of the world. The main source of error for GPS signals is the ionosphere, whose uneven distribution of electrons introduces a frequency-dependent path delay proportional to the total electron content (TEC) along the signal path. For single frequency GPS receivers, ionospheric effects can be mitigated by modelling with empirical or physics-based ionospheric models; for dual frequency receivers, the dispersive nature of the ionosphere in the electromagnetic spectrum allows the effects to be measured directly. Other errors affecting GPS signals, such as clock, orbital, satellite, and multipath effects, are reduced depending on the particular application. In this work we are concerned only with ionospheric errors: the ionosphere is modelled with a Kalman filter approach, and the results are validated using the SOPAC web application. Kalman filters are widely used to increase the accuracy and reliability of navigation. Using the thin shell approximation of the ionosphere as the basis, the ionosphere along the line of sight (LOS) is modelled with a Kalman filter. The paper also describes how the ionosphere affects GPS signals and discusses ionospheric TEC observables.
Near point monitoring of the ionosphere is investigated based on data from the dual frequency GPS receiver of an IGS station, NGRI Hyderabad (lat: 17° 22' 31" N, long: 78° 28' 27" E), collected through the web.
Title: Near point monitoring of the ionosphere using dual frequency GPS data over the Hyderabad region: A Kalman filter approach. In: ICICES2014, pp. 1-5.
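The dual-frequency TEC observable underlying this approach is standard: because the ionospheric delay scales as 1/f^2, slant TEC is proportional to the pseudorange difference, STEC = f1^2 f2^2 / (40.3 (f1^2 - f2^2)) * (P2 - P1). A scalar Kalman filter can then smooth the noisy TEC series. The sketch below uses the standard GPS constants but a synthetic measurement sequence, and a 1-state random-walk filter rather than the paper's exact model.

```python
# Dual-frequency slant TEC plus a scalar Kalman filter smoother.
import random

F1, F2 = 1575.42e6, 1227.60e6                     # GPS L1/L2 frequencies (Hz)
K = (F1**2 * F2**2) / (40.3 * (F1**2 - F2**2))    # el/m^2 per metre of (P2-P1)

def slant_tec_tecu(p1, p2):
    """Slant TEC in TECU (1 TECU = 1e16 el/m^2) from pseudoranges in metres."""
    return K * (p2 - p1) / 1e16

def kalman_smooth(meas, q=0.01, r=1.0):
    """1-state Kalman filter: random-walk TEC, noisy TEC measurements."""
    x, p, out = meas[0], 1.0, []
    for z in meas:
        p += q                      # predict (process noise q)
        g = p / (p + r)             # Kalman gain (measurement noise r)
        x += g * (z - x)            # update
        p *= (1 - g)
        out.append(x)
    return out

random.seed(1)
true_tec = 20.0                                         # TECU
meas = [true_tec + random.gauss(0, 1.0) for _ in range(200)]
est = kalman_smooth(meas)
print(round(est[-1], 1))    # settles near the true 20 TECU
```

A real implementation would replace the synthetic series with TEC values formed from the IGS station's dual-frequency pseudoranges and would model the thin-shell mapping to vertical TEC.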
Pub Date: 2014-02-01 | DOI: 10.1109/ICICES.2014.7033809
A. Shanthini, R. Chandrasekaran
Faults in a module tend to cause failure of the software product. Defective modules pose considerable risk by increasing development cost and decreasing customer satisfaction. Hence, in the software development life cycle it is very important to predict faulty modules as early as possible, so as to improve developers' ability to identify defect-prone modules and to focus quality assurance activities such as testing and inspection on them. Software metrics play a vital role in measuring software quality, and quality assurance activity should concentrate on them. Many researchers have focused on classification algorithms for predicting software defects; a classifier ensemble, on the other hand, can effectively improve classification performance compared with a single classifier. This paper addresses an ensemble approach based on Support Vector Machines (SVM) for fault prediction. The ensemble classifier was examined on the Eclipse package-level dataset and the NASA KC1 dataset.
We show that the proposed SVM ensemble is superior to the individual approach for software fault prediction in terms of classification rate, evaluated through Root Mean Square Error (RMSE), AUC-ROC, and ROC curves.
Title: Analyzing the effect of bagged ensemble approach for software fault prediction in class level and package level metrics. In: ICICES2014, pp. 1-5.
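A bagged SVM ensemble of the kind evaluated here can be sketched with scikit-learn's `BaggingClassifier` wrapped around an `SVC`, scored by AUC. The data below is synthetic and class-imbalanced (as defect data typically is), not the Eclipse or NASA KC1 metric sets.

```python
# Bagged ensemble of SVMs scored with AUC-ROC on synthetic defect-like data.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=400, n_features=10,
                           weights=[0.8, 0.2],       # imbalanced classes
                           random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

# 10 SVCs, each trained on a bootstrap resample of the training set.
ens = BaggingClassifier(SVC(probability=True), n_estimators=10,
                        random_state=0).fit(Xtr, ytr)
auc = roc_auc_score(yte, ens.predict_proba(Xte)[:, 1])
print(round(auc, 2))   # well above the 0.5 chance level on this toy data
```

Bagging helps here because each bootstrap resample gives the base SVM a slightly different decision boundary, and averaging their probability estimates reduces variance.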
Pub Date: 2014-02-01 | DOI: 10.1109/ICICES.2014.7033926
P. Vinoth Kumar, M. Maheshwari
VANET is a form of mobile ad hoc network that provides communication between vehicles and roadside base stations, with the aim of providing safety, traffic management, and infotainment services. VANET security has been a concern from the beginning: VANETs face several security threats, and a number of attacks can lead to loss of human life. Existing VANET systems use detection algorithms that detect attacks at verification time, incurring delay overhead. The Anonymous Batch Authenticated and Key Agreement (ABAKA) scheme authenticates multiple requests sent from different vehicles. However, it does not give any priority to requests from emergency vehicles, and a malicious vehicle can send false messages to other vehicles by spoofing the identities of valid vehicles, leading to a Sybil attack. The Priority Batch Verification Algorithm (PBVA) classifies the requests obtained from multiple vehicles so as to give emergency vehicles an immediate response with less time delay.
This system also prevents the Sybil attack by restricting the timestamps provided by the RSU at an early stage.
Title: Prevention of Sybil attack and priority batch verification in VANETs. In: ICICES2014, pp. 1-5.
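The two mechanisms above can be sketched together: serve emergency-vehicle requests before ordinary ones, and reject any message whose RSU-issued timestamp has already been seen, a simple stand-in for the timestamp-based Sybil restriction. The message fields and priority levels are illustrative, not the paper's PBVA.

```python
# Priority batch verification with a duplicate-timestamp (Sybil) check.
import heapq

EMERGENCY, NORMAL = 0, 1          # lower value = higher priority

def verify_batch(requests, seen_timestamps):
    """Serve requests in priority order; drop reused-timestamp messages."""
    heap = [(prio, i, msg) for i, (prio, msg) in enumerate(requests)]
    heapq.heapify(heap)
    served = []
    while heap:
        prio, _, msg = heapq.heappop(heap)
        if msg["ts"] in seen_timestamps:
            continue                      # replayed RSU timestamp: suspect Sybil
        seen_timestamps.add(msg["ts"])
        served.append(msg["id"])
    return served

reqs = [
    (NORMAL,    {"id": "car-1", "ts": 100}),
    (EMERGENCY, {"id": "ambulance", "ts": 101}),
    (NORMAL,    {"id": "car-2", "ts": 100}),   # reuses car-1's timestamp
]
print(verify_batch(reqs, set()))   # ['ambulance', 'car-1']
```

The ambulance jumps the queue despite arriving second, and the spoofed message sharing an already-used timestamp is discarded before verification work is spent on it.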
Pub Date: 2014-02-01 | DOI: 10.1109/ICICES.2014.7033848
K. Rajesh, R. Karthick, G. S. Raj
VANETs are used for high-speed, short-range communication among vehicles and between vehicles and roadside equipment, with consistent internet connectivity resulting in improved road safety alerts, comfort applications, and entertainment. Within VANETs, vehicle mobility leads to constant breakage of communication links between vehicles. Such link failures require a direct response from the routing protocols, which increases routing overhead and reduces network efficiency and scalability. In this paper, we propose a reactive location-based ad hoc routing protocol. The protocol minimizes routing overhead by making efficient use of all available location information, and it moves to reactive routing as location information degrades. Through analysis and simulation we show that our protocol is efficient in the presence of large location errors in all VANET environments.
Title: A new scalable reactive location based ad hoc routing protocol for VANETs. In: ICICES2014, pp. 1-5.
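The position-based half of such a protocol is usually greedy geographic forwarding: hand the packet to the neighbour closest to the destination, and when no neighbour makes progress (or location data is stale), fall back to reactive route discovery. The topology below is invented, and the fallback is indicated only by returning `None`; this is a generic sketch, not the authors' protocol.

```python
# Greedy geographic next-hop selection with an explicit fallback signal.
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def greedy_next_hop(current, neighbours, dest):
    """Neighbour strictly closer to dest than we are, else None (fallback)."""
    best = min(neighbours, key=lambda n: dist(neighbours[n], dest), default=None)
    if best is not None and dist(neighbours[best], dest) < dist(current, dest):
        return best
    return None   # local maximum: switch to reactive route discovery

pos = {"A": (0, 0), "B": (2, 1), "C": (4, 0), "D": (6, 0)}
print(greedy_next_hop(pos["A"], {"B": pos["B"], "C": pos["C"]}, pos["D"]))  # 'C'
```

Because each hop needs only neighbour positions, no route state is flooded while location information is fresh, which is where the overhead saving comes from; the reactive fallback covers the degraded-location case.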
Pub Date: 2014-02-01 | DOI: 10.1109/ICICES.2014.7034055
N. Soni, R. Negi
This paper considers the problem of asymptotic stability of a linear discrete-time system with interval-like time-varying delay subjected to actuator saturation, handled through anti-windup strategies. Using a delay-based Lyapunov function, a new criterion for the asymptotic stability of such systems is proposed in terms of linear matrix inequalities (LMIs). The proposed technique is illustrated by means of a numerical example.
Title: Stability analysis of linear discrete time system with time varying delay and actuator saturation. In: ICICES2014, pp. 1-7.
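The general setting behind such results can be written out as follows. This is the standard delay-dependent framework, not the paper's exact criterion or its saturation/anti-windup terms:

```latex
% Linear discrete-time system with interval time-varying delay:
\[
x(k+1) = A\,x(k) + A_d\,x\bigl(k - d(k)\bigr), \qquad d_1 \le d(k) \le d_2 ,
\]
% with a Lyapunov--Krasovskii functional of the usual form
\[
V(k) = x^{\top}(k) P x(k)
     + \sum_{i=k-d(k)}^{k-1} x^{\top}(i) Q x(i)
     + \sum_{j=-d_2+1}^{-d_1} \; \sum_{i=k+j}^{k-1} x^{\top}(i) Z x(i),
\]
% asymptotic stability holding when positive definite matrices
% $P, Q, Z \succ 0$ exist satisfying an LMI $\Xi(P, Q, Z) \prec 0$
% built from the forward difference $\Delta V(k) = V(k+1) - V(k)$.
```

The paper's contribution sits in how the saturation nonlinearity and the anti-windup gain enter the matrix inequality $\Xi$, which makes the condition checkable with a standard LMI solver.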