Dayanand Jamkhandikar and V. D. Mytri, "CSS Based Trademark Retrieval System," 2014 International Conference on Electronic Systems, Signal Processing and Computing Technologies (ICESC), 2014. doi:10.1109/ICESC.2014.27

As trade practices expand globally, trademarks play an increasingly critical role, and evaluating new marks for distinctiveness has become a tedious job for registration offices: millions of trademarks are already registered and millions of further applications are filed. Various techniques are currently in use to check the distinctiveness of trademarks. The proposed method retrieves trademarks using the maxima of the curvature zero-crossing contours of the Curvature Scale Space (CSS) image as a feature vector representing the shape of an object's boundary contour. A matching algorithm compares two sets of maxima and assigns a matching value as a measure of similarity. The major advantage of the CSS technique is that the representation is robust to noise, scale, and orientation changes of the image. Each retrieval takes a reasonable amount of computation time, and the top-ranked images returned by the system agree with those selected by human perception.
Abha Chauhan and R. Mehra, "Analysis of QR Decomposition for MIMO Systems," 2014 International Conference on Electronic Systems, Signal Processing and Computing Technologies (ICESC), 2014. doi:10.1109/ICESC.2014.20

The Sphere Decoder (SD) is widely used in Multiple Input Multiple Output (MIMO) systems to reduce system complexity while achieving near Maximum Likelihood (ML) performance. Complexity grows with the antenna configuration and the constellation size. Pre-processing is a fundamental prerequisite in iterative detectors: it reduces system complexity by concentrating the received signal energy and thereby mitigating inter-symbol interference. The QR Decomposition (QRD) of the communication channel matrix in the pre-processing stage is important for the performance of the subsequent decoding steps, so QRD is commonly used in many MIMO detection algorithms. Sorted QR decomposition (SQRD) is an advanced variant that further improves MIMO detection performance. This paper presents the efficiency of the QRD and SQRD methods in terms of computational complexity, error-rate performance, and FPGA resource utilization. The main contribution of this work is a comparison of hardware implementations of the QRD and SQRD systems; QRD for a 4x4 MIMO system is implemented on various target FPGA platforms to compare area utilization.
P. Wararkar and S. Dorle, "Vehicular Adhoc Networks Handovers with Metaheuristic Algorithms," 2014 International Conference on Electronic Systems, Signal Processing and Computing Technologies (ICESC), 2014. doi:10.1109/ICESC.2014.33

Vehicular ad hoc networks (VANETs) are self-configuring networks whose nodes are vehicles (equipped with on-board computers), roadside infrastructure elements, sensors, and pedestrians' personal devices. The current state of the art in this field calls for studies on real outdoor experiments to validate new VANET-optimizing protocols and applications. Exchanging up-to-date information among vehicles is the most salient feature of a VANET; to do so, packets have to travel through the network from one node to another, which gives rise to the concepts of inter-VANET data handovers and the Vehicular Data Transfer Protocol (VDTP). In this work, we address the utility of metaheuristic algorithms (PSO, GA) for inter-VANET sensor data handovers and analyse their performance in maximizing throughput and improving reliability in a real VANET.
A. Ganar, C. Gode and S. Jambhulkar, "Enhancement of Image Retrieval by Using Colour, Texture and Shape Features," 2014 International Conference on Electronic Systems, Signal Processing and Computing Technologies (ICESC), 2014. doi:10.1109/ICESC.2014.48

Content-based image retrieval (CBIR) relies on three primitive features: colour, shape, and texture. This paper provides a specific path for using these primitive features to retrieve a desired image. In the proposed CBIR approach, the HSV colour space is first quantized to obtain the colour histogram and texture features, and a feature matrix is formed from these components. This matrix is then mapped to the characteristics of the global and local colour histograms, which are analysed and compared, and a co-occurrence matrix between the local image and the images in the database is used to retrieve matches. Shape features are extracted with a gradient method. Based on this principle, the CBIR system fuses colour, texture, and shape features to retrieve the desired image from a large database, and thus provides more efficient image retrieval, with better results than a single-feature retrieval system.
Romesh Laishram, Wahengbam Kanan Kumar, Anshuman Gupta and Khairnar Vinayak Prakash, "A Novel MRI Brain Edge Detection Using PSOFCM Segmentation and Canny Algorithm," 2014 International Conference on Electronic Systems, Signal Processing and Computing Technologies (ICESC), 2014. doi:10.1109/ICESC.2014.78

The introduction of many image processing and segmentation tools has made the procedures for mapping the human brain considerably more efficient. This paper presents a new and practical approach for enhancing the delicate underlying structures of human brain images captured by a Magnetic Resonance Imaging (MRI) machine. Edge detection is a fundamental tool for the basic study of the human brain, particularly for feature detection and feature extraction. The edge detection methodology presented here relies on two stages: first, the original MRI image is segmented using Particle Swarm Optimization incorporating Fuzzy C-Means clustering (PSOFCM); second, the Canny edge detection algorithm is used to detect the fine edges. Experiments show that this technique yields better edge-detected images of the human brain than the other edge detection methods discussed in the paper.
M. Aher, S. Pradhan and Y. Dandawate, "Rainfall Estimation over Roof-Top Using Land-Cover Classification of Google Earth Images," 2014 International Conference on Electronic Systems, Signal Processing and Computing Technologies (ICESC), 2014. doi:10.1109/ICESC.2014.24

Water is one of the most valuable resources available to mankind. Due to exponential growth in population and industrialization, the world is witnessing a scarcity of water, and water-table levels are falling more rapidly than ever. Proper management and appropriate utilization of water have therefore become the need of the hour, and the problem must be tackled with a novel approach. The idea behind this proposal is the design and development of a rainwater harvesting system based on rainfall-runoff estimation over rooftops. Google Earth imagery combines remotely sensed satellite images and aerial photographs, and offers a simple, economical, and precise way of obtaining land use and land cover information. In the proposed work, an efficient classification technique is presented in which the K-means clustering algorithm and GLCM-based textural parameters are used to classify Google Earth images into land use and land cover classes such as grass, water, rooftop, and soil areas. The area under each region is then computed, since area measurement is required for computing rainfall runoff with an estimation model. Experimental results show that the computed areas of rooftops and road surfaces are nearly accurate, and the estimated rainfall runoff is very close to the actual value.
Z. Rahaman and J. Sil, "DE Based Q-Learning Algorithm to Improve Speed of Convergence in Large Search Space Applications," 2014 International Conference on Electronic Systems, Signal Processing and Computing Technologies (ICESC), 2014. doi:10.1109/ICESC.2014.80

A main drawback of reinforcement learning is that it learns nothing from an episode until the episode is over, so the learning procedure is very slow for large search-space applications. The Differential Evolution (DE) algorithm is a population-based evolutionary optimization algorithm able to explore the search space iteratively. In this paper, an improvement of the Q-learning method is proposed using the DE algorithm, where guided randomness is incorporated into the search, resulting in fast convergence. A Markov Decision Process (MDP), a mathematical framework, is used to model the problem so that the large search space can be learned efficiently. The proposed algorithm exhibits better results in terms of speed and performance compared to the basic Q-learning algorithm.
Chandrashekhar T. Kukade, R. Deshmukh and R. Patrikar, "A Novel Parallel Multiplier for 2's Complement Numbers Using Booth's Recoding Algorithm," 2014 International Conference on Electronic Systems, Signal Processing and Computing Technologies (ICESC), 2014. doi:10.1109/ICESC.2014.103

A novel architecture of a parallel multiplier using a modified Booth's recoding unit for 2's complement numbers is presented in this paper. The basic Booth's recoding algorithm requires add and shift operations for multiplication, and these steps make the multiplier sequential. Parallel multiplication can be achieved using Booth's recoding algorithm and a simple Brown's array of adders, but that approach requires more adders to produce the correct output. Other parallel multiplication techniques based on Booth's recoding are available; however, these array multipliers also require add and shift operations plus an extra control unit. The proposed design has two major features: a modified Booth's recoding unit that produces the partial products, and a modified array of adders. The modified adder array uses fewer adders than a conventional Booth's recoding multiplier. Multiplexers are the basic building block of the Booth's recoding unit, and synthesis has been carried out using 180 nm technology. The proposed design consumes less power than a conventional Booth's recoding 2's complement parallel multiplier.
R. Khedikar, A. Kapur and M. Chawhan, "Energy Efficient Wireless Sensor Network," 2014 International Conference on Electronic Systems, Signal Processing and Computing Technologies (ICESC), 2014. doi:10.1109/ICESC.2014.14

Wireless sensor networks (WSNs) are increasingly popular for monitoring remote or hostile environments, and network lifetime is their critical characteristic. In a large-scale reconnaissance system, the lifetime of the WSN is the time period during which all targets can be covered, so the target coverage problem is one of the key issues in increasing network lifetime. The problem can be addressed by scheduling sensor activity so that a minimum number of nodes is active at any given time: sensors in the active set monitor all targets and transmit data to the base station, while the remaining nodes stay in sleep mode. In this paper we propose a Genetic Algorithm that keeps the minimum number of nodes active by dividing all deployed sensors into mutually exclusive subsets (set covers) such that each set covers all targets. The set covers then work in turns, with only one set active at a time while the others sleep, and heuristics are designed to compute the set covers efficiently, so that the lifetime of the WSN is prolonged by finding more sensor sets. Simulation results are presented to verify the approach; finding the maximum number of sensor covers reduces to the Cover Set problem, which has been proved to be NP-complete.
H. Sharma, Pankaj Agrawal and R. Kshirsagar, "Multipath Reliable Range Node Selection Distance Vector Routing for VANET: Design Approach," 2014 International Conference on Electronic Systems, Signal Processing and Computing Technologies (ICESC), 2014. doi:10.1109/ICESC.2014.53

A Vehicular Ad hoc Network (VANET) is a network among vehicles, and an efficient routing protocol is required to route data from a source node to a destination node. Among existing protocols, in Ad hoc On-Demand Multipath Distance Vector (AOMDV) the number of hops increases with data traffic, while the routes established by the Secure Ring Broadcast (SRB) protocol are very stable but cannot adapt to the circumstances of engaged nodes. A new protocol, Multipath Reliable Range Node Selection Distance Vector (MRRNSDV) routing, is proposed; it is a multipath protocol in which the number of hops remains the same even under high-traffic conditions. The protocol is designed to reduce the number of hops, find multiple paths to the destination node, and avoid channel congestion more efficiently, thereby increasing performance under higher traffic than the Ad hoc On-Demand Distance Vector (AODV) protocol.