Design Reusability and Adaptability for Concurrent Software
Pub Date: 2013-01-01 | DOI: 10.1016/j.aasri.2013.10.069
Paniti Netinant
Developing reusable and adaptable concurrent software is very difficult. Reusability and adaptability in concurrent software tend to be minimal because functional code and concurrency concerns are typically intertwined. These interdependencies not only limit reusability, but also make modifications invasive when the design of the concurrent software must meet new requirements; the concurrent system has to be reengineered in order to meet future requirements. It is a misconception that concurrent object-oriented programming by itself promotes reusability and adaptability, as if none of these problems existed. Software developers have to deliberately design software for reusability and adaptability. We present a concurrent aspect framework that better supports reusability and adaptability. In the framework, functional components and system aspects are kept separate. This separation gives concurrent software better reusability and adaptability, and the framework makes reusability and adaptability manageable when building concurrent software. Our research focuses on the decomposition of concerns in concurrent software development, and our goal is to demonstrate a better model of concurrent software design.
{"title":"Design Reusability and Adaptability for Concurrent Software","authors":"Paniti Netinant","doi":"10.1016/j.aasri.2013.10.069","DOIUrl":"10.1016/j.aasri.2013.10.069","url":null,"abstract":"<div><p>Developing reusable and adaptable concurrent software are very difficult. Reusability and adaptability in concurrent software become minimal. These collaborations not only perimeter reusability, but also make modification invasive changes to meet new requirements in the design of the concurrent software. The concurrent system has to reengineer, in order to meet the future requirements. A misunderstanding, that a concurrent object-oriented programming generally endorses reusability and adaptability, as none of these problems is imposed. Software developers have to specifically deliberate software making reusability and adaptability. We present a concurrent aspect framework which better supporting reusability and adaptability. In the framework, functional components and system assets are relatively separated. This technique makes concurrent software developing better reusability and adaptability. A framework enables manageable reusability and adaptability for building of concurrent software. Our research focuses on decomposition of contact in concurrent software development and our goal is to demonstrate a better model of concurrent software design.</p></div>","PeriodicalId":100008,"journal":{"name":"AASRI Procedia","volume":"5 ","pages":"Pages 133-139"},"PeriodicalIF":0.0,"publicationDate":"2013-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/j.aasri.2013.10.069","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84871509","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Constrained Tunable-Q Wavelet Transform based Analysis of Cardiac Sound Signals
Pub Date: 2013-01-01 | DOI: 10.1016/j.aasri.2013.10.010
Shivnarayan Patidar, Ram Bilas Pachori
In this paper, we present a new method for the analysis of cardiac sound signals containing murmurs using the constrained tunable-Q wavelet transform (TQWT). The fundamental heart sounds (FHS) and murmurs are separately reconstructed by suitably constraining the TQWT. The segmentation of the reconstructed murmurs into heart beat cycles is achieved using the cardiac sound characteristic waveform (CSCW) of the reconstructed FHS. The frequency-domain approximate entropy, spectral entropy, Lempel-Ziv complexity, and time-domain Shannon entropy are computed for each segmented heart beat cycle for least squares support vector machine (LS-SVM) based classification. Experimental results are included to show the effectiveness of the proposed method.
{"title":"Constrained Tunable-Q Wavelet Transform based Analysis of Cardiac Sound Signals","authors":"Shivnarayan Patidar, Ram Bilas Pachori","doi":"10.1016/j.aasri.2013.10.010","DOIUrl":"10.1016/j.aasri.2013.10.010","url":null,"abstract":"<div><p>In this paper, we present a new method for analysis of cardiac sound signals containing murmurs using constrained tunable-Q wavelet transform (TQWT). The fundamental heart sounds (FHS) and murmurs are separately reconstructed by suitably constraining TQWT. The segmentation of reconstructed murmurs into heart beat cycles is achieved using cardiac sound characteristic wave-form (CSCW) of reconstructed FHS. The frequency domain based approximate entropy, spectral entropy, Lempel-Ziv complexity, and time domain Shannon entropy are computed for each segmented heart beat cycles for least squares support vector machine (LS-SVM) based classification. The experimental results are included to show the effectiveness of the proposed method.</p></div>","PeriodicalId":100008,"journal":{"name":"AASRI Procedia","volume":"4 ","pages":"Pages 57-63"},"PeriodicalIF":0.0,"publicationDate":"2013-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/j.aasri.2013.10.010","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81748793","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A New Technique for Generating Minimal Cut Sets in Nontrivial Network
Pub Date: 2013-01-01 | DOI: 10.1016/j.aasri.2013.10.060
Mohamed-Larbi Rebaiaia , Daoud Ait-Kadi
The network reliability analysis problem is central to a large body of scientific work. It consists of evaluating the all-terminal reliability of networks. Two classes of methods have emerged: exact and approximate. The aim of this paper is to present an efficient exact method for enumerating the minimal cut sets (MCS) of R-networks. The algorithm first determines the minimal path sets (MPS), from which the minimal cuts are generated by manipulating binary decision diagrams. The manipulation process consists of a series of transformation, reduction and filtering operations. The approach reduces computation time and memory space, and it was applied to evaluate the reliability of a national radio communication network.
{"title":"A New Technique for Generating Minimal Cut Sets in Nontrivial Network","authors":"Mohamed-Larbi Rebaiaia , Daoud Ait-Kadi","doi":"10.1016/j.aasri.2013.10.060","DOIUrl":"10.1016/j.aasri.2013.10.060","url":null,"abstract":"<div><p>Network reliability analysis problem is the center of many scientific productions. It consists of evaluating the all-terminal reliability of networks. Two classes have emerged; exact and approximate methods. The aim of this paper is to present an efficient exact method for enumerating minimal cuts (MCS) of R-networks. The algorithm proceeds by determining minimal paths set (MPS) and from which minimal cuts are generated by managing binary decision diagrams. The manipulation process consists of a series of transformations, reductions and filtering operations. The approach succeeds in the reduction of computation time and memory space and was applied for evaluating the reliability of a national radio communication network.</p></div>","PeriodicalId":100008,"journal":{"name":"AASRI Procedia","volume":"5 ","pages":"Pages 67-76"},"PeriodicalIF":0.0,"publicationDate":"2013-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/j.aasri.2013.10.060","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84013999","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Energy-efficient Relay Selection for Multicast Communication
Pub Date: 2013-01-01 | DOI: 10.1016/j.aasri.2013.10.064
Junxing Wang, Zhigang Chen, Yuan Yao
To address the problem that the system energy consumption of traditional cooperative multicast grows large as the number of users increases, four energy-efficient relay selection schemes are proposed to reduce system energy consumption. The optimal relay selection algorithm uses exhaustive search to select relays from all possible combinations of relays; this scheme reduces energy consumption and improves energy efficiency effectively, but at high computational complexity. Following the idea of iteration, the greedy algorithm chooses relays based on the relays already selected; although this scheme does not reach the energy efficiency of the optimal relay selection algorithm, it reduces computational complexity. In addition, relay selection based on the multicast rate of users selects relays according to the multicast rate of each user at which the remaining users with bad channel conditions could decode the data correctly, and relay selection based on channel state information (CSI) among users chooses relays according to the overall CSI between each user that decoded the data correctly and all users that failed to decode; these two schemes further reduce complexity compared with the greedy algorithm. Results show that all the proposed energy-efficient relay selection approaches improve energy efficiency while meeting the requirement of fairness.
{"title":"Energy-efficient Relay Selection for Multicast Communication","authors":"Junxing Wang, Zhigang Chen, Yuan Yao","doi":"10.1016/j.aasri.2013.10.064","DOIUrl":"10.1016/j.aasri.2013.10.064","url":null,"abstract":"<div><p>Aimed at the problem that system energy consumption of the traditional cooperative multicast is large with increase of the number of users, four energy-efficient relay selection schemes are proposed to reduce system energy consumption. By using exhaustive search method, the optimal relay selection algorithm selects relays from all the possible combinations of relays, the scheme could reduce energy consumption and improve energy efficiency effectively at high computational complexity. According to the thought of iteration, greedy algorithm chooses relays using the selected relays, though the scheme doesn’t achieve the energy efficiency of the optimal relay selection algorithm, it reduces computational complexity. Besides, relay selection based on multicast rate of users selects relays using the multicast rate of each user by which the rest users with bad channel condition could decode data correctly, and relay selection based on channel state information (CSI) among users chooses relays according to the whole CSI between each user decoded data correctly and all users failed to decode data, these two schemes further reduce complexity compared to greedy algorithm. Results show that all the proposed energy-efficient relay selection approaches could improve energy efficiency while meeting the requirement of fairness.</p></div>","PeriodicalId":100008,"journal":{"name":"AASRI Procedia","volume":"5 ","pages":"Pages 98-105"},"PeriodicalIF":0.0,"publicationDate":"2013-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/j.aasri.2013.10.064","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80801391","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On the Pursuit of Reliable Solutions for a Robotic Optimization Problem
Pub Date: 2013-01-01 | DOI: 10.1016/j.aasri.2013.10.005
Ricardo Soto , Stéphane Caro , Broderick Crawford
In robotics, pose errors are the positional and rotational errors of a given mechanical system. Those errors are commonly produced by the play among joined components, commonly known as joint clearances. Pose errors can be predicted by formulating two optimization models over continuous domains, which belong to the NP-hard class of problems. This paper focuses on providing rigorous and reliable solutions to this problem by using constraint programming.
{"title":"On the Pursuit of Reliable Solutions for a Robotic Optimization Problem","authors":"Ricardo Soto , Stéphane Caro , Broderick Crawford","doi":"10.1016/j.aasri.2013.10.005","DOIUrl":"10.1016/j.aasri.2013.10.005","url":null,"abstract":"<div><p>In robotics, pose errors are known as positional and rotational errors of a given mechanical system. Those errors are commonly produced by the play among joined components, commonly known as joint clearances. Predicting pose errors can be done via the formulation of two optimization models holding continuous domains, which belong to the NP-Hard class of problems. This paper focuses on providing rigorous and reliable solution to this problem by using constraint programing.</p></div>","PeriodicalId":100008,"journal":{"name":"AASRI Procedia","volume":"4 ","pages":"Pages 26-30"},"PeriodicalIF":0.0,"publicationDate":"2013-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/j.aasri.2013.10.005","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80755614","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Automatic Definition of Regions of Interest on Renal Scintigraphic Images
Pub Date: 2013-01-01 | DOI: 10.1016/j.aasri.2013.10.007
Yassine Aribi, Ali Wali, Mohamed Chakroun, Adel M. Alimi
Renography is a radionuclide technique that has been used in daily medical practice since the early sixties. It allows the function of each kidney to be measured separately in cases of unilateral or bilateral uro-nephrological disease. A new system for the analysis of renography is presented in this paper. Our system provides a fully automated method for drawing the renal regions of interest based on a multi-agent system that incorporates spatio-temporal interest-point detection on scintigraphic images, using the HOG3D descriptor to initialize the agents. We use two types of agents, supervisor agents and explorer agents; they communicate among themselves, and their behavior is inspired by the Fast Marching method. Our system was tested on many real patient cases and we obtained encouraging results. Clinical validation showed consistent agreement between the presented approach and manual segmentation by nuclear medicine professionals.
{"title":"Automatic Definition of Regions of Interest on Renal Scintigraphic Images","authors":"Yassine Aribi, Ali Wali, Mohamed Chakroun, Adel M. Alimi","doi":"10.1016/j.aasri.2013.10.007","DOIUrl":"10.1016/j.aasri.2013.10.007","url":null,"abstract":"<div><p>Renography is a radionuclide technique introduced in medicine since the early sixties in the daily practice. It allows measuring the function of each kidney separately in case of unilateral or bilateral uro-nephrological disease. A new system for analysis of Renography is presented in this paper. Our system aims to show a fully automated method for drawing the renal regions of interest based on a multi-agent system that incorporates spatio-temporal interest points detection on scintigraphic images by using the HOG3D descriptor to initialize agents. We used both types of agents, namely supervisor agents and explorator agents, they communicate among themselves and they inspire in their behavior from the Fast Marching method. Our system was tested on many real patients’ cases and we have obtained encouraging results. Clinical validation showed a constant agreement between the approach presented and manual segmentation by professionals in the nuclear medicine.</p></div>","PeriodicalId":100008,"journal":{"name":"AASRI Procedia","volume":"4 ","pages":"Pages 37-42"},"PeriodicalIF":0.0,"publicationDate":"2013-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/j.aasri.2013.10.007","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89397727","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Study of Qualitative Data Cluster Model based on Granular Computing
Pub Date: 2013-01-01 | DOI: 10.1016/j.aasri.2013.10.048
Haiyan Li, Shen Yang, Hong Liu
Granular computing theories from computer science are introduced into the statistical analysis of qualitative data, building on traditional qualitative data analysis methods. Multidimensional qualitative data are described by means of an information system, and a mathematical model of qualitative data clustering based on granular computing is given. The feasibility and superiority of the model are verified by processing massive data. This method may provide a new line of thought for the analysis of large and complex qualitative data.
{"title":"Study of Qualitative Data Cluster Model based on Granular Computing","authors":"Haiyan Li, Shen Yang, Hong Liu","doi":"10.1016/j.aasri.2013.10.048","DOIUrl":"10.1016/j.aasri.2013.10.048","url":null,"abstract":"<div><p>Granular computing theories in the field of computer are introduced into the statistical analysis of qualitative data, based on the traditional qualitative data analysis methods. Multidimensional qualitative data by use of information system are described, and the mathematical model of qualitative data cluster model based on granular computing is given. The feasibility and the superiority are verified by treating massive data. This method may provide a new train of thought for analysis of large and complex qualitative data.</p></div>","PeriodicalId":100008,"journal":{"name":"AASRI Procedia","volume":"4 ","pages":"Pages 329-333"},"PeriodicalIF":0.0,"publicationDate":"2013-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/j.aasri.2013.10.048","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81227513","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Parallel Processing of Sensor Network Data Using Column-oriented Databases
Pub Date: 2013-01-01 | DOI: 10.1016/j.aasri.2013.10.051
Kyung-Chang Kim , Choung-Seok Kim
Many wireless sensor network (WSN) applications require joins of sensor data belonging to various sensor nodes. For join processing, it is important to minimize the communication cost, since communication is the main consumer of battery power. In this paper, we introduce a parallel join technique for sensor networks. A WSN consists of many independent sensor nodes and provides a natural platform for a shared-nothing architecture to carry out parallel processing. The proposed parallel join algorithm is based on sensor data stored in column-oriented databases. A column-oriented database stores table data column-wise rather than row-wise, as in traditional relational databases. The proposed algorithm is energy-efficient for two clear reasons. First, unlike in relational databases, only the relevant columns are shipped to the join region for final join processing. Second, parallel join processing of sensor data also improves performance. The performance analysis shows that the proposed algorithm outperforms join algorithms for sensor data that are based on relational databases.
{"title":"Parallel Processing of Sensor Network Data Using Column-oriented Databases","authors":"Kyung-Chang Kim , Choung-Seok Kim","doi":"10.1016/j.aasri.2013.10.051","DOIUrl":"10.1016/j.aasri.2013.10.051","url":null,"abstract":"<div><p>Many wireless sensor network (WSN) applications require join of sensor data belonging to various sensor nodes. For join processing, it is important to minimize the communication cost since it is the main consumer of battery power. In this paper, we introduce a parallel join technique for sensor networks. A WSN consists of many independent sensor nodes and provides a natural platform for a shared-nothing architecture to carry out parallel processing. The proposed parallel join algorithm is based on sensor data that are stored in column-oriented databases. A column-oriented database store table data column-wise rather than row-wise as in traditional relational databases. The proposed algorithm is energy-efficient for two clear reasons. First, unlike relational databases, only relevant columns are shipped to the join region for final join processing. Second, parallel join processing of sensor data also improves performance. The performance analysis shows that the proposed algorithm outperforms join algorithms for sensor data that are based on relational databases.</p></div>","PeriodicalId":100008,"journal":{"name":"AASRI Procedia","volume":"5 ","pages":"Pages 2-8"},"PeriodicalIF":0.0,"publicationDate":"2013-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/j.aasri.2013.10.051","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73558863","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
MKF-Cuckoo: Hybridization of Cuckoo Search and Multiple Kernel-based Fuzzy C-means Algorithm
Pub Date: 2013-01-01 | DOI: 10.1016/j.aasri.2013.10.037
D. Binu, M. Selvi, Aloysius George
Discovering optimal clusters with the help of an optimization procedure is a recent trend in clustering. Accordingly, several algorithms have been developed in the literature to mine optimal clusters. Most of the optimization-based clustering algorithms presented in the literature focus only on the objective used in the well-known k-means clustering process. Instead of the k-means objective, researchers have designed more effective objective functions for clustering, so hybridizing those objectives with optimization algorithms can lead to effective clustering results. With this aim, we present a hybrid algorithm, called MKF-Cuckoo, which hybridizes the cuckoo search algorithm with the multiple kernel-based fuzzy c-means (MKFCM) algorithm. Here, the MKFCM objective is taken and solved with the cuckoo search algorithm, a recent optimization algorithm that has proved effective in many optimization problems. To demonstrate the effectiveness of the algorithm, its performance is compared with other algorithms in terms of clustering accuracy, Rand coefficient, Jaccard coefficient, and computational time on the iris and wine datasets. The results show that the hybrid algorithm obtained 96% accuracy on the iris data and 67% accuracy on the wine data.
{"title":"MKF-Cuckoo: Hybridization of Cuckoo Search and Multiple Kernel-based Fuzzy C-means Algorithm","authors":"D. Binu, M. Selvi, Aloysius George","doi":"10.1016/j.aasri.2013.10.037","DOIUrl":"10.1016/j.aasri.2013.10.037","url":null,"abstract":"<div><p>Discovering of optimal cluster through the help of optimization procedure is a recent trend in clustering process. Accordingly, several algorithms have been developed in the literature to mine optimal clusters. Most of the optimization- based clustering algorithms presented in the literature are only focused on the same objective given in the well-known clustering process, k-means clustering. Instead of k-means objective, some more effective objective functions are designed by the researchers for clustering. So, hybridization of those effective objectives with optimization algorithms can lead the effective clustering results. With the aim of this, we have presented a hybrid algorithm, called MKF-Cuckoo which is the hybridization of cuckoo search algorithm with the multiple kernel-based fuzzy c means algorithm. Here, MKFCM objective is taken and the same objective is solved through the cuckoo search algorithm which is one of the recent optimization algorithm proved effective in many optimization problems. For proving the effectiveness of the algorithm, the performance of the algorithm is comparatively analyzed with some other algorithm using clustering accuracy, rand coefficient, jaccard coefficient and computational time with iris and wine datasets. From the results, we can prove that the hybrid algorithm obtained 96% accuracy in iris data and 67% accuracy in wine data.</p></div>","PeriodicalId":100008,"journal":{"name":"AASRI Procedia","volume":"4 ","pages":"Pages 243-249"},"PeriodicalIF":0.0,"publicationDate":"2013-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/j.aasri.2013.10.037","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86045389","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Co-SRL: A Convex Optimization Algorithm for Anchor Localization in Wireless Sensor Networks
Pub Date: 2013-01-01 | DOI: 10.1016/j.aasri.2013.10.059
Wu Liu , Donghong Sun , Ping Ren , Yihui Zhang
This paper proposes a convex optimization method, called Co-SRL, for localizing sensor positions in wireless sensor networks. Co-SRL can be used to help a node localize a friend node or a mobile node using anchors. In Co-SRL, a convex optimization algorithm is used to estimate the positions of malicious nodes. Simulation results show that Co-SRL is both secure and robust: in an environment without collusion, Co-SRL can identify more than half of the malicious nodes, and in an environment with collusion, no more than 15% of the malicious nodes can escape identification by our method.
{"title":"Co-SRL: A Convex Optimization Algorithm for Anchor Localization in Wireless Sensor Networks","authors":"Wu Liu , Donghong Sun , Ping Ren , Yihui Zhang","doi":"10.1016/j.aasri.2013.10.059","DOIUrl":"10.1016/j.aasri.2013.10.059","url":null,"abstract":"<div><p>This paper proposed a Convex Optimization method which is called Co-SRL and is used to localize sensor location in Wireless Sensor Networks.Co-SRL can be used to help the node to localize a friendnode or mobile node using anchors. In Co-SRL, convex optimization algorithm is used forthe estimationof malicious nodeposition.Simulation result shows that Co-SRL is both secure and robust, in an environment without colluding, Co-SRLcan identify more than half of the malicious nodes; and in an environment with colluding, no more than 15% of malicious nodescan escape from the identification of our methods.</p></div>","PeriodicalId":100008,"journal":{"name":"AASRI Procedia","volume":"5 ","pages":"Pages 62-66"},"PeriodicalIF":0.0,"publicationDate":"2013-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/j.aasri.2013.10.059","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89195386","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}