Blind equalization of short burst signals based on twin support vector regressor and data-reusing method
Pub Date: 2014-11-20 | DOI: 10.1109/ICCCNT.2014.6963089
Ling Yang, Y. Fu, Zhifen Yang, Yanyan Wei
In this paper, blind equalization of short burst signals is formulated within the twin support vector regressor (TSVR) framework. The proposed algorithm combines the conventional TSVR cost function with a classical error function for blind equalization: Godard's error function, which describes the relationship between the input signals and the desired output signals of a blind equalizer, is embedded in the penalty terms of the TSVR, and the iteratively re-weighted least squares (IRWLS) algorithm is used to solve the TSVR with fast convergence. In addition, a data-reusing method is applied so that the small number of available data samples still yields stable convergence. Simulation experiments on constant modulus signals demonstrate the feasibility and validity of the proposed algorithm.
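For reference, Godard's error function has a standard closed form; how it enters the TSVR penalty terms is specific to the paper and not reproduced here. In the usual notation, y(n) is the equalizer output and a(n) the transmitted symbols, with p = 2 giving the classical constant modulus (CMA) cost:

```latex
% Godard cost function of order p for a blind equalizer.
J_p = \mathbb{E}\!\left[\left(\lvert y(n)\rvert^{p} - R_p\right)^{2}\right],
\qquad
R_p = \frac{\mathbb{E}\!\left[\lvert a(n)\rvert^{2p}\right]}
           {\mathbb{E}\!\left[\lvert a(n)\rvert^{p}\right]}
```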
{"title":"Blind equalization of short burst signals based on twin support vector regressor and data-reusing method","authors":"Ling Yang, Y. Fu, Zhifen Yang, Yanyan Wei","doi":"10.1109/ICCCNT.2014.6963089","DOIUrl":"https://doi.org/10.1109/ICCCNT.2014.6963089","url":null,"abstract":"In this paper, blind equalization of short burst signals is formulated with the twin support vector regressor (TSVR) framework. The proposed algorithm combine the conventional cost function of TSVR with classical error function applied to blind equalization: the Godard's error function that describes the relationship between the input signals and the desired output signals of a blind equalizer is contained in the penalty terms of TSVR, and the iterative re-weighted least square (IRWLS) algorithm is used for twin support vector regressor to achieve fast convergence. In addition, it utilizes the data-reusing method for small amounts of data samples to reach stable convergence. Simulation experiments for constant modulus signals are done to prove the feasibility and validity of the proposed algorithm.","PeriodicalId":140744,"journal":{"name":"Fifth International Conference on Computing, Communications and Networking Technologies (ICCCNT)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124316206","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Survey on scheduling in hybrid clouds
Pub Date: 2014-11-20 | DOI: 10.1109/ICCCNT.2014.6963050
Nitish Chopra, Sarbjeet Singh
Cloud computing promises virtually unlimited access to on-demand resources; this holds for public clouds, which charge on a usage basis, but not for private clouds, which are owned and provisioned by the user. The composition of a private and a public cloud is known as a hybrid cloud. In a hybrid cloud, task scheduling is a complex process because jobs can be allocated to either the private or the public cloud, and a variety of scheduling algorithms decide which resources should be leased from the public cloud to complete a task. In this paper, we survey the scheduling techniques and algorithms used in hybrid clouds and classify them according to their optimization criteria and the features they provide. The paper is organized in five parts: introduction, the scheduling process in hybrid clouds, types of scheduling techniques with their classification, the execution cost parameter, and the conclusion.
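As a toy illustration of the kind of lease-or-not decision such algorithms make, the sketch below places a task on the private cloud only when its deadline can be met there, and otherwise leases a public VM and reports the estimated execution cost. All names and parameters (`Task`, `decide_placement`, MIPS ratings, prices) are hypothetical and not drawn from any surveyed algorithm.

```python
from dataclasses import dataclass

@dataclass
class Task:
    length: float        # work, e.g. million instructions
    deadline: float      # seconds from submission

def decide_placement(task: Task, private_mips: float, private_queue: float,
                     public_mips: float, public_price_per_hour: float):
    """Keep the task on the private cloud if it can meet its deadline there;
    otherwise lease a public VM and return its estimated cost."""
    private_finish = private_queue + task.length / private_mips
    if private_finish <= task.deadline:
        return ("private", 0.0)
    public_runtime = task.length / public_mips
    cost = (public_runtime / 3600.0) * public_price_per_hour
    return ("public", cost)

# Example: a tight deadline forces a lease on the public cloud.
placement, cost = decide_placement(Task(length=9000, deadline=60),
                                   private_mips=100, private_queue=30,
                                   public_mips=400, public_price_per_hour=0.5)
print(placement, round(cost, 4))   # public 0.0031
```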
{"title":"Survey on scheduling in hybrid clouds","authors":"Nitish Chopra, Sarbjeet Singh","doi":"10.1109/ICCCNT.2014.6963050","DOIUrl":"https://doi.org/10.1109/ICCCNT.2014.6963050","url":null,"abstract":"Cloud computing provides infinite access to on demand resources, this is true in case of public clouds but not in case of private clouds. As public cloud charges on usage basis, private clouds are owned by user. Composition of private and public cloud is known as hybrid cloud. In hybrid cloud task scheduling is a complex process as jobs can be allocated either in private cloud or public cloud. Deciding which resource should be taken on lease from public cloud into private cloud to complete the task is decided by number of scheduling algorithms. In this paper, we look at the various scheduling techniques and algorithms that are used in hybrid cloud and these are classified according to their optimization criteria and features provided by them. This paper is composed of five parts as introduction, scheduling process in hybrid clouds, types of scheduling techniques with their classification, execution cost parameter and conclusion is given in the end.","PeriodicalId":140744,"journal":{"name":"Fifth International Conference on Computing, Communications and Networking Technologies (ICCCNT)","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125143473","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Extending self-organizing network availability using genetic algorithm
Pub Date: 2014-11-20 | DOI: 10.1109/ICCCNT.2014.6963059
M. Elhoseny, Xiaohui Yuan, H. K. El-Minir, A. Riad
In this paper, we propose a novel genetic-algorithm-based method for constructing a wireless sensor network so as to extend its functionality and availability. In the proposed method, the structure of the network is decided dynamically, and the organization changes after each message transmission round. With the goal of optimizing the lifespan of the entire network, a genetic algorithm searches for the most suitable sensor nodes to serve as cluster heads and relay messages to the base station. Using the chosen cluster heads, sensor clusters are formed that minimize the total intra-cluster node-to-cluster-head distance. Compared with eight other methods, our experimental results demonstrate that the proposed method greatly extends the network life: the improvement over the second-best case ranges from 13% to 43.44%. In each transmission round, the remaining energy of the sensor nodes is fairly even, with only small fluctuations; that is, the variance of the remaining energy is quite low, which implies that the sensor nodes share the burden of relaying messages and hence prolong the overall network life.
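As an illustration of how such a search might be encoded (not the paper's implementation), the minimal sketch below uses a binary chromosome that marks which nodes act as cluster heads; fitness penalizes the total member-to-head distance plus a per-head cost so the degenerate all-heads solution is not optimal, and one generation applies truncation selection with bit-flip mutation. All weights and rates are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
positions = rng.uniform(0, 100, size=(50, 2))   # 50 sensor nodes in a 100 m x 100 m field

def fitness(chromosome: np.ndarray) -> float:
    """Higher is better: negative total distance from each member node to its
    nearest cluster head, minus an (illustrative) per-head cost."""
    heads = positions[chromosome == 1]
    if len(heads) == 0:
        return -np.inf
    members = positions[chromosome == 0]
    d = np.linalg.norm(members[:, None, :] - heads[None, :, :], axis=2)
    return -(d.min(axis=1).sum() + 25.0 * len(heads))

# One generation: truncation selection followed by bit-flip mutation.
pop = (rng.random((20, 50)) < 0.1).astype(np.uint8)   # ~10% of nodes start as heads
scores = np.array([fitness(c) for c in pop])
parents = pop[np.argsort(scores)[-10:]]               # keep the 10 fittest
children = parents ^ (rng.random(parents.shape) < 0.02).astype(np.uint8)
pop = np.vstack([parents, children])
```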
{"title":"Extending self-organizing network availability using genetic algorithm","authors":"M. Elhoseny, Xiaohui Yuan, H. K. El-Minir, A. Riad","doi":"10.1109/ICCCNT.2014.6963059","DOIUrl":"https://doi.org/10.1109/ICCCNT.2014.6963059","url":null,"abstract":"In this paper, we propose a novel method based on genetic algorithm for constructing the wireless sensor network to extend its functionality and availability. In our proposed method, the structure of the network is dynamically decided and the organization differs after each message transmission round. With the goal of optimizing the lifespan of the entire network, genetic algorithm is employed to search for the most suitable sensor nodes as the cluster heads to relay the messages to base station. Using the chosen cluster heads, sensor clusters are formed that minimize the total inner cluster node-to-cluster head distance. Compared with eight other methods, our experimental results demonstrated that our proposed method greatly extended the network life. The network life improvement rate with respect to the second best cases is in the range of 13% to 43.44%. In each transmission round, the remaining energy of sensor nodes are fairly even with some fluctuations. That is, as a consequence of our proposed method, the variance among remaining energy is quite low, which implies that the sensor nodes shared the burden of relaying messages and, hence, elongated the overall network life.","PeriodicalId":140744,"journal":{"name":"Fifth International Conference on Computing, Communications and Networking Technologies (ICCCNT)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126890939","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The secret image sharing scheme based on improved LSB algorithm
Pub Date: 2014-07-11 | DOI: 10.1109/ICCCNT.2014.6963003
Xiaohong Li, Yuan Chen, Feng Wang, Tao Qinqin
Secret sharing is an important branch of modern cryptography, and secret image sharing extends it to images; sharing an image with this technology ensures the image's integrity and security. In this paper, the secret image is split into random shadow images generated by a (t, n) threshold secret image sharing scheme. An improved LSB method is then used to hide these shadows in n meaningful, randomly provided cover images, which gives good invisibility; comparing the similarity between the extracted and original watermarks shows good results. Finally, noise and cropping attacks illustrate that the scheme has good robustness.
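To illustrate the two ingredients named above, here is a minimal sketch assuming Shamir-style (t, n) sharing of a single pixel value over GF(251) (a common choice in threshold image sharing; pixel values of 251 or above need separate handling in practice) and plain LSB substitution for embedding the share bits into a cover image. The paper's improved LSB variant is not specified here, so this is only the baseline technique.

```python
import numpy as np

P = 251  # prime modulus for GF(251)

def make_shares(secret: int, t: int, n: int, rng) -> list[tuple[int, int]]:
    """(t, n) Shamir sharing of one pixel value: evaluate a random degree-(t-1)
    polynomial with constant term `secret` at x = 1..n."""
    coeffs = [secret] + [int(rng.integers(0, P)) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def embed_lsb(cover: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Hide a bit stream in the least significant bits of a cover image."""
    flat = cover.flatten().copy()
    flat[:len(bits)] = (flat[:len(bits)] & 0xFE) | bits
    return flat.reshape(cover.shape)

rng = np.random.default_rng(1)
shares = make_shares(secret=123, t=3, n=5, rng=rng)
share_bits = np.unpackbits(np.array([y for _, y in shares], dtype=np.uint8))
cover = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)
stego = embed_lsb(cover, share_bits)
assert np.all((stego ^ cover) <= 1)   # only the LSBs were changed
```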
{"title":"The secret image sharing scheme based on improved LSB algorithm","authors":"Xiaohong Li, Yuan Chen, Feng Wang, Tao Qinqin","doi":"10.1109/ICCCNT.2014.6963003","DOIUrl":"https://doi.org/10.1109/ICCCNT.2014.6963003","url":null,"abstract":"Secret sharing is a very important branch in the field of modern cryptography. Secret image sharing technology is the expansion of secret sharing in terms of image. When using this technology to share images, it can ensure the integrity and security of the image. In this paper, secret image used random shadow image which is generated from (t, n) threshold secret image sharing scheme. This paper adopted improved method of LSB to hide the secrets in n meaningful cover images that were provided randomly, which reflected good invisibility, then compared the level of similarity of extracted watermark and original watermark, and eventually got good effects. Finally, we made noise and cutting attack, which illustrated that the scheme has good robustness.","PeriodicalId":140744,"journal":{"name":"Fifth International Conference on Computing, Communications and Networking Technologies (ICCCNT)","volume":"63 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-07-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124929881","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Integrated location fingerprinting and physical neighborhood for WLAN probabilistic localization
Pub Date: 2014-07-11 | DOI: 10.1109/ICCCNT.2014.6963028
Mu Zhou, Qiao Zhang, Z. Tian, Feng Qiu, Qi Wu
To exploit the physical neighborhood relations of adjacent reference points (ARPs) in the radio-map, a new approach is proposed that constructs both a location fingerprinting database and a physical neighborhood database in the off-line phase, so as to enhance the accuracy of wireless local area network (WLAN) probabilistic localization. In the on-line phase, we first rely on Bayesian inference to find the most adjacent points (MAPs) with respect to each testing point (TP). Then, based on the physical neighborhood database, we obtain the physical adjacent points (PAPs) corresponding to these MAPs. From the set of MAPs and PAPs, we choose the feature points (FPs) for a second Bayesian inference. Finally, we locate the TP at the geometric center of the chosen FPs with the maximum posterior probabilities.
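The two-stage MAP/PAP/FP refinement is specific to the paper; the sketch below shows only the basic building block under assumed conditions: a Gaussian RSS likelihood per reference point with a uniform prior, selection of the k most adjacent points, and a final estimate at their geometric center. The radio-map values and the sigma parameter are illustrative.

```python
import numpy as np

def gaussian_log_likelihood(rss: np.ndarray, mu: np.ndarray, sigma: float = 4.0):
    """Log P(observed RSS | reference point), assuming independent Gaussian
    RSS per access point (sigma in dB is an assumed value)."""
    return -0.5 * np.sum((rss - mu) ** 2) / sigma**2

def locate(observed: np.ndarray, fingerprints: np.ndarray,
           coords: np.ndarray, k: int = 3) -> np.ndarray:
    """Pick the k reference points with the highest posterior (uniform prior)
    and return the geometric center of their coordinates."""
    logp = np.array([gaussian_log_likelihood(observed, mu) for mu in fingerprints])
    best = np.argsort(logp)[-k:]
    return coords[best].mean(axis=0)

# Toy radio-map: 4 reference points x 3 access points (RSS in dBm).
fingerprints = np.array([[-40., -60., -70.],
                         [-55., -45., -65.],
                         [-70., -60., -42.],
                         [-60., -55., -58.]])
coords = np.array([[0., 0.], [5., 0.], [0., 5.], [5., 5.]])
print(locate(np.array([-52., -50., -60.]), fingerprints, coords, k=2))
```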
{"title":"Integrated location fingerprinting and physical neighborhood for WLAN probabilistic localization","authors":"Mu Zhou, Qiao Zhang, Z. Tian, Feng Qiu, Qi Wu","doi":"10.1109/ICCCNT.2014.6963028","DOIUrl":"https://doi.org/10.1109/ICCCNT.2014.6963028","url":null,"abstract":"For the purpose of utilizing physical neighborhood relations of adjacent reference points (ARPs) in radio-map, a new approach by constructing both location fingerprinting database and physical neighborhood database in off-line phase is proposed to enhance the accuracy of wireless local area network (WLAN) probabilistic localization. In the on-line phase, we first rely on Bayesian inference to find the most adjacent points (MAPs) with respect to each testing point (TP). Then, based on the physical neighborhood database, we obtain the physical adjacent points (PAPs) corresponding to these MAPs. In the set of MAPs and PAPs, we choose the feature points (FPs) for the second Bayesian inference. Finally, we locate the TP at the geometric center of the chosen FPs which has the maximum posterior probabilities.","PeriodicalId":140744,"journal":{"name":"Fifth International Conference on Computing, Communications and Networking Technologies (ICCCNT)","volume":"62 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-07-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123424246","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Depth map-based human activity tracking and recognition using body joints features and Self-Organized Map
Pub Date: 2014-07-11 | DOI: 10.1109/ICCCNT.2014.6963013
A. Jalal, S. Kamal, Daijin Kim
In this paper, we implement a human activity tracking and recognition (HAR) system that utilizes body joints features computed from depth maps. Depth maps are processed to track human silhouettes by considering the temporal continuity constraints of human motion, and centroids are computed for each activity based on contour generation. For the body joints features, depth silhouettes are first processed via geodesic distance to identify anatomical landmarks, which yield joint points for specific body parts; the body joints are then used to produce centroid distance features and key joints distance features. Finally, a Self-Organized Map (SOM) is employed to train on these features and recognize the different human activities. Experimental results show that the body joints features achieve a higher recognition rate than conventional features. The proposed system should be applicable to e-healthcare systems for monitoring elderly people, surveillance systems for observing pedestrian traffic areas, and indoor environment systems that recognize the activities of multiple users.
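A minimal sketch of the two feature families named above plus a single SOM update step, assuming 15 joints per frame and illustrative key-joint index pairs; the paper's exact landmark set, feature normalization, and SOM topology are not specified here.

```python
import numpy as np

def joint_features(joints: np.ndarray) -> np.ndarray:
    """joints: (J, 3) array of 3-D body joint positions for one frame.
    Returns centroid-distance features plus a few key-joint distances."""
    centroid = joints.mean(axis=0)
    centroid_dist = np.linalg.norm(joints - centroid, axis=1)
    # Illustrative key pairs, e.g. (hand, head), (foot, hip); indices are assumed.
    key_pairs = [(0, 5), (1, 6), (2, 7)]
    key_dist = np.array([np.linalg.norm(joints[a] - joints[b]) for a, b in key_pairs])
    return np.concatenate([centroid_dist, key_dist])

def som_step(weights: np.ndarray, x: np.ndarray, lr: float = 0.1, radius: float = 1.0):
    """One Self-Organized Map update: move the best-matching unit and its
    grid neighbours toward the feature vector x."""
    rows, cols, _ = weights.shape
    dists = np.linalg.norm(weights - x, axis=2)
    bi, bj = np.unravel_index(dists.argmin(), (rows, cols))
    for i in range(rows):
        for j in range(cols):
            g = np.exp(-((i - bi) ** 2 + (j - bj) ** 2) / (2 * radius**2))
            weights[i, j] += lr * g * (x - weights[i, j])
    return weights

rng = np.random.default_rng(2)
frame = rng.normal(size=(15, 3))        # 15 joints per frame (assumed)
weights = rng.normal(size=(4, 4, 18))   # 4x4 SOM over 15 + 3 = 18-D features
weights = som_step(weights, joint_features(frame))
```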
{"title":"Depth map-based human activity tracking and recognition using body joints features and Self-Organized Map","authors":"A. Jalal, S. Kamal, Daijin Kim","doi":"10.1109/ICCCNT.2014.6963013","DOIUrl":"https://doi.org/10.1109/ICCCNT.2014.6963013","url":null,"abstract":"In this paper, we implement human activity tracking and recognition system utilizing body joints features using depth maps. During HAR settings, depth maps are processed to track human silhouettes by considering temporal continuity constraints of human motion information and compute centroids for each activity based on contour generation. In body joints features, depth silhouettes are computed first through geodesic distance to identify anatomical landmarks which produce joint points information from specific body parts. Then, body joints are processed to produce centroid distance features and key joints distance features. Finally, Self-Organized Map (SOM) is employed to train and recognize different human activities from the features. Experimental results show that body joints features achieved high recognition rate over the conventional features. The proposed system should be applicable as e-healthcare systems for monitoring elderly people, surveillance systems for observing pedestrian traffic areas and indoor environment systems which recognize activities of multiple users.","PeriodicalId":140744,"journal":{"name":"Fifth International Conference on Computing, Communications and Networking Technologies (ICCCNT)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-07-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128770415","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A secure and robust crypto system based on unique dynamic key generation scheme
Pub Date: 2014-07-11 | DOI: 10.1109/ICCCNT.2014.6963000
S. Swaminathan, A. Karthick, S. Suganya
Information security plays a cardinal role in every domain involving the exchange of information, and there is a constant demand for dynamic and stronger encryption methods. A plethora of encryption algorithms exist and achieve data security to a great extent, but there is always the possibility that an adversary cracks the algorithm or mounts other attacks to gain access to confidential information, so striking a balance between versatility and robustness is important to maintain the level of security. This paper proposes a new cryptosystem, "HB2IG - Hex Binary 2's complement Invert Gray", which offers a higher level of security through a unique dynamic key generation scheme: a symmetric encryption system coupled with a Unique Dynamic Crypto Key and a Transport Cipher Key.
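The acronym names the underlying transformations but the abstract does not state their exact order or keying, so the following is only a speculative illustration of the individual stages on a single byte (hex display, two's complement, bitwise inversion, binary-to-Gray); it is not the paper's actual cipher.

```python
def twos_complement(b: int) -> int:
    """Two's complement of a byte (mod 256)."""
    return (-b) & 0xFF

def invert(b: int) -> int:
    """Bitwise inversion (one's complement) of a byte."""
    return b ^ 0xFF

def binary_to_gray(b: int) -> int:
    """Standard binary-reflected Gray code of a byte."""
    return b ^ (b >> 1)

def hb2ig_byte(b: int) -> int:
    # One plausible composition of the stages named in the abstract;
    # the paper's actual ordering and key mixing are not specified here.
    return binary_to_gray(invert(twos_complement(b)))

plain = b"HI"
print([hex(hb2ig_byte(c)) for c in plain])   # ['0x64', '0x6c']
```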
{"title":"A secure and robust crypto system based on unique dynamic key generation scheme","authors":"S. Swaminathan, A. Karthick, S. Suganya","doi":"10.1109/ICCCNT.2014.6963000","DOIUrl":"https://doi.org/10.1109/ICCCNT.2014.6963000","url":null,"abstract":"Information security is always given a cardinal role in every domain related to exchange of information. There is always a constant demand for dynamic and stronger encryption methods. Currently a plethora of encryption algorithms exist and have been achieving data security to a great extent. But, there is always a possibility of an adversary to crack the algorithm or use different attacks to gain access to confidential information. Striking a balance between the versatility and robustness is always important to maintain the level of security. A new crypto system has been proposed in this paper “HB2IG - Hex Binary 2's complement Invert Gray ” which offers higher level of security through a unique dynamic key generation scheme involving the use of symmetric encryption system coupled with a Unique Dynamic Crypto Key and Transport Cipher Key.","PeriodicalId":140744,"journal":{"name":"Fifth International Conference on Computing, Communications and Networking Technologies (ICCCNT)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-07-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129900006","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Improving security mechanism to access HDFS data by mobile consumers using middleware-layer framework
Pub Date: 2014-07-11 | DOI: 10.1109/ICCCNT.2014.6963051
Surbhi Singh, Sangeeta Sharma
The revolution in technology has led to cloud computing, which delivers on-demand, easy access to large shared pools of online stored data, software, and applications. It has changed the way IT resources are utilized, but at the cost of security breaches such as phishing attacks, impersonation, and loss of confidentiality and integrity. This work therefore addresses the core problem of providing strong security to mobile consumers of the public cloud: improving user mobility by letting users access data stored on the public cloud securely through tokens, without depending on a third party to generate them. The paper simplifies the authentication and authorization of mobile users by implementing a middleware-centric framework, the MiLAMob model, in front of the large online data storage system HDFS. It allows consumers to access data in HDFS from mobile devices or through social identity providers such as Facebook, Gmail, and Yahoo using the OAuth 2.0 protocol. For authentication, tokens are generated with a one-time password technique and then encrypted using AES. By implementing flexible user-based policies and standards, the model also improves the authorization process.
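A minimal sketch of the token mechanism described above, assuming the pycryptodome library and AES in EAX mode (the abstract does not state the mode); the MiLAMob middleware, OAuth 2.0 flow, and HDFS wiring are omitted.

```python
import secrets
from Crypto.Cipher import AES          # pycryptodome
from Crypto.Random import get_random_bytes

def generate_token(key: bytes) -> tuple[bytes, bytes, bytes]:
    """Generate a one-time password and encrypt it with AES to form
    an access token (nonce, ciphertext, tag)."""
    otp = f"{secrets.randbelow(10**6):06d}".encode()   # 6-digit OTP
    cipher = AES.new(key, AES.MODE_EAX)
    ciphertext, tag = cipher.encrypt_and_digest(otp)
    return cipher.nonce, ciphertext, tag

def verify_token(key: bytes, nonce: bytes, ciphertext: bytes, tag: bytes) -> bytes:
    """Decrypt and authenticate the token; raises ValueError on tampering."""
    cipher = AES.new(key, AES.MODE_EAX, nonce=nonce)
    return cipher.decrypt_and_verify(ciphertext, tag)

key = get_random_bytes(16)                 # AES-128 key shared with the middleware
nonce, ct, tag = generate_token(key)
print(verify_token(key, nonce, ct, tag))   # the original 6-digit OTP
```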
{"title":"Improving security mechanism to access HDFS data by mobile consumers using middleware-layer framework","authors":"Surbhi Singh, Sangeeta Sharma","doi":"10.1109/ICCCNT.2014.6963051","DOIUrl":"https://doi.org/10.1109/ICCCNT.2014.6963051","url":null,"abstract":"Revolution in the field of technology leads to the development of cloud computing which delivers on-demand and easy access to the large shared pools of online stored data, softwares and applications. It has changed the way of utilizing the IT resources but at the compromised cost of security breaches as well such as phishing attacks, impersonation, lack of confidentiality and integrity. Thus this research work deals with the core problem of providing absolute security to the mobile consumers of public cloud to improve the mobility of user's, accessing data stored on public cloud securely using tokens without depending upon the third party to generate them. This paper presents the approach of simplifying the process of authenticating and authorizing the mobile user's by implementing middleware-centric framework called MiLAMob model with the huge online data storage system i.e. HDFS. It allows the consumer's to access the data from HDFS via mobiles or through the social networking sites eg. facebook, gmail, yahoo etc using OAuth 2.0 protocol. For authentication, the tokens are generated using one-time password generation technique and then encrypting them using AES method. By implementing the flexible user based policies and standards, this model improves the authorization process.","PeriodicalId":140744,"journal":{"name":"Fifth International Conference on Computing, Communications and Networking Technologies (ICCCNT)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-07-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129125320","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Real time implementation of Elliptic Curve Cryptography over a open source VoIP server
Pub Date: 2014-07-11 | DOI: 10.1109/ICCCNT.2014.6963029
T. Subashri, A. Arjun, S. Ashok
This paper presents the design and implementation of Elliptic Curve Cryptography in an Asterisk VoIP server, which serves as an exchange for placing voice calls over the internet. Voice over internet protocol refers to the transmission of speech encoded into data packets across networks. VoIP networks are prone to confidentiality threats because of the weak keys used by the AES algorithm to encrypt VoIP packets. To strengthen the encryption/decryption key, the Elliptic Curve Diffie-Hellman (ECDH) key agreement scheme is employed, whose smaller key sizes result in faster computations. The elliptic curve used in this paper is a modified NIST P-256 curve, and a key generation algorithm using split exponents for fast exponentiation has been implemented to speed up key generation and increase its randomness; the split exponents also help increase the security of the generated keys. The key generated by ECDH is highly secure because the underlying elliptic curve discrete logarithm problem is very hard. The method successfully carries real-time voice calls between VoIP clients connected to the internet; the ECDH key exchange is implemented on an Asterisk PBX (Private Branch eXchange) using an AGI (Asterisk Gateway Interface) server.
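For comparison, here is how a standard (unmodified) NIST P-256 ECDH exchange with an AES session-key derivation looks using the Python cryptography package; the paper's modified curve and split-exponent key generation are not reproduced here.

```python
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes

# Each endpoint generates an ephemeral key pair on NIST P-256.
alice_priv = ec.generate_private_key(ec.SECP256R1())
bob_priv = ec.generate_private_key(ec.SECP256R1())

# Exchange public keys (over call signalling, in this setting) and
# compute the same shared secret on both sides.
alice_shared = alice_priv.exchange(ec.ECDH(), bob_priv.public_key())
bob_shared = bob_priv.exchange(ec.ECDH(), alice_priv.public_key())
assert alice_shared == bob_shared

# Derive a 128-bit AES key for encrypting the media stream.
aes_key = HKDF(algorithm=hashes.SHA256(), length=16, salt=None,
               info=b"voip-media-key").derive(alice_shared)
print(aes_key.hex())
```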
{"title":"Real time implementation of Elliptic Curve Cryptography over a open source VoIP server","authors":"T. Subashri, A. Arjun, S. Ashok","doi":"10.1109/ICCCNT.2014.6963029","DOIUrl":"https://doi.org/10.1109/ICCCNT.2014.6963029","url":null,"abstract":"This paper presents the design and the implementation of Elliptic Curve Cryptography in an Asterisk VoIP server which serves as an exchange for placing voice calls over the internet. Voice over internet protocol refers to the transmission of speech encoded into data packets transmitted across networks. VoIP networks are prone to confidentiality threats due to the weak keys used by the AES algorithm for encryption of the VoIP packets. So, in order to strengthen the key for encryption/decryption, Elliptic Curve Diffie-Hellman (ECDH) Algorithm key agreement scheme is employed with smaller key sizes resulting in faster computations. The elliptic curve used in this paper is a modified NIST P-256 curve and key generation algorithm using split exponents for fast exponentiation has been implemented to speed up and increase the randomness of key generation. The implementation of split exponents also help in increasing the security of the keys generated. The key generated by ECDH is highly secure because the discrete logarithmic problem is very difficult in this scheme. This Method is successfully carrying out voice calls on VoIP clients connected to the internet. This ECDH key exchanging mechanism for voice calls in real time is implemented on an Asterisk PBX (Private Branch eXchange), using AGI(Asterisk Gateway Interface) server.","PeriodicalId":140744,"journal":{"name":"Fifth International Conference on Computing, Communications and Networking Technologies (ICCCNT)","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-07-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123856553","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Correlated received signal strength correction for radio-map based indoor Wi-Fi localization
Pub Date: 2014-07-11 | DOI: 10.1109/ICCCNT.2014.6963140
Mu Zhou, Qiao Zhang, Z. Tian, Feng Qiu, Qi Wu
The purpose of received signal strength (RSS) correction in radio-map based Wi-Fi localization is to obtain a set of fine-grained, location-dependent RSS fingerprints and thereby achieve highly accurate and reliable localization. To this end, RSS correction is performed on the raw RSS samples to eliminate environmental noise from the radio-map. This paper presents a comprehensive analysis of the autocorrelation of the chronological RSS samples within the same RSS sequence and, on that basis, a correlated RSS correction approach. The approach can also be integrated into the conventional radio-map based K nearest neighbor (KNN) and weighted KNN (WKNN) localization algorithms. Experiments on real Wi-Fi RSS samples recorded in a representative indoor environment show that the proposed correlated RSS correction yields a significant accuracy improvement over conventional radio-map based localization.
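A small sketch of the measurable ingredients: the lag-k sample autocorrelation of an RSS sequence, a simple correlation-driven exponential smoothing as one plausible stand-in for the paper's correction, and WKNN over the corrected fingerprints. The smoothing factor and the synthetic noise model are assumptions.

```python
import numpy as np

def autocorr(x: np.ndarray, lag: int = 1) -> float:
    """Sample autocorrelation of an RSS sequence at the given lag."""
    x = x - x.mean()
    return float(np.dot(x[:-lag], x[lag:]) / np.dot(x, x))

def correct_rss(seq: np.ndarray, alpha: float = 0.3) -> np.ndarray:
    """Exponential smoothing that exploits temporal correlation -- one
    simple stand-in for the paper's correlated correction of raw RSS."""
    out = np.empty_like(seq, dtype=float)
    out[0] = seq[0]
    for i in range(1, len(seq)):
        out[i] = alpha * seq[i] + (1 - alpha) * out[i - 1]
    return out

def wknn(observed, fingerprints, coords, k=3):
    """Weighted KNN over the (corrected) radio-map fingerprints."""
    d = np.linalg.norm(fingerprints - observed, axis=1)
    idx = np.argsort(d)[:k]
    w = 1.0 / (d[idx] + 1e-6)
    return (coords[idx] * w[:, None]).sum(axis=0) / w.sum()

rng = np.random.default_rng(3)
raw = -50 + np.cumsum(rng.normal(0, 1, 100)) * 0.2   # correlated RSS trace (dBm)
print(round(autocorr(raw), 2), correct_rss(raw)[:3])
```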
{"title":"Correlated received signal strength correction for radio-map based indoor Wi-Fi localization","authors":"Mu Zhou, Qiao Zhang, Z. Tian, Feng Qiu, Qi Wu","doi":"10.1109/ICCCNT.2014.6963140","DOIUrl":"https://doi.org/10.1109/ICCCNT.2014.6963140","url":null,"abstract":"The purpose of received signal strength (RSS) correction in radio-map based Wi-Fi localization is to obtain a set of fine-grain location-dependent RSS fingerprints, and eventually achieve the purpose of highly accurate and reliable localization. To meet this goal, the RSS correction is conducted on the raw RSS samples to eliminate the environmental noise from the radio-map. This paper shows the comprehensive analysis on the autocorrelation property of the chronological RSS samples in the same RSS sequence, and meanwhile presents the correlated RSS correction approach. Furthermore, the correlated RSS correction approach can also be integrated into the conventional radio-map based K nearest neighbor (KNN) and weighted KNN (WKNN) localization algorithms. The experimental results conducted on the real Wi-Fi RSS samples recorded in a representative indoor environment prove that the proposed correlated RSS correction approach can result in the significant improvement of accuracy over the conventional radio-map based localization.","PeriodicalId":140744,"journal":{"name":"Fifth International Conference on Computing, Communications and Networking Technologies (ICCCNT)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-07-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125627689","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}