Pub Date: 2012-10-01. DOI: 10.1109/WICT.2012.6409135
B. Misra, A. K. Nayak, S. Rai
Wireless sensor nodes are mostly battery operated, tiny in size, and deployed in some specified area. The limited capacity of the battery hinders the performance of the node as well as of the network, so efficient management of battery power is required for optimal utilization of sensor network technology. In this work we propose a novel method to extend the lifetime of the sensor network by organizing the sensors into a maximal number of non-disjoint set covers with non-uniform sensing ranges. The sensors in one set cover remain active at any instant of time while the other sensors are in sleep mode, consuming negligible battery power. Each set cover is activated successively, thereby increasing the total lifetime of the network. Each sensor can also move towards the center of the set of points it covers, further shrinking its sensing range and hence its energy consumption. Extensive simulation shows that the network lifetime can be increased by letting sensors adjust their sensing ranges, as compared to sensors with fixed sensing ranges. The method employs a genetic algorithm (GA) and a greedy heuristic to determine optimal sensing ranges for efficient energy management in the sensor network. Simulation results show its efficiency over fixed-range strategies.
Title: Achieving energy efficiency by self-adjusting sensing range and positioning in wireless sensor networks. Published in: 2012 World Congress on Information and Communication Technologies.
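As a concrete illustration of the set-cover organization described in the abstract, here is a minimal greedy sketch (not the authors' GA-plus-greedy method; the positions, the two-activation battery budget, and the range-shrinking rule are illustrative assumptions):

```python
import math

def greedy_set_covers(sensors, targets, max_range, activations=2):
    """Greedily organize sensors into non-disjoint set covers.

    sensors, targets: lists of (x, y) positions.  Each sensor has battery
    for `activations` duty cycles, so successive covers may reuse sensors
    (non-disjoint).  Returns a list of covers; each cover maps a sensor
    index to the (shrunk) sensing range it needs for its targets.
    """
    battery = [activations] * len(sensors)
    covers = []
    while True:
        cover, uncovered = {}, set(range(len(targets)))
        for i, (sx, sy) in enumerate(sensors):
            if battery[i] == 0 or not uncovered:
                continue
            reach = [(math.hypot(sx - targets[t][0], sy - targets[t][1]), t)
                     for t in uncovered]
            close = [(d, t) for d, t in reach if d <= max_range]
            if close:
                # shrink the sensing range to the furthest target actually covered
                cover[i] = max(d for d, _ in close)
                uncovered -= {t for _, t in close}
        if uncovered or not cover:
            break  # no further complete cover can be formed
        for i in cover:
            battery[i] -= 1
        covers.append(cover)
    return covers
```

Activating the returned covers in succession, one at a time, is what multiplies the network lifetime relative to keeping every sensor awake.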
Pub Date: 2012-10-01. DOI: 10.1109/WICT.2012.6409114
P. Khuntia, B. Sahu, C. Mohanty
Digital channel equalizers are located at the front end of receivers to mitigate the effect of inter-symbol interference (ISI). In this paper, the equalization problem is viewed as an optimization problem. In the past, the least mean square (LMS) algorithm, recursive least squares (RLS), artificial neural networks (ANN), and genetic algorithms (GA) have been successfully employed for nonlinear channel equalization. The LMS, RLS, and ANN techniques are derivative based, and hence the parameters may fall into local minima during training. Though the GA is a derivative-free technique, it takes longer to converge. We propose a novel equalization technique based on Differential Evolution (DE). DE is an efficient and powerful population-based stochastic search technique for solving optimization problems over continuous spaces, and hence the channel equalization performance is expected to be superior.
Title: Development of adaptive channel equalization using DE.
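A minimal sketch of how DE can train equalizer tap weights (a generic DE/rand/1/bin loop, not the authors' exact setup; the FIR structure, population size, and control parameters F and CR are illustrative assumptions):

```python
import random

def de_equalizer(train_in, train_out, n_taps=4, pop=20, gens=100,
                 F=0.5, CR=0.9, seed=1):
    """DE/rand/1/bin search for FIR equalizer taps minimizing the MSE
    between the equalized signal and the desired training output."""
    rng = random.Random(seed)

    def mse(w):
        err = 0.0
        for n in range(len(train_out)):
            y = sum(w[k] * train_in[n - k] for k in range(n_taps) if n - k >= 0)
            err += (train_out[n] - y) ** 2
        return err / len(train_out)

    P = [[rng.uniform(-1, 1) for _ in range(n_taps)] for _ in range(pop)]
    fit = [mse(w) for w in P]
    for _ in range(gens):
        for i in range(pop):
            a, b, c = rng.sample([j for j in range(pop) if j != i], 3)
            jr = rng.randrange(n_taps)          # guarantee one mutated gene
            trial = [P[a][k] + F * (P[b][k] - P[c][k])
                     if (rng.random() < CR or k == jr) else P[i][k]
                     for k in range(n_taps)]
            f = mse(trial)
            if f <= fit[i]:                     # greedy one-to-one selection
                P[i], fit[i] = trial, f
    best = min(range(pop), key=fit.__getitem__)
    return P[best], fit[best]
```

Note that the loop needs only fitness evaluations, no gradients, which is exactly why it avoids the local-minima problem the abstract attributes to LMS, RLS, and ANN training.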
Pub Date: 2012-10-01. DOI: 10.1109/WICT.2012.6409245
Ying Wang, Xianfeng Liu, B. Lin, Fuwen Pang
This paper addresses the optimal multicast resource allocation problem in an orthogonal frequency division multiple access (OFDMA) system, where fine granularity scalability (FGS) video coding is exploited for source coding. With FGS, the multicast information is coded into base layer data and enhancement layer data, and a predefined proportional rate ratio between the transmission rates of the base layer and the enhancement layer ensures that an approximately consistent playback progress can be maintained at the receiver. The multicast resource allocation problem is formulated as a mixed binary integer programming problem with the objective of maximizing the total network throughput while maintaining the required proportional rate ratio under a total transmit power constraint at the base station. The problem is solved using the Lagrangian dual decomposition method, resulting in an optimal joint subcarrier and power allocation algorithm whose computational complexity is linear in the number of subcarriers. Numerical simulations are conducted to evaluate the performance of the proposed algorithm in terms of convergence and throughput. Simulation results show that the proposed optimal algorithm converges very fast and significantly outperforms the available suboptimal schemes.
Title: Joint power and subcarrier allocation for layered multicast streaming with proportional rate ratio constraint in OFDMA wireless networks.
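The linear-complexity claim rests on dual decomposition: for a fixed dual price on total power, each subcarrier's allocation follows independently in closed form. A single-rate water-filling sketch of that idea (the paper's algorithm additionally handles multicast groups and the base/enhancement rate ratio; the gains and power budget here are illustrative assumptions):

```python
import math

def allocate_subcarriers(gains, total_power, tol=1e-6):
    """Water-filling via dual decomposition: for a dual price lam, subcarrier
    k gets power max(0, 1/lam - 1/g_k) independently, so each iteration costs
    time linear in the number of subcarriers.  Bisection on lam enforces the
    total transmit power budget."""
    def powers(lam):
        return [max(0.0, 1.0 / lam - 1.0 / g) for g in gains]

    lo, hi = 1e-9, max(gains)               # bracket: huge power .. zero power
    while hi - lo > tol:
        lam = 0.5 * (lo + hi)
        if sum(powers(lam)) > total_power:
            lo = lam                        # over budget -> raise the price
        else:
            hi = lam
    p = powers(hi)                          # feasible side of the bracket
    rate = sum(math.log2(1.0 + g * pk) for g, pk in zip(gains, p))
    return p, rate
```

Because the inner allocation decouples per subcarrier, only the one-dimensional price search couples them, which is what keeps the overall complexity linear in the subcarrier count.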
Pub Date: 2012-10-01. DOI: 10.1109/WICT.2012.6409174
M. Rout, B. Majhi, U. M. Mohapatra, R. Mahapatra
The artificial bee colony (ABC) algorithm is a recent meta-heuristic approach with the advantages of memory, multiple search characteristics, local search, and a solution improvement mechanism. It can identify high quality near-optimal solutions and offers a balance between complexity and performance, thus improving forecasting effectiveness. This paper proposes an efficient prediction model for short- and long-range forecasting of the prices of two well-known stock indices, the S&P 500 and the DJIA, using a simple adaptive linear combiner (ALC) whose weights are trained with the ABC algorithm. The model is evaluated in terms of mean square error (MSE), and an extensive simulation study reveals that on the test input patterns it is more efficient and accurate than PSO- and GA-trained models.
Title: An artificial bee colony algorithm based efficient prediction model for stock market indices.
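A minimal sketch of training ALC weights with ABC (a textbook employed/onlooker/scout loop on an MSE objective, not the authors' exact configuration; the colony size, cycle count, and abandonment limit are illustrative assumptions):

```python
import random

def abc_train_alc(X, d, n_weights, colony=10, cycles=200, limit=20, seed=3):
    """Train an adaptive linear combiner y = w . x with the artificial bee
    colony algorithm, minimizing mean square error against desired output d."""
    rng = random.Random(seed)

    def mse(w):
        return sum((dn - sum(wk * xk for wk, xk in zip(w, xn))) ** 2
                   for xn, dn in zip(X, d)) / len(d)

    food = [[rng.uniform(-1, 1) for _ in range(n_weights)] for _ in range(colony)]
    cost = [mse(w) for w in food]
    trials = [0] * colony
    best_w, best_c = None, float("inf")

    def try_neighbor(i):
        # perturb one dimension of source i relative to a random peer k
        k = rng.choice([j for j in range(colony) if j != i])
        dim = rng.randrange(n_weights)
        cand = food[i][:]
        cand[dim] += rng.uniform(-1, 1) * (food[i][dim] - food[k][dim])
        c = mse(cand)
        if c < cost[i]:
            food[i], cost[i], trials[i] = cand, c, 0
        else:
            trials[i] += 1

    for _ in range(cycles):
        for i in range(colony):                 # employed bee phase
            try_neighbor(i)
        fits = [1.0 / (1.0 + c) for c in cost]
        s = sum(fits)
        for _ in range(colony):                 # onlooker phase (roulette wheel)
            r, acc, pick = rng.uniform(0, s), 0.0, colony - 1
            for j, f in enumerate(fits):
                acc += f
                if r <= acc:
                    pick = j
                    break
            try_neighbor(pick)
        for i in range(colony):                 # scout phase: abandon stale sources
            if trials[i] > limit:
                food[i] = [rng.uniform(-1, 1) for _ in range(n_weights)]
                cost[i], trials[i] = mse(food[i]), 0
        i = min(range(colony), key=cost.__getitem__)
        if cost[i] < best_c:                    # remember the best solution found
            best_w, best_c = food[i][:], cost[i]

    return best_w, best_c
```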
Pub Date: 2012-10-01. DOI: 10.1109/WICT.2012.6409070
S. Chawla, S. Srivastava
The Web application domain requires specialized focus in Requirements Engineering because of its unique features, such as its dynamic nature, presentation styles, a large and heterogeneous audience, and navigational access to information. We propose the WebURN notation, an extension to the User Requirements Notation, which is a goal- and scenario-based notation for early requirements analysis. An extended meta-model for the Goal Requirements Language and Use Case Maps has been created for the Web-specific context. The paper also describes construction algorithms for creating the requirements analysis models from textual information entered by the requirements engineer. The goal-based approach works closely with Web-specific functional and non-functional requirements and delivers models with fewer conflicts and better choices among alternatives, while handling crosscutting concerns corresponding to personalization and non-functional requirements.
Title: A Goal based methodology for Web specific Requirements Engineering.
Pub Date: 2012-10-01. DOI: 10.1109/WICT.2012.6409051
S. Angadi, M. Kodabagi, M. Jerabandi
Automated systems for understanding text in low resolution natural scene images of display boards facilitate several applications, such as blind assistants, traffic guidance systems, tour guide systems, location-aware systems, and many more. Text recognition at the character level is one of the important processing steps in the development of such systems. In this work, a novel method for recognition of Kannada basic characters using zone-wise statistical features is proposed. The method works in two phases: in the first phase, zone-wise statistical features are obtained from training samples and a knowledge base is constructed; during testing, the test image is processed to obtain zone-wise statistical features and the character is recognized using a nearest neighbor classifier. The method has been evaluated on 1043 samples and achieves an average recognition accuracy of 83.49%. The method is robust and insensitive to noise, blur, variations in font size and style, uneven thickness, and varying lighting conditions.
Title: Character recognition of Kannada text in low resolution display board images using zone wise statistical features.
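The two-phase zone-wise scheme can be sketched as follows (pixel-density features and a squared-Euclidean nearest neighbor are plausible but assumed choices; the abstract does not specify the exact statistical features):

```python
def zone_features(img, zones=2):
    """Split a binary character image (list of 0/1 rows) into zones x zones
    cells and return the pixel density of each cell as a feature vector."""
    h, w = len(img), len(img[0])
    feats = []
    for zr in range(zones):
        for zc in range(zones):
            r0, r1 = zr * h // zones, (zr + 1) * h // zones
            c0, c1 = zc * w // zones, (zc + 1) * w // zones
            on = sum(img[r][c] for r in range(r0, r1) for c in range(c0, c1))
            feats.append(on / ((r1 - r0) * (c1 - c0)))
    return feats

def nearest_neighbor(feats, knowledge_base):
    """knowledge_base: list of (label, feature_vector) pairs built in the
    training phase; returns the label of the closest stored vector."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(knowledge_base, key=lambda entry: dist(feats, entry[1]))[0]
```

Phase one fills `knowledge_base` from training samples; phase two runs `zone_features` on the test image and classifies with `nearest_neighbor`.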
Pub Date: 2012-10-01. DOI: 10.1109/WICT.2012.6409257
Jianying Zhang, Hongbo Liu, Hong Yu
When traditional database theories are applied to large-scale computing environments such as cloud computing, difficulties are encountered in many aspects, including ACID transactions, the closed-world assumption, data consistency, security, and scalability. In this paper, on the basis of an analysis of the limitations of the closed-world assumption for massive computing environments, and from the viewpoint of a complex giant system, we propose a high-level data model for relational databases that outlines a solution for large-scale computing environments, in which semantic clusters of data, called data multitrees, are taken as distribution units. We mainly investigate the motive for constructing such a data model, and briefly introduce the model to make the paper complete.
Title: The motive for constructing a high level data model on cloud databases.
Pub Date: 2012-10-01. DOI: 10.1109/WICT.2012.6409162
S. Purkayastha
Integration of Health Information Systems (HIS) has been a challenge because health-care practices use different semantics and different levels of health-care systems need different kinds of information. Through the case of two widely used open-source HIS (one patient-level and one aggregate country-level system), the paper analyses multiple approaches to integration. The paper develops a novel integration engine (HIXEn) that uses concepts of distributed hypermedia systems from the RESTful architecture and Resource Descriptors from the semantic web, allowing health information exchanges to flexibly connect different HIS.
Title: HIXEn: An integration engine for multi-vocabulary health information using REST & semantic metadata mapping.
Pub Date: 2012-10-01. DOI: 10.1109/WICT.2012.6409207
N. Singh, K. Tiwari, A. Nigam, P. Gupta
This paper presents an efficient multimodal biometric system based on 4-slap fingerprint images. The system utilizes a 4-slap fingerprint scanner to simultaneously capture the fingerprints of multiple fingers of a hand in one image. The acquired multi-finger images are first segmented to obtain the individual fingers. The quality of each individual finger is estimated and its minutiae points are extracted. The minutiae points of each individual finger extracted from the gallery 4-slap fingerprint image are compared with those of the corresponding finger of the query 4-slap fingerprint image to obtain a matching score for that finger. The matching score between two 4-slap fingerprint images is obtained by fusing the matching scores of the various fingers along with their respective image qualities and relative accuracies, and the matching decision is taken on the basis of the fused score. The system has been tested on two 4-slap fingerprint databases, viz. IITK-student and IITK-rural, containing 1007 and 991 subjects respectively, each acquired in two sessions. The correct recognition rate obtained is 91.00% for the IITK-rural database and 99.64% for the IITK-student database; the respective EER values are 5.64% and 0.94%.
Title: Fusion of 4-slap fingerprint images with their qualities for human recognition.
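One plausible reading of fusing per-finger scores "along with their respective image qualities and relative accuracies" is a normalized weighted sum; the exact fusion rule is not given in the abstract, so the form below is an assumption:

```python
def fuse_scores(finger_scores, qualities, accuracies):
    """Weighted-sum fusion: each finger's matching score is weighted by the
    product of its estimated image quality and the relative accuracy of that
    finger position, then normalized so the fused score stays in the same
    range as the per-finger scores."""
    weights = [q * a for q, a in zip(qualities, accuracies)]
    total = sum(weights)
    return sum(w * s for w, s in zip(weights, finger_scores)) / total
```

Under this rule a low-quality finger contributes little to the fused score, so a smudged capture cannot drag down an otherwise strong match.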
Pub Date: 2012-10-01. DOI: 10.1109/WICT.2012.6409234
Xu Li, Xiuyan Zhao, Fenglong Fan, Bai Liu
Unsupervised learning can address the general limitation of supervised learning that sense-tagged text is not available for most domains and is expensive to create. However, existing unsupervised learning probabilistic models are computationally expensive and converge slowly because of the large number of model parameters and their random initialization. This paper reduces the noise jamming and the dimensionality of the models by using the proposed feature selection and initial parameter estimation. Experimental results show that the accuracy and efficiency of the proposed probabilistic model are clearly improved.
Title: An improved unsupervised learning probabilistic model of word sense disambiguation.