A new fine grained inertia weight Particle Swarm Optimization
Pub Date: 2011-12-01 | DOI: 10.1109/WICT.2011.6141283
Kusum Deep, Pinkey Chauhan, M. Pant
Particle Swarm Optimization (PSO), inspired by the behaviour of bird flocks and fish schools, has emerged as an efficient global optimizer for solving nonlinear and complex real-world problems. The performance of PSO depends to a great extent on its parameters. Among these, the inertia weight is a crucial one that affects performance significantly and therefore needs special attention to be chosen appropriately. This paper proposes an adaptive, exponentially decreasing inertia weight that depends on each particle's iteration-wise performance and so differs from particle to particle. The resulting variant is termed Fine Grained Inertia Weight PSO (FGIWPSO). The new inertia weight is designed to improve swarm diversity, avoiding stagnation while speeding convergence to the global optimum. The effectiveness of the proposed approach is demonstrated on a suite of ten benchmark functions. FGIWPSO is compared with two existing PSO variants that use nonlinear and exponential inertia weight strategies, respectively. Experimental results assert that the proposed modification improves PSO performance in terms of both solution quality and convergence rate.
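The abstract does not give the exact update rule, but the idea of a per-particle, exponentially decreasing inertia weight can be sketched as below. The decay constant, the performance measure (a particle's personal-best fitness relative to the swarm best) and the test function are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def sphere(x):
    return np.sum(x**2, axis=1)

def fgiw_pso(n=30, dim=10, iters=200, w_max=0.9, w_min=0.4, c1=2.0, c2=2.0):
    x = np.random.uniform(-5, 5, (n, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), sphere(x)
    for t in range(1, iters + 1):
        f = sphere(x)
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[np.argmin(pbest_f)]
        # Per-particle performance in [0, 1]: 0 for the swarm-best particle,
        # 1 for the worst (a hypothetical stand-in for the paper's measure).
        perf = (pbest_f - pbest_f.min()) / (np.ptp(pbest_f) + 1e-12)
        # Exponential decay: well-performing particles shed inertia faster
        # (exploitation); poor performers keep a larger w and explore longer.
        w = w_min + (w_max - w_min) * np.exp(-5.0 * (t / iters) * (1.0 - perf))
        r1, r2 = np.random.rand(n, dim), np.random.rand(n, dim)
        v = w[:, None] * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
    return pbest[np.argmin(pbest_f)], pbest_f.min()

print(fgiw_pso()[1])  # best fitness found on the sphere function
```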
Incorporating fault tolerance in GA-based scheduling in grid environment
Pub Date: 2011-12-01 | DOI: 10.1109/WICT.2011.6141344
Neeraj Upadhyay, M. Misra
Grid systems differ from traditional distributed systems in their large scale, heterogeneity and dynamism. These factors contribute to a higher frequency of faults: large scale lowers the Mean Time To Failure (MTTF); heterogeneity causes interaction faults (protocol mismatches) between dissimilar communicating nodes; and dynamism, with resource availability varying as resources autonomously enter and leave the grid, affects the execution of jobs. Another factor that increases the probability of application failure is that grid applications are long-running computations that take days to finish. Incorporating fault tolerance into scheduling algorithms is one approach to handling faults in a grid environment. Genetic Algorithms (GAs) are a popular class of meta-heuristics used for grid scheduling; they are stochastic search algorithms based on the natural process of fitness-based selection and reproduction. This paper combines GA-based scheduling with fault tolerance techniques such as (dynamic) checkpointing by modifying the fitness function. Scenarios such as checkpointing without migration for resources with different downtimes, and the autonomous nature of grid resource providers, are also considered in building the fitness functions. The motivation is that scheduling-assisted fault tolerance helps find a schedule that completes jobs in the minimum possible time even when resources are prone to failure, and thus helps meet job deadlines. Simulation results for the proposed techniques are presented with respect to the makespan, flowtime and fitness value of the resulting schedule. The results show that the adaptive checkpointing approaches improve makespan and flowtime over the static checkpointing approach, and that the approach which considers resources' last failure times performs better than the one based only on their mean failure times.
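As a rough illustration of how checkpointing can be folded into a GA fitness function, the sketch below scores a candidate schedule by a failure-aware makespan: each job's runtime is inflated by checkpoint overhead plus expected rework under an exponential failure model. All names, the interval/cost values and the time model itself are assumptions, not the paper's definitions.

```python
import math

def expected_completion(exec_time, mttf, ckpt_interval=10.0, ckpt_cost=0.5):
    """Failure-aware run time with periodic checkpointing (toy model):
    on a failure, at most one checkpoint interval is re-executed."""
    overhead = (exec_time // ckpt_interval) * ckpt_cost
    p_fail = 1.0 - math.exp(-ckpt_interval / mttf)   # P(failure per interval)
    rework = (exec_time / ckpt_interval) * p_fail * (ckpt_interval / 2.0)
    return exec_time + overhead + rework

def fitness(schedule, jobs, resources):
    """GA fitness = failure-aware makespan of the schedule (lower is better)."""
    finish = {r: 0.0 for r in resources}
    for job, res in schedule.items():
        finish[res] += expected_completion(jobs[job] / resources[res]["speed"],
                                           resources[res]["mttf"])
    return max(finish.values())

jobs = {"j1": 100.0, "j2": 60.0}
resources = {"r1": {"speed": 2.0, "mttf": 400.0},
             "r2": {"speed": 1.0, "mttf": 80.0}}
print(fitness({"j1": "r1", "j2": "r2"}, jobs, resources))
```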
A novel methodology for flip-flop optimization and characterization in NOC design space
Pub Date: 2011-12-01 | DOI: 10.1109/WICT.2011.6141253
S. Tiwari, Kunwar Singh, Maneesha Gupta
The paper proposes a new methodology for the optimization and characterization of flip-flops that can be utilized in designing EDA tools for NoC. The automated RTL-to-GDSII design flow requires libraries with a large number of cells, since each design can call for cells of many different driving strengths. The paper therefore proposes a methodology by which library size can be reduced along with design complexity. The proposed approach uses the Levenberg-Marquardt (LM) algorithm embedded in SPICE. The optimization and characterization process is entirely automated, which can dramatically reduce the time required for the digital integrated circuit design process. Moreover, a new flip-flop for low-noise environments is proposed and compared with benchmark flip-flops using the proposed methodology. To obtain the relative performance of the proposed designs within the specified design constraints, extensive SPICE simulations were performed using 180 nm technology with BSIM3v3 parameters and a 250 MHz clock frequency. Automated layouts were also generated, and post-layout simulations with RC extraction were executed using a Mentor Graphics tool.
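The paper embeds LM inside SPICE; as a stand-alone sketch of the same idea, the snippet below uses SciPy's Levenberg-Marquardt solver to fit transistor widths to a target delay/power point, with a toy analytic model standing in for the circuit simulator. The model, targets and variable names are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

def evaluate(widths):
    """Toy stand-in for a SPICE run: maps two transistor widths (um) to
    (clock-to-Q delay in ps, power in uW). A real flow invokes the simulator."""
    w1, w2 = widths
    return np.array([120.0 / w1 + 80.0 / w2,   # delay shrinks with sizing
                     0.8 * w1 + 0.5 * w2])     # power grows with sizing

TARGET = np.array([45.0, 12.0])                # desired delay/power corner

res = least_squares(lambda w: evaluate(w) - TARGET, x0=[2.0, 2.0], method="lm")
print("optimized widths:", res.x, "achieved:", evaluate(res.x))
```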
Duplication with task assignment in mesh distributed system
Pub Date: 2011-12-01 | DOI: 10.3745/JIPS.01.0001
Rashmi Sharma, Nitin
In a distributed system, tasks must be assigned to processors before they can be scheduled. Task assignment heuristics are based on computation and communication costs, and an optimal assignment must be worked out to minimize the communication cost between tasks. In this paper the Task Duplication (TD) concept is used for this purpose, applied to a mesh topology distributed system.
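A minimal sketch of the duplication decision on a mesh, assuming Manhattan-distance hop costs: a predecessor task is duplicated onto its successor's processor whenever re-executing it locally makes the successor's input available sooner than waiting for the result to cross the mesh. The cost model and names are illustrative, not the paper's.

```python
def mesh_comm_cost(src, dst, per_hop=2.0):
    """Communication cost on a 2-D mesh: Manhattan hop count x per-hop cost."""
    return (abs(src[0] - dst[0]) + abs(src[1] - dst[1])) * per_hop

def duplicate_predecessor(pred_exec, pred_finish, src, dst):
    """True if redoing the predecessor locally beats waiting for its data."""
    local_ready = pred_exec                          # re-run on the successor's node
    remote_ready = pred_finish + mesh_comm_cost(src, dst)
    return local_ready < remote_ready

# Predecessor re-runs in 4 units, finished remotely at t=3, and is 3 hops away:
print(duplicate_predecessor(4.0, 3.0, (0, 0), (2, 1)))  # True: 4 < 3 + 6
```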
Wormhole attacks: Performance evaluation of on demand routing protocols in Mobile Adhoc networks
Pub Date: 2011-12-01 | DOI: 10.1109/WICT.2011.6141411
G. Kaur, V. Jain, Y. Chaba
The paper explores and compares the effect of different types of wormhole attacks on the performance of on-demand routing protocols in Mobile Ad hoc Networks (MANETs). The evaluation studies and compares end-to-end delay and throughput under all-drop, all-pass and threshold types of wormhole attacks.
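The three attack variants can be pictured as forwarding policies at a wormhole endpoint, as in the sketch below; the threshold criterion (here a packet-size cutoff) is an assumed illustration, since the abstract does not define it.

```python
def wormhole_forwards(packet_size, mode, threshold=512):
    """Forwarding decision of a wormhole tunnel under the three attack modes."""
    if mode == "all_drop":
        return False                      # attract routes, then drop everything
    if mode == "all_pass":
        return True                       # relay all traffic through the tunnel
    if mode == "threshold":
        return packet_size <= threshold   # drop selectively past a cutoff
    raise ValueError(f"unknown mode: {mode}")

for m in ("all_drop", "all_pass", "threshold"):
    print(m, wormhole_forwards(600, m))
```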
MRI segmentation using Entropy maximization and Hybrid Particle Swarm Optimization with Wavelet Mutation
Pub Date: 2011-12-01 | DOI: 10.1109/WICT.2011.6141273
A. De, A. Bhattacharjee, C. K. Chanda, B. Maji
A Hybrid Particle Swarm Optimization algorithm that incorporates a wavelet-theory-based mutation operation is used for the segmentation of Magnetic Resonance Images. Entropy maximization via the hybrid PSO with wavelet mutation extracts the region of interest of the MR image. The approach applies multi-resolution wavelet theory to help the PSO algorithm explore the solution space more effectively for a better solution. Tests on various MRI images with lesions show that the lesions are successfully extracted.
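Two pieces of the approach can be sketched: an entropy-maximization objective for a single threshold, and a wavelet mutation in the style of Ling et al.'s HPSOWM, where a Morlet-like wavelet with a growing dilation parameter shrinks mutation steps as the search matures. The constants and the single-threshold simplification are assumptions.

```python
import numpy as np

def partition_entropy(hist, t):
    """Sum of Shannon entropies of the two histogram segments split at bin t
    (the quantity an entropy-maximizing threshold search would maximize)."""
    p = hist / hist.sum()
    total = 0.0
    for seg in (p[:t], p[t:]):
        w = seg.sum()
        if w > 0:
            q = seg[seg > 0] / w
            total -= np.sum(q * np.log(q))
    return total

def wavelet_mutation(x, lo, hi, t, t_max):
    """Morlet-style wavelet mutation: amplitude decays as the dilation a grows
    with the generation count, so late-stage perturbations are fine-grained."""
    a = np.exp(10.0 * t / t_max)                    # dilation schedule (assumed)
    phi = np.random.uniform(-2.5 * a, 2.5 * a)
    sigma = np.exp(-0.5 * (phi / a) ** 2) * np.cos(5.0 * phi / a) / np.sqrt(a)
    return x + sigma * ((hi - x) if sigma >= 0 else (x - lo))

hist, _ = np.histogram(np.random.default_rng(0).normal(128, 30, 10000),
                       bins=256, range=(0, 256))
best_t = max(range(1, 256), key=lambda t: partition_entropy(hist, t))
print("entropy-optimal threshold bin:", best_t)
print("mutated value:", wavelet_mutation(0.5, 0.0, 1.0, t=50, t_max=100))
```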
DCT block location based data hiding
Pub Date: 2011-12-01 | DOI: 10.1109/WICT.2011.6141365
Omprakash Meena, Sathisha Basavaraju, A. Sur
In the recent literature, the visual quality of a watermarked image is typically evaluated with respect to the human visual system. In this paper, the effect of perturbing different positions in the quantized 8×8 DCT coefficient block is analyzed in order to find image-independent block positions suitable for embedding, with respect to the total perceptual error between the cover and watermarked images. JPEG-domain data hiding using quantization index modulation (QIM) [7] is used for embedding. A set of experiments has been carried out to justify the applicability of the proposed analysis.
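The standard dithered-quantizer form of QIM referenced here can be sketched as follows; the step size DELTA is an assumed value (larger steps are more robust but more perceptible), and real use would apply this to the selected quantized DCT positions.

```python
import numpy as np

DELTA = 8.0  # quantization step: robustness vs. distortion trade-off (assumed)

def qim_embed(coeff, bit):
    """Embed one bit by snapping the coefficient to one of two interleaved
    quantizer lattices offset by DELTA/2 (standard dithered QIM)."""
    d = 0.0 if bit == 0 else DELTA / 2.0
    return DELTA * np.round((coeff - d) / DELTA) + d

def qim_extract(coeff):
    """Decode by picking the lattice whose point is nearer the coefficient."""
    return min((0, 1), key=lambda b: abs(coeff - qim_embed(coeff, b)))

c = qim_embed(13.7, 1)
print(c, qim_extract(c))     # 12.0 1
print(qim_extract(c + 1.5))  # survives mild distortion: still 1
```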
Predicting grid user trustworthiness using neural networks
Pub Date: 2011-12-01 | DOI: 10.1109/WICT.2011.6141336
Bhavna Gupta, Harmeet Kaur, Punam Bedi
To address the problem of job failures in grids, which may arise from interactions between unknown entities, a reputation-based multi-agent system is proposed in this paper. The system is based on a cooperative model of society in which agents share their experiences of resource providers through feedback ratings; the uncertainty in these ratings is handled with a Fuzzy Inference System (FIS). Resource providers likewise use neural networks to compute the trustworthiness of a user before granting access to their resources, safeguarding themselves from malicious attacks: each provider trains a neural network on its own data about previously serviced users and predicts the trustworthiness of the requesting user. Experiments confirm that the neural-network methods are feasible and effective for estimating user trustworthiness.
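A minimal sketch of the provider-side predictor: a small feed-forward network trained on logged interaction features and queried before granting access. The feature set, labels and network size are hypothetical; the paper's FIS-processed feedback ratings would supply the real inputs.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
# Hypothetical per-user features a provider might log:
# [fraction of jobs completed, mean feedback rating, policy-violation rate]
X = rng.random((200, 3))
y = ((X[:, 0] > 0.5) & (X[:, 2] < 0.3)).astype(int)   # toy trust labels

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
clf.fit(X, y)

candidate = [[0.9, 0.8, 0.1]]   # requesting user's logged behaviour
print("grant" if clf.predict(candidate)[0] == 1 else "deny")
```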
Lyapunov features based EEG signal classification by multi-class SVM
Pub Date: 2011-12-01 | DOI: 10.1109/WICT.2011.6141243
A. S. Muthanantha Murugavel, S. Ramakrishnan, K. Balasamy, T. Gopalakrishnan
Electroencephalograms (EEGs) are records of the brain's electrical activity and an indispensable tool for diagnosing neurological diseases such as epilepsy. The wavelet transform (WT) is effective for analyzing non-stationary signals such as EEGs; wavelet analysis is used here to decompose the EEG into delta, theta, alpha, beta and gamma sub-bands. Lyapunov exponents are used to quantify the signal's nonlinear chaotic dynamics, since distinct states of brain activity exhibit different chaotic dynamics as measured by such nonlinear invariants. Decision making is performed in two stages: feature extraction, by computing Lyapunov exponents and wavelet coefficients, and classification, using classifiers trained on the extracted features. A probabilistic neural network (PNN) and a radial basis function neural network were tested, and their classification rates were evaluated on a benchmark dataset. Our research demonstrates that Lyapunov exponents and wavelet coefficients represent EEG signals well: the multi-class SVM and the PNN trained on these features achieved high classification accuracies of 96% and 94%, respectively.
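The two-stage pipeline can be sketched as below: a five-level db4 wavelet decomposition supplies sub-band features, and a crude divergence-rate statistic stands in for a proper largest-Lyapunov-exponent estimator (e.g. Rosenstein's method), which is too long to inline. The synthetic data and SVM settings are assumptions.

```python
import numpy as np
import pywt
from sklearn.svm import SVC

def features(signal):
    """Sub-band energies from a 5-level db4 decomposition plus a crude
    chaos proxy (a real system would use a Lyapunov-exponent estimator)."""
    coeffs = pywt.wavedec(signal, 'db4', level=5)    # [A5, D5, D4, D3, D2, D1]
    energies = [float(np.sum(c ** 2)) for c in coeffs]
    d = np.abs(np.diff(signal))
    chaos_proxy = float(np.mean(np.log(d[d > 0])))
    return energies + [chaos_proxy]

# Toy data: 40 synthetic "recordings" in two classes of differing amplitude.
rng = np.random.default_rng(1)
X = np.array([features(rng.standard_normal(1024) * (1.0 + cls))
              for cls in (0, 1) for _ in range(20)])
y = np.repeat([0, 1], 20)

clf = SVC(kernel='rbf').fit(X, y)   # SVC handles multi-class one-vs-one natively
print(clf.score(X, y))
```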
A robust portfolio optimization in Indian Stock market
Pub Date: 2011-12-01 | DOI: 10.1109/WICT.2011.6141321
M. Rajan, Nimit Rana
A good investment strategy requires combining mathematical modeling with a deep understanding of the economics of the market. The basis of portfolio optimization is the mean-variance optimization put forward by Markowitz in 1952. The optimization procedure depends on its input parameters, the covariance matrix and the expected returns, which have to be estimated from historical data. Portfolio selection therefore depends on the reliability of these inputs, and inaccurate estimation of the covariance matrix and expected returns often leads to poor results. In this paper, we examine the performance of portfolio optimization in the Indian stock market using stable models for covariance estimation, and construct a portfolio of stocks that gives a meaningful return in practice.
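A minimal mean-variance sketch in the Markowitz setting the paper starts from: minimize portfolio variance subject to a target expected return and full investment, long-only. Here the plain sample covariance (np.cov) stands in where the paper substitutes a stable-model estimator; the synthetic returns and target value are placeholders.

```python
import numpy as np
from scipy.optimize import minimize

def min_variance_weights(returns, target):
    """Markowitz weights: min w' S w  s.t.  w'mu = target, sum(w) = 1, w >= 0."""
    mu = returns.mean(axis=0)
    cov = np.cov(returns, rowvar=False)   # the paper swaps in a stable covariance model here
    n = len(mu)
    cons = ({'type': 'eq', 'fun': lambda w: w @ mu - target},
            {'type': 'eq', 'fun': lambda w: np.sum(w) - 1.0})
    res = minimize(lambda w: w @ cov @ w, np.full(n, 1.0 / n),
                   method='SLSQP', bounds=[(0.0, 1.0)] * n, constraints=cons)
    return res.x

rng = np.random.default_rng(42)
rets = rng.normal(0.0005, 0.01, size=(250, 4))   # 250 days, 4 stocks (synthetic)
print(min_variance_weights(rets, target=0.0005).round(3))
```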