MAC Protocol Analysis for Wireless Sensor Networks
Huan Zhang, Feng Wang
J. Inf. Technol. Res., 2022. DOI: 10.4018/jitr.298617

In wireless sensor networks, the MAC protocol allocates wireless channel resources efficiently and fairly, controls how nodes access the medium, and coordinates communication among nodes. The energy consumption of sensor nodes is concentrated in the communication unit, covering data transmission and reception, idle listening, and protocol control overhead, so a well-designed MAC protocol reduces unnecessary energy consumption and extends node lifetime. As the lowest-layer protocol of a wireless sensor network, the MAC protocol also provides a stable, reliable communication foundation for the upper-layer protocols, and it has accordingly attracted extensive, in-depth research worldwide. This paper reviews the course and progress of MAC-layer research in wireless sensor networks, with particular attention to recent progress on multi-radio, multi-channel designs.
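The energy breakdown described in the abstract (transmission/reception, idle listening, control overhead) can be made concrete with a simple duty-cycle model. The power figures and timings below are illustrative assumptions for a low-power radio, not values from the paper:

```python
def node_energy_per_cycle(t_tx, t_rx, t_idle, t_sleep,
                          p_tx=0.060, p_rx=0.054, p_idle=0.050, p_sleep=0.00003):
    """Energy (joules) consumed by a node in one MAC duty cycle.

    Power draws (watts) are illustrative: idle listening costs nearly as
    much as active reception, which is why duty-cycled MAC protocols try
    to convert idle time into sleep time.
    """
    return t_tx * p_tx + t_rx * p_rx + t_idle * p_idle + t_sleep * p_sleep

# Converting 0.8 s of idle listening per 1 s cycle into sleep:
always_on = node_energy_per_cycle(t_tx=0.01, t_rx=0.05, t_idle=0.94, t_sleep=0.0)
duty_cycled = node_energy_per_cycle(t_tx=0.01, t_rx=0.05, t_idle=0.14, t_sleep=0.8)
print(f"always-on: {always_on:.4f} J, duty-cycled: {duty_cycled:.4f} J")
```

Under these assumed figures the duty-cycled node spends roughly a fifth of the energy per cycle, almost entirely because of the reduced idle listening.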
Finding Relevant Documents in a Search Engine Using N-Grams Model and Reinforcement Learning
Amine El Hadi, Youness Madani, R. Ayachi, M. Erritali
J. Inf. Technol. Res., 2022. DOI: 10.4018/jitr.299930

Information retrieval (IR) is an important area of computer science that helps users find the information they need within large volumes of data; the search engine is the best-known application of IR. This paper proposes a new recommendation approach for suggesting relevant documents to search engine users, based on a new method for computing the similarity between a user query and a list of documents. The method combines a new reinforcement learning algorithm with an n-gram model (a subsequence of n elements drawn from a given sequence) and a similarity measure. Experimental results show that the method outperforms several approaches from the literature, achieving high accuracy.
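As a rough, self-contained illustration of the n-gram component only (the reinforcement learning part is not sketched here), character n-grams with Jaccard overlap can rank documents against a query. The trigram size and the Jaccard measure are illustrative assumptions, not the paper's exact formulation:

```python
def ngrams(text, n=3):
    """Set of character n-grams (subsequences of n elements) of a string."""
    text = text.lower()
    return {text[i:i + n] for i in range(len(text) - n + 1)}

def jaccard_similarity(query, document, n=3):
    """Overlap of the two n-gram sets: |A & B| / |A | B|."""
    a, b = ngrams(query, n), ngrams(document, n)
    return len(a & b) / len(a | b) if a | b else 0.0

def rank_documents(query, documents, n=3):
    """Documents sorted by descending n-gram similarity to the query."""
    return sorted(documents, key=lambda d: jaccard_similarity(query, d, n),
                  reverse=True)

docs = ["information retrieval systems",
        "neural machine translation",
        "retrieval of documents"]
print(rank_documents("document retrieval", docs))
```

Shared trigrams such as "ret" and "doc" pull the lexically closest document to the top; a learned component, as in the paper, would refine such a base similarity from user feedback.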
A Multi-Budget-Based Approach to Enhance the Responsiveness of Aperiodic Task for a Bandwidth-Preserving Server in Real-Time Systems
Ajitesh Kumar, S. Gupta
J. Inf. Technol. Res., 2022. DOI: 10.4018/jitr.299917

As computing has advanced, real-time applications have attracted growing attention, and building a high-quality real-time system requires improving the responsiveness of its task set. This work aims to improve quality of service (QoS) by reducing the response time of aperiodic tasks and enlarging the acceptance domain, admitting more aperiodic tasks for execution while preserving the feasibility of the periodic tasks. Functional analysis with simulation shows that the proposed algorithm is highly effective, both in the number of task sets deemed schedulable and in admitting aperiodic tasks that existing approaches reject. The simulation results indicate that it reduces the overall average response time of aperiodic tasks by approximately 13% at the lowest periodic load (35%), 7% at 60% periodic load, and 4% at 80% periodic load; in all observed circumstances the proposed algorithm achieves a 7%-10% improvement over the existing one.
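The constraint the server must respect, admitting aperiodic work only while the periodic set stays feasible, can be illustrated with the classic EDF utilization bound U <= 1. This bound is a textbook stand-in; the paper's actual admission test is not given in the abstract:

```python
def utilization(tasks):
    """Total utilization of periodic tasks given as (wcet, period) pairs."""
    return sum(c / t for c, t in tasks)

def edf_feasible(tasks):
    """EDF schedules an independent periodic task set iff utilization <= 1."""
    return utilization(tasks) <= 1.0

def admit_aperiodic_server(tasks, server_budget, server_period):
    """Admit a bandwidth-preserving server (budget, period) only if the
    combined task set remains feasible under EDF."""
    return edf_feasible(tasks + [(server_budget, server_period)])

periodic = [(1, 4), (2, 8), (1, 10)]              # 60% periodic load
print(admit_aperiodic_server(periodic, 2, 10))    # adds 20% -> feasible
print(admit_aperiodic_server(periodic, 5, 10))    # adds 50% -> infeasible
```

A multi-budget scheme like the paper's would go further, splitting the server bandwidth across several budgets; the sketch only shows the feasibility gate that any such scheme must pass.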
COVID-19 Pandemic: Insights of Newspaper Trends
J. Kaur, A. Chhabra, M. Saini, N. Bačanin
J. Inf. Technol. Res., 2022. DOI: 10.4018/jitr.299390

This study analyzes the change in front-page coverage of health-awareness issues in two Indian e-papers (The Hindustan Times and The Times of India) between the pre- and peri-coronavirus periods. The collected news articles are examined with the Latent Dirichlet Allocation topic-modeling algorithm, and sentiment analysis tracks the change in the emotions the articles arouse. In the pre-coronavirus period the e-papers focused mostly on politics, crime, and the economy, whereas in the peri-coronavirus period about 40% of the topics concern raising awareness of the coronavirus disease, with priority given to active case counts, medical facilities, and COVID-19 testing. The sentiment analysis reveals that negative sentiment dominates the peri-coronavirus period, driven by fear of the outbreak.
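The sentiment-analysis step can be illustrated with a minimal lexicon-based polarity scorer. The word lists below are tiny illustrative assumptions, and the abstract does not name the sentiment tool the study actually used:

```python
# Illustrative mini-lexicons; a real study would use a full sentiment lexicon.
POSITIVE = {"recovery", "hope", "success", "improve", "safe"}
NEGATIVE = {"fear", "outbreak", "death", "crisis", "lockdown"}

def polarity(headline):
    """(#positive - #negative) / #words, in [-1, 1].

    Negative values correspond to the negative sentiment the study found
    to dominate peri-coronavirus coverage.
    """
    words = headline.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return score / len(words) if words else 0.0

print(polarity("fear of outbreak grows amid lockdown"))   # negative score
print(polarity("recovery brings hope as cases improve"))  # positive score
```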
Research on the Multi-Objective Optimization for Return Rate and Risk of Financial Resource Allocation
S. Wan
J. Inf. Technol. Res., 2022. DOI: 10.4018/jitr.299950

Aiming at problems in the optimal allocation of financial resources, this paper establishes an optimization model and computes the optimal allocation coefficients. Drawing on Markowitz's portfolio theory, two indicators, investment risk and return rate, are analyzed quantitatively. First, by analyzing the allocation efficiency and risk of financial resources, an allocation-efficiency model is established and the problem is decomposed into a finite 0-1 programming problem, which is solved with the Hungarian method. Second, taking minimum allocation risk and maximum expected return as objectives, the multi-objective model is solved with a progressive optimization algorithm. The model captures the two characteristics of rational investment behavior, non-satiation and risk aversion. The analysis shows that the model is broadly applicable and can be expected to improve the allocation efficiency of financial resources while reducing allocation risk.
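The 0-1 assignment subproblem mentioned above can be sketched on a toy instance. Exhaustive search over permutations stands in here for the Hungarian method (it finds the same optimum but is only practical for small instances); the cost matrix is a made-up example:

```python
from itertools import permutations

def best_assignment(cost):
    """Exact solution of the 0-1 assignment problem by exhaustive search.

    cost[i][j] is the cost (e.g. allocation risk) of assigning resource i
    to use j.  Brute force is a stand-in for the Hungarian method, which
    solves the same problem in O(n^3).
    """
    n = len(cost)
    best = min(permutations(range(n)),
               key=lambda p: sum(cost[i][p[i]] for i in range(n)))
    return list(best), sum(cost[i][best[i]] for i in range(n))

risk = [[4, 2, 8],
        [4, 3, 7],
        [3, 1, 6]]
print(best_assignment(risk))
```

Each feasible 0-1 assignment matrix corresponds to one permutation, so minimizing over permutations is exactly the finite 0-1 program described in the abstract.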
Alignment Conservativity Under the Ontology Change
Yahia Atig, Ahmed Zahaf, D. Bouchiha, M. Malki
J. Inf. Technol. Res., 2022. DOI: 10.4018/jitr.299923

Many methods have recently appeared for evolving alignments when the underlying ontologies change. Their main challenge is keeping the alignment consistent after a change is applied: an alignment is consistent if and only if the ontologies remain consistent when used in conjunction with it. This work takes a step further by considering alignment evolution under the conservativity principle: an alignment is conservative if the ontological change does not introduce new semantic relationships between concepts of the same input ontology. We present methods for detecting and repairing conservativity violations under ontology change and evaluate them on a dataset adapted from the Ontology Alignment Evaluation Initiative. The experiment demonstrates the practical applicability of the proposed approach and exposes the limits of existing alignment-evolution methods with respect to conservativity under ontology change.
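The conservativity principle can be illustrated on toy ontologies modeled as subsumption digraphs: a violation occurs when the alignment, used together with the ontology, entails a subsumption between two concepts of the same ontology that the ontology alone did not entail. The concept names and mappings below are hypothetical, and real violation detection works over full description-logic entailment rather than graph reachability:

```python
def reachable(edges, start):
    """All concepts transitively subsuming `start` in a digraph of is-a edges."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        for a, b in edges:
            if a == node and b not in seen:
                seen.add(b)
                stack.append(b)
    return seen

def conservativity_violations(onto, alignment):
    """Pairs (a, b) of same-ontology concepts newly entailed via the alignment."""
    nodes = {x for edge in onto for x in edge}
    merged = onto | alignment
    return {(a, b) for a in nodes for b in nodes
            if a != b
            and b in reachable(merged, a)
            and b not in reachable(onto, a)}

# Toy ontology O1 and hypothetical mappings routed through external concept o2:Animal.
o1 = {("Cat", "Mammal"), ("Dog", "Mammal")}
align = {("Mammal", "o2:Animal"), ("o2:Animal", "Dog")}
print(conservativity_violations(o1, align))
```

Here the alignment makes every mammal a Dog via o2:Animal, so new intra-O1 subsumptions such as (Cat, Dog) appear; a repair step would discard or weaken the offending mappings.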
Adaptive Peak Environmental Density Clustering Algorithm in Cloud Computing Technology
Qiangshan Zhang
J. Inf. Technol. Res., 2022. DOI: 10.4018/jitr.298614

To obtain sparsity-aware clustering of unbalanced cloud data sets, this paper proposes an improved adaptive environment-density peak clustering algorithm for cloud computing, combining clustering with adaptive environment-density screening. A storage-structure model of the grid-sparse unbalanced cloud data set is constructed, and the set's structure is rebuilt using feature-space reconstruction. Rough features of the data set are extracted, and feature extraction and registration are carried out with a strict feature-registration method. Cloud fusion and peak-feature clustering are then performed according to the grid-block distribution of the data set; the peak features of the grid-sparse unbalanced cloud data set are extracted, and distributed detection of binary semantic features is carried out.
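The density-peak core that this family of algorithms builds on can be sketched in standard Rodriguez-Laio form, used here as a generic stand-in for the paper's adapted variant: each point gets a local density rho and a distance delta to the nearest denser point, and cluster centers are the points where both are large. The kernel width and the sample points are illustrative assumptions:

```python
import math

def density_peaks(points, dc=1.0):
    """Core quantities of density-peak clustering (Rodriguez-Laio style).

    rho[i]   Gaussian-kernel local density of point i.
    delta[i] distance to the nearest point of higher density (or the
             maximum distance, for the globally densest point).
    """
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])
    n = len(points)
    rho = [sum(math.exp(-(dist(points[i], points[j]) / dc) ** 2)
               for j in range(n) if j != i) for i in range(n)]
    delta = []
    for i in range(n):
        higher = [dist(points[i], points[j]) for j in range(n) if rho[j] > rho[i]]
        delta.append(min(higher) if higher else
                     max(dist(points[i], points[j]) for j in range(n)))
    return rho, delta

# Two small blobs; the densest point of each blob stands out in delta.
pts = [(0, 0), (0.4, 0), (0, 0.5), (5, 5), (5.4, 5), (5, 5.6)]
rho, delta = density_peaks(pts)
centers = sorted(range(len(pts)), key=lambda i: delta[i], reverse=True)[:2]
print(sorted(centers))   # -> [0, 3]
```

An adaptive variant like the paper's would additionally tune dc (the environment density scale) per grid block instead of fixing it globally.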
Internet of Things and Its Significance on Smart Homes/Cities
Sam Goundar, Akashdeep Bhardwaj, Deepika Bandhana, Melvin Avineshwar Prasad, Krishaal Kavish Chand
J. Inf. Technol. Res., 2022. DOI: 10.4018/jitr.299936

Smart homes and cities are a crucial topic for individuals of any age, requiring almost no computer literacy to enjoy the leisure and luxury they offer. Their benefits extend well beyond leisure into many other areas of daily life: information and communication; intelligent responses built on collected and analyzed data; and environmental protection and public safety through surveillance. Since the term "Internet of Things" was coined in 1999, the underlying technologies have grown enormously, driven by systematic advances in sensors, wireless communication, artificial intelligence, and devices. This paper outlines working prototypes developed and deployed in developed countries and recommends that Pacific island nations adopt these technologies for their own benefit. It also compares energy usage and cost savings in smart cities and discusses how these could benefit the nations of the Pacific.
Use of Artificial Neural Network for Forecasting Health Insurance Entitlements
Sam Goundar, Akashdeep Bhardwaj, S. Prakash, Pranil Sadal
J. Inf. Technol. Res., 2022. DOI: 10.4018/jitr.299372

Actuaries use a number of numerical practices to predict an insurance company's annual medical claims expense, an amount that must be included in the yearly financial budgets; inaccurate estimates generally hurt the overall performance of the business. This paper presents the development of an artificial neural network model for predicting anticipated annual medical claims. Once the networks were implemented, the focus was on decreasing the mean absolute percentage error by tuning parameters such as the number of epochs, the learning rate, and the neurons in each layer. Both feed-forward and recurrent neural networks were implemented to forecast the yearly claims amount. The artificial neural network proved an effective tool for forecasting anticipated annual medical claims, with the recurrent network outperforming the feed-forward network in both accuracy and the computational effort required.
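The error metric the authors minimize, mean absolute percentage error, is straightforward to compute. A minimal sketch with made-up claim figures (the networks themselves are omitted):

```python
def mape(actual, predicted):
    """Mean Absolute Percentage Error, in percent.

    The tuning loop described above adjusts epochs, learning rate, and
    layer sizes to drive this value down on held-out claim totals.
    """
    if len(actual) != len(predicted):
        raise ValueError("series must be the same length")
    return 100.0 / len(actual) * sum(
        abs((a - p) / a) for a, p in zip(actual, predicted))

claims_actual = [120_000, 135_000, 150_000, 160_000]     # illustrative yearly claims
claims_forecast = [118_000, 140_000, 147_000, 165_000]   # illustrative model output
print(f"MAPE: {mape(claims_actual, claims_forecast):.2f}%")
```

Because MAPE normalizes each error by the actual value, it stays comparable across years even as total claim volumes grow, which makes it a natural choice for budget forecasting.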
Deep Stacked Autoencoder-Based Automatic Emotion Recognition Using an Efficient Hybrid Local Texture Descriptor
Shanthi Pitchaiyan, N. Savarimuthu
J. Inf. Technol. Res., 2022. DOI: 10.4018/jitr.2022010103

Extracting an effective facial-feature representation is the critical task for an automatic expression recognition system. The Local Binary Pattern (LBP) is a popular texture feature for facial expression recognition, but only a few approaches exploit the relationships among the neighborhood pixels themselves. This paper presents a Hybrid Local Texture Descriptor (HLTD), derived from the logical fusion of Local Neighborhood XNOR Patterns (LNXP) and LBP, to investigate the potential of positional pixel relationships in automatic emotion recognition. LNXP encodes texture from the two nearest vertical and/or horizontal neighbors of the current pixel, whereas LBP encodes each neighbor's relationship to the center pixel. After logical feature fusion, a Deep Stacked Autoencoder (DSA) is trained on the CK+, MMI, and KDEF-dyn datasets; the results show that the proposed HLTD-based approach outperforms many state-of-the-art methods, with average recognition rates of 97.5% on CK+, 94.1% on MMI, and 88.5% on KDEF.
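The basic LBP encoding that the hybrid descriptor builds on can be sketched in a few lines: each pixel is replaced by an 8-bit code comparing it with its eight neighbors. The bit ordering below is one common convention, chosen for illustration (the LNXP fusion and the autoencoder are beyond an abstract-level sketch):

```python
def lbp_code(image, y, x):
    """8-bit Local Binary Pattern at (y, x): each of the eight neighbors
    contributes a 1 bit when it is >= the center pixel, read clockwise
    from the top-left neighbor."""
    center = image[y][x]
    neighbors = [image[y - 1][x - 1], image[y - 1][x], image[y - 1][x + 1],
                 image[y][x + 1], image[y + 1][x + 1], image[y + 1][x],
                 image[y + 1][x - 1], image[y][x - 1]]
    return sum(1 << i for i, n in enumerate(neighbors) if n >= center)

patch = [[6, 5, 2],
         [7, 6, 1],
         [9, 8, 7]]
print(lbp_code(patch, 1, 1))   # 0b11110001 = 241
```

Because the code depends only on sign comparisons, it is invariant to monotonic illumination changes, which is a large part of why LBP histograms work well as facial texture features.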