We address the problem of optimizing the sparse matrix-vector product (SpMV) on homogeneous distributed systems. For this purpose, we propose three approaches based on partitioning the matrix into row blocks. These blocks are defined either by a fixed number of rows or by a set of contiguous (resp. non-contiguous) rows containing a fixed number of non-zero elements. These approaches lead to specific NP-hard scheduling problems, for which we design suitable heuristics. We analyse the theoretical performance of the proposed approaches and validate them through a series of experiments. This work is a step toward an overall objective: determining the best-balanced distribution for SpMV computation on a distributed system. To validate our approaches to sparse matrix distribution, we compare them to the hypergraph model as well as to the PETSc library for SpMV distribution on a homogeneous multicore cluster. Experiments show that our approaches perform 2 times better than the hypergraph model and 49 times better than PETSc.
{"title":"Optimization of Sparse Distributed Computations","authors":"O. Hamdi-Larbi","doi":"10.4018/ijghpc.301586","DOIUrl":"https://doi.org/10.4018/ijghpc.301586","url":null,"abstract":"We address the problem of the optimization of sparse matrix-vector product (SpMV) on homogeneous distributed systems. For this purpose, we propose three approaches based on partitioning the matrix into row blocks. These blocks are defined by a set of a fixed number of rows and a set of contiguous (resp. non-contiguous) rows containing a fixed number of non-zero elements. These approaches lead to solve some specific NP-hard scheduling problems. Thus, adequate heuristics are designed. We analyse the theoretical performance of the proposed approaches and validate them by a series of experiments. This work represents an important step in an overall objective which is to determine the best-balanced distribution for the SpMV computation on a distributed system. In order to validate our approaches for sparse matrix distribution, we compare them to hypergraph model as well as to PETSc library for SpMV distribution on a homogenous multicore cluster. Experimentations show that our approaches provide performances 2 times better than hypergraph and 49 times better than PETSc.","PeriodicalId":43565,"journal":{"name":"International Journal of Grid and High Performance Computing","volume":"5 1","pages":"1-18"},"PeriodicalIF":1.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78568602","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
To improve the quality of service operations, it is necessary to prevent service failures and performance fluctuations proactively, rather than triggering handlers only after service errors occur. Effective prediction and analysis of large-scale service performance is a feasible proactive prevention tool. However, traditional service performance prediction models mostly adopt full-batch training, which makes it difficult to meet the real-time requirements of large-scale service computation. Based on a trade-off between full-batch learning and stochastic gradient descent, a large-scale service performance prediction model is established based on online learning, and a service performance prediction method based on small-batch online learning is proposed. By setting the batch parameters properly, the proposed approach needs to train only a small batch of sample data in each iteration, improving the time efficiency of large-scale service performance prediction.
{"title":"An Online Service Performance Prediction Learning Method","authors":"Hua Liang, Sha Wang","doi":"10.4018/ijghpc.301577","DOIUrl":"https://doi.org/10.4018/ijghpc.301577","url":null,"abstract":"In order to improve the quality of service operations, it is necessary to take the initiative to prevent service failures and service performance fluctuations, instead of triggering handlers when service errors occur. Effective prediction and analysis of the large-scale services performance is an effective and feasible proactive prevention tool. However, the traditional service performance prediction model mostly adopts the full batch training mode, it is difficult to meet the real-time requirements of large-scale service calculation. Based on the comprehensive trade-off between the method of full batch learning and the stochastic gradient descent method, a large-scale service performance prediction model is established based on online learning, and a service performance prediction method is proposed based on small batch online learning. Through properly setting the batch parameters, the proposed approach only need to train the sample data with small batches in one iteration, the time efficiency is improved for large-scale service performance prediction.","PeriodicalId":43565,"journal":{"name":"International Journal of Grid and High Performance Computing","volume":"19 1","pages":"1-14"},"PeriodicalIF":1.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74434354","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Internet of Things (IoT) generates an immense volume of data every day. Smart IoT-based applications need immediate response and data processing, which is why fog computing was introduced. Fog computing comprises many small-scale data centres, which help process incoming IoT data immediately. The more data there is, the greater the demand for resources in the fog layer; the resulting overload must be handled directly. A framework is therefore needed that can reduce energy consumption and improve resource utilization during storage, processing, and networking. This article proposes a smart traffic management architecture that improves resource utilization and conserves the energy of intelligent vehicles. It also proposes a load balancing algorithm that avoids overloading resources in the proposed architecture while executing a large number of vehicle requests. Further, the paper discusses key challenges and issues of fog computing and concludes with future directions.
{"title":"A Novel Load Balancing Technique for Smart Application in a Fog Computing Environment","authors":"Mandeep Kaur Saroa, Rajni Aron","doi":"10.4018/ijghpc.301583","DOIUrl":"https://doi.org/10.4018/ijghpc.301583","url":null,"abstract":"Internet of Things (IoT) induces an immense volume of data every day. Smart IoT based applications need immediate response and processing of data, for which fog computing was introduced. Fog computing contains many small-scale data centres, which helps to process the incoming data from IoT immediately. More the data more is the requirement of resources in the fog layer. Hence there may be overloading of data, which needs to be handled directly. There is a need to provide a framework that can reduce energy consumption and enhance resource utilization during storage, processing and network functioning. This article proposed smart traffic management architecture, which improves resource utilization and conserves intelligent vehicles' energy. The article also proposes a load balancing algorithm for avoiding the overloading of resources in the proposed architecture while executing a large number of vehicle requests. Further, this paper provides some key challenges and issues of fog computing. The article concludes by providing future directions.","PeriodicalId":43565,"journal":{"name":"International Journal of Grid and High Performance Computing","volume":"196 1","pages":"1-19"},"PeriodicalIF":1.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81074006","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cloud computing is an IT paradigm that places the Internet at the heart of business activity, allowing organizations to use remote hardware resources. In recent years, load balancing has been an active area of research and plays a very important role in cloud environments, where there is still wide room for improvement. In this paper, we propose a new load balancing algorithm based on the weights of the nearest servers in the cloud platform. We use fuzzy logic to represent the weight of each node. Moreover, we process separate requests in parallel and use a token to dispatch tasks efficiently. Several scenarios are considered for experimentation, comparing the results of the existing Round Robin, Throttled Load Balancing, and Equal Spread Load algorithms with the proposed algorithm. The experimental results show that this approach effectively improves load balancing in terms of overall response time, data center processing time, total virtual machine cost, and total data transfer cost.
{"title":"A Parallel Fuzzy Load Balancing Algorithm for Distributed Nodes Over a Cloud System","authors":"Mostefa Hamdani, Youcef Aklouf, Hadj Ahmed Bouarara","doi":"10.4018/ijghpc.301576","DOIUrl":"https://doi.org/10.4018/ijghpc.301576","url":null,"abstract":"Cloud Computing is an IT organization concept that places the Internet at the heart of business activity, allowing it to use hardware resources. In recent years, Load Balancing has been an active area of research, and has played a very important role in the case of the Cloud environment. There is a wide range of improvements in this arena. In this paper, we propose a new Load Balancing algorithm, based on the weights of the nearest servers in the cloud platform. We use the fuzzy logic to represent the weight of the different nodes. Moreover, we implement separate requests in parallel, and use a token to dispatch tasks efficiently. The several scenarios in this paper are considered for experimentation and compare the result of existing Round Robin, Throttled Load Balancing, Equal Spread Load and proposed algorithm .The experiment results show that this approach improves the Load Balancing process effectively in terms of overall response time, data center processing time, total virtual machine cost, and total data transfer cost.","PeriodicalId":43565,"journal":{"name":"International Journal of Grid and High Performance Computing","volume":"26 1","pages":"1-22"},"PeriodicalIF":1.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82279262","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Scheduling the real-time reasoning tasks generated by autonomous vehicles within their tolerance times, across different periods, is an important problem in autonomous driving. Traditionally, tasks are executed on the on-board unit (OBU), which leads to long completion times. Heuristic algorithms are widely used in task scheduling but often suffer from premature convergence. Scheduling tasks in the edge environment can effectively reduce their completion time. This paper designs a workflow scheduling strategy for the edge environment and, to optimize the completion time of reasoning tasks, proposes a Q-learning algorithm based on simulated annealing (SA-QL). Moreover, the paper comprehensively evaluates the performance of the SA-RL and PSO algorithms from four aspects. Experimental results show that both perform well in feasibility and effectiveness: TD(0) variants show better exploration, while TD(λ) variants show better convergence.
{"title":"A Workflow Scheduling Strategy for Reasoning Tasks of Autonomous Driving","authors":"Jianbin Liao, Rong-jia Xu, Kai Lin, Bing Lin, Xinwei Chen, Hongliang Yu","doi":"10.4018/ijghpc.304907","DOIUrl":"https://doi.org/10.4018/ijghpc.304907","url":null,"abstract":"In different periods of time, the real-time reasoning tasks generated by autonomous vehicles are scheduled within the tolerance time, which is an important problem to be solved in autonomous driving. Traditionally, tasks are arranged on the on-board unit (OBU), which results in a long time to complete. Heuristic algorithm is widely used in task scheduling, which often leads to premature convergence. Task scheduling in the edge environment can effectively reduce the completion time of tasks. A workflow scheduling strategy in edge environment is designed. To optimize the completion time of reasoning tasks, this paper proposes a Q-learning algorithm based on simulated annealing (SA-QL). Moreover, this paper comprehensively reflects the performance of SA-RL and PSO algorithm from four aspects. Experimental results show that SA-RL algorithm and PSO algorithm have good performance in feasibility and effectiveness. TD(0) algorithms show better performance of exploration, TD(λ) algorithms show that of convergence.","PeriodicalId":43565,"journal":{"name":"International Journal of Grid and High Performance Computing","volume":"1 1","pages":"1-21"},"PeriodicalIF":1.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78715197","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The outlook for cloud computing grows day by day. It is a developing field that is evolving and offering new ways to build, manage, and process data. The most difficult task in cloud computing is to provide the best quality parameters: meeting deadlines, minimizing makespan, increasing resource utilization, and so on. A service provider therefore needs a dynamic scheduling algorithm that executes tasks within a given time span while reducing makespan. The proposed algorithm combines the merits of max-min, round robin, and min-min while trying to remove their demerits. It has been simulated in CloudSim with a varying number of tasks, and the analysis considers makespan, average resource utilization, and load balancing. The results show that the proposed technique outperforms heuristics such as min-min, round robin, and max-min.
{"title":"Time Restraint Load Balancing in the Cloud Environment","authors":"Nikita Malhotra, S. Tyagi, Monika Singh","doi":"10.4018/ijghpc.301592","DOIUrl":"https://doi.org/10.4018/ijghpc.301592","url":null,"abstract":"The outlook for cloud computing is growing day by day. It is a developing field which is evolving and giving new ways to build, manage and process data. The most difficult task in cloud computing is to provide best quality parameters like maintaining deadline, minimizing make-span time, increasing utilization of resources etc. Therefore, dynamic scheduling of algorithm is needed by a service provider that executes the task within given time span while reducing make-span time.The proposed algorithm utilizes merits of max min, round robin, and min-min and tries to remove the demerits of them. The proposed algorithm has been simulated in cloud Sim by varying number of tasks and analysis has been made based on make-span, average utilization of resources and balancing of load. The results show that the proposed technique has better results as compared to heuristic techniques such as min-min, round robin and max-min.","PeriodicalId":43565,"journal":{"name":"International Journal of Grid and High Performance Computing","volume":"22 1","pages":"1-11"},"PeriodicalIF":1.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88274849","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The main concern of fog computing is reducing data transmission to the cloud. Moreover, due to the short distance between end users and fog nodes, fog computing is considered more reliable for time-sensitive situations such as handling the critical data produced by the Internet of Things (IoT). This may include sensory healthcare data, which needs rapid processing to support decisions. In healthcare monitoring systems, however, it is necessary to ensure service availability when a fog node failure occurs. The issue of monitoring service interruption during fog node failure has not received much attention. This paper proposes a multi-route plan that identifies an alternative route to ensure the availability of time-critical medical services. Various scenarios have been designed to evaluate the performance of the proposed strategy. The experimental results illustrate the superiority of our approach in terms of latency, energy consumption, and network usage compared with the most recent related work.
{"title":"Multi-Route Plan for Reliable Services in Fog-Based Healthcare Monitoring Systems","authors":"Nour El Imane Zeghib, A. Alwan, A. Abualkishik, Yonis Gulzar","doi":"10.4018/ijghpc.304908","DOIUrl":"https://doi.org/10.4018/ijghpc.304908","url":null,"abstract":"The main concern of fog computing is reducing data transmission on the cloud. Moreover, due to the short distance between end-user and fog nodes, fog computing considered more reliable to handle time-sensitive situations like the critical data provided by the Internet of Things (IoT). This may include sensory healthcare data which needs rapid processing to make decisions. However, in healthcare monitoring systems it is necessary to ensure the services’ availability when fog node failure occurred. The issue of monitoring service interruption during fog node failure has not received much attention. This paper proposes a multi-route plan that aims to identify an alternative route to ensure the availability of time-critical medical services. Various scenarios have been designed to evaluate the performance of the proposed strategy. The experimental results illustrate the superiority of our approach in terms of latency, energy consumption, and network usage in comparison with most recent related work.","PeriodicalId":43565,"journal":{"name":"International Journal of Grid and High Performance Computing","volume":"37 1","pages":"1-20"},"PeriodicalIF":1.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84726835","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Adopting a dynamic load dissemination system for distributed computing has become a hot topic of current research. When VMs are overloaded or a node fails, it is difficult to determine which VM should be selected for load exchange and/or how many VMs should migrate to correct the load imbalance. This work introduces a Hierarchical Adaptive Push-Pull system for disseminating dynamic workload and live-migrating VMs among cloud resources. Under the adaptive push-pull scheme, cloud resources pull the workload, either directly or through VM managers, based on load dynamics. In contrast, the cloud resource managers maintain status information about the cloud resources and push the workload only to those VMs capable of receiving additional load. Together, these two practices balance resources through efficient load management, and the simulation results show reduced load deviation and scalable resource utilization.
{"title":"An Adaptive Push-Pull for Disseminating Dynamic Workload and Virtual Machine Live Migration in Cloud Computing","authors":"K. Naik","doi":"10.4018/ijghpc.301591","DOIUrl":"https://doi.org/10.4018/ijghpc.301591","url":null,"abstract":"Adapting a dynamic load dissemination system for distributed computing pasture has become a hot spot problem of current research. In the instance of overloaded VM’s or node failure, the associated resources face difficult to determine which VM should be selected for load exchanging and/or how many VM’s should migrate to manage load imbalance. This work, introduces a Hierarchical Adaptive Push-Pull system for disseminating dynamic workload and live migration of VM’s among resources in the Cloud. Adhering to the Adaptive Push-Pull, Cloud Resources frequently pull’s the workload or through VM Managers based on load dynamics. In contrast, status information pertaining to Cloud Resources maintained by the Cloud Resource Managers that possess push capability to push the workload only to those VM’s which are capable enough to receive additional load. These two practices contain balancing possessions through efficient load management complications and simulation result addresses reduced load deviation and scalable resources utilization.","PeriodicalId":43565,"journal":{"name":"International Journal of Grid and High Performance Computing","volume":"34 1","pages":"1-25"},"PeriodicalIF":1.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75236430","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Currently, redundant data slows down intrusion detection, and many intrusion detection systems (IDS) have low detection rates and high false alert rates. Focusing on these weaknesses, a new intrusion detection model based on rough set and random forest (RSRFID) is designed. In this model, rough set (RS) theory is used to reduce the dimension of redundant attributes, the decision tree (DT) algorithm is improved, and a random forest (RF) algorithm based on attribute significance is proposed. Finally, simulation experiments are run on the NSL-KDD and UNSW-NB15 datasets. The results show that attributes of both datasets are reduced using RS; on NSL-KDD the detection rate is 93.73% with a false alert rate of 1.02%, and on UNSW-NB15 the detection rate is 98.92% with a false alert rate of 2.92%.
{"title":"Intrusion Detection Model Based on Rough Set and Random Forest","authors":"Ling Zhang, Jian-Wei Zhang, Nai Mei Fan, Hao Hao Zhao","doi":"10.4018/ijghpc.301581","DOIUrl":"https://doi.org/10.4018/ijghpc.301581","url":null,"abstract":"Currently, redundant data affects the speed of intrusion detection, many intrusion detection systems (IDS) have low detection rates and high false alert rate. Focusing on these weakness, a new intrusion detection model based on rough set and random forest (RSRFID) is designed. In the intrusion detection model, rough set (RS) is used to reduce the dimension of redundant attributes; the algorithm of decision tree(DT) is improved; a random forest (RF) algorithm based on attribute significances is proposed. Finally, the simulation experiment is given on NSL-KDD and UNSW-NB15 dataset. The results show: attributes of different types of datasets are reduced using RS; the detection rate of NSL-KDD is 93.73%, the false alert rate is 1.02%; the detection rate of NSL-KDD is 98.92%, the false alert rate is 2.92%.","PeriodicalId":43565,"journal":{"name":"International Journal of Grid and High Performance Computing","volume":"40 1","pages":"1-13"},"PeriodicalIF":1.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75471386","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In large-scale network scenarios, network security data are characterized by complex associations and redundancy, forming network security big data and making network attack and defense more complicated. In this paper, the authors propose NARC, a framework for network attack risk control in large-scale network topologies. Using NARC, a user can determine how much each node influences the diffusion of attack risk in a complex topology and thus make optimal risk control decisions. Specifically, the paper designs a topology-oriented node importance assessment model, combined with node vulnerability correlation analysis, to construct a diffusion network of attack risks for identifying potential attack paths. Furthermore, an optimal risk control node selection method based on game theory is proposed to obtain the optimal set of defense nodes. The experimental results demonstrate the feasibility of NARC, which helps mitigate the risk of network attacks.
{"title":"A Network Attack Risk Control Framework for Large-Scale Network Topology Driven by Node Importance Assessment","authors":"Yanhua Liu, Zhihuang Liu, Wentao Deng, Yanbin Qiu, Ximeng Liu, Wenzhong Guo","doi":"10.4018/ijghpc.301590","DOIUrl":"https://doi.org/10.4018/ijghpc.301590","url":null,"abstract":"In large-scale network scenarios, network security data are characterized by complex association and redundancy, forming network security big data, which makes network security attack and defense more complicated. In this paper, the authors propose a framework for network attack risk control in large-scale network topology, called NARC. Using NARC, a user can determine the influence level of different nodes on the diffusion of attack risk in complex network topology, thus giving optimal risk control decisions. Specifically, this paper designs a topology-oriented node importance assessment model, combined with node vulnerability correlation analysis, to construct a diffusion network of attack risks for identifying potential attack paths. Furthermore, the optimal risk control node selection method based on game theory is proposed to obtain the optimal set of defense nodes. The experimental results demonstrate the feasibility of the proposed NARC, which helps to ease the risk of network attacks","PeriodicalId":43565,"journal":{"name":"International Journal of Grid and High Performance Computing","volume":"74 1","pages":"1-22"},"PeriodicalIF":1.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84407494","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}