Cloud computing delivers on-demand virtualized resources to consumers, servicing their requests on a metered basis. During periods of high demand for cloud resources, the load on the system increases and may unbalance it, adversely affecting quality of service (QoS) parameters and leading to violations of the service level agreement (SLA). Load balancing plays a significant role in such an environment, as it improves the distribution of the workload across multiple devices, for example network links, a cluster of servers, or disk drives. The present work introduces a multi-scheduler for balancing the load across the system, aiming to optimize QoS parameters such as response time, resource utilization, and average waiting time by exploiting the virtual resources of the cloud environment. The performance of the proposed approach was analyzed and tested in CloudSim to verify that it optimizes these parameters. The authors found that the QoS-enabled JMLQ approach achieved better results than their previous JMLQ approach and other variants.
{"title":"A QoS-Enabled Load Balancing Approach for Cloud Computing Environment Join Minimum Loaded Queue (JMLQ)","authors":"Minakshi Sharma, Rajneesh Kumar, Anurag Jain","doi":"10.4018/ijghpc.301587","DOIUrl":"https://doi.org/10.4018/ijghpc.301587","url":null,"abstract":"Cloud computing delivers the on-demand virtualized resources to its consumer for servicing their request on a metered basis. During the high demand of cloud resources the load on system increases that may unbalance the system which affects the quality of service parameters (QoS) adversely that leads to violations of service level agreement (SLA). Role of load balancing is significant in such an environment as it enhances the distribution of workload across multiple devices for example across network links, a cluster of servers, disk drives, etc. The present research work introduced a multi scheduler for balancing the load across the system that aims to optimize the QoS parameters such as response time, resource utilization, and the average waiting time by exploiting these virtual resources in the cloud environment. The performance of the proposed approach analyzed and tested in CloudSim that to optimize these parameters for the current approach. The authors found that our QoS enabled JMLQ approach achieved better results in comparison to our previous JMLQ approach and other variants.","PeriodicalId":43565,"journal":{"name":"International Journal of Grid and High Performance Computing","volume":"22 1","pages":"1-19"},"PeriodicalIF":1.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90620124","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A shopping mall or supermarket is the first choice for buying all necessary products, which is why it attracts more customers than a retail shop. The problem is that it creates many hurdles for customers and owners in maintaining order and keeping billing counters running on time. The conventional shopping and billing system is not time-saving and lacks proper security arrangements to prevent theft and duplication. To overcome these issues, the authors developed a smart shopping approach using LabVIEW incorporated with the Internet of Things (IoT). Every product carries an RFID tag. The smart trolley consists of an RFID reader, a transmitter, and a ZigBee unit; the cart is also equipped with a child-monitoring unit. The whole system is controlled by a controller developed in LabVIEW. The ZigBee module attached to the cart sends the bill to the main billing system, and the IoT feature, enabled through an ESP8266 Wi-Fi module, e-mails each customer's shopping bill to their e-mail address.
{"title":"Internet of Things-Based Automated Shopping Cart Incorporated With Virtual Instrumentation Using LabVIEW for Control Applications","authors":"Shakila Basheer, S. vivekanadan, Parthasarathy Panchatcharam, U. Gandhi","doi":"10.4018/ijghpc.301593","DOIUrl":"https://doi.org/10.4018/ijghpc.301593","url":null,"abstract":"Shopping mall or a super market is a first choice for buying all necessary products and that is why it attracts more number of customers than the retail shop. But the problem is it creates too many hurdles for customer and owner to maintain the order and keep sync with the time at billing counter. The normal billing and shopping system lacks the time saving approach also it doesn't have proper security arrangements to avoid theft and duplication. To overcome this issue, we developed a smart way for shopping using LabVIEW incorporated with internet of things (IOT). Each and every product contains RFID tag. The smart trolley will consist of a RFID reader, transmitter and ZigBee unit. Along with this, the cart is equipped with the child monitor unit. The whole system is controlled by utilizing the controller developed using LabVIEW. The ZigBee module attached in the cart is responsible for sending the bill to the main billing system and IOT feature is also enabled so that shopping-bill of each customer is mailed to their respective mail-id through esp8266 Wi-Fi module.","PeriodicalId":43565,"journal":{"name":"International Journal of Grid and High Performance Computing","volume":"85 1","pages":"1-16"},"PeriodicalIF":1.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80531465","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper proposes a workload and machine categorization-based resource allocation framework for balancing the load across active physical machines and utilizing their different resource capacities in a balanced manner. The workload, essentially independent and non-preemptive tasks, is allocated resources on physical machines whose resource availability complements the resource requirements of the tasks. Simulation-based experiments are performed using the CloudSim simulator to execute three different sets of tasks comprising 10,000, 20,000, and 30,000 tasks. The load imbalance across active physical machines and the utilization imbalance among their considered resource capacities (i.e., CPU and RAM) are measured in different scheduling cycles of a simulation run. Simulation results show that the proposed resource allocation method outperforms the compared methods in balancing the load across active physical machines and utilizing their different resource capacities in a balanced manner.
{"title":"A Workload and Machine Categorization-Based Resource Allocation Framework for Load Balancing and Balanced Resource Utilization in the Cloud","authors":"Avnish Thakur, Major Singh Goraya","doi":"10.4018/ijghpc.301594","DOIUrl":"https://doi.org/10.4018/ijghpc.301594","url":null,"abstract":"This paper proposes a workload and machine categorization based resource allocation framework for balancing the load across active physical machines as well as utilizing their different resource capacities in a balanced manner. The workload, essentially independent and non-preemptive tasks are allocated resources on the physical machines whose resource availability complements the resource requirement of tasks. Simulation based experiments are performed using CloudSim simulator to execute three different set of tasks comprising 10000, 20000, and 30000 number of tasks. The metric of load imbalance across active physical machines and the metric of utilization imbalance among their considered resource capacities (i.e., CPU and RAM) are measured in different scheduling cycles of a simulation run. Simulation results show that the proposed resource allocation method outperforms the compared methods in terms of balancing the load across active physical machines and utilizing their different resource capacities in a balanced manner.","PeriodicalId":43565,"journal":{"name":"International Journal of Grid and High Performance Computing","volume":"25 1","pages":"1-16"},"PeriodicalIF":1.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89911248","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In recent years, machine learning models have exhibited remarkable performance in the fourth industrial revolution. However, especially in the field of stock forecasting, most existing models demonstrate either relatively weak interpretability or unsatisfactory performance. This paper proposes an interpretable stock selection algorithm (ISSA) to achieve accurate predictions and high interpretability for stock selection. The strength of ISSA lies in its integration of the learning-to-rank algorithm LambdaMART with the SHapley Additive exPlanations (SHAP) interpretation method. Performance evaluation on the Shanghai Stock Exchange A-share market shows that ISSA outperforms regression and classification models in stock selection. The results also demonstrate that ISSA can effectively filter out the most impactful features, which can potentially inform investment strategies.
{"title":"A Novel Interpretable Stock Selection Algorithm for Quantitative Trading","authors":"Zhengrui Li, Weiwei Lin, James Z. Wang, Peng Peng, Jianpeng Lin, Victor I. Chang, Jianghu Pan","doi":"10.4018/ijghpc.301589","DOIUrl":"https://doi.org/10.4018/ijghpc.301589","url":null,"abstract":"In recent years, machine learning models have exhibited remarkable performance in the fourth industrial revolution. However, especially in the field of stock forecasting, most of the existing models demonstrate either relatively weak interpretability or unsatisfactory performance. This paper proposes an interpretable stock selection algorithm(ISSA) to achieve accurate prediction results and high interpretability for stock selection. The excellent performance of ISSA lies in its integration of the learning to rank algorithm LambdaMART with the SHapley Additive exPlanations (SHAP) interpretation method. Performance evaluation over the Shanghai Stock Exchange A-share market shows that ISSA outperforms regression and classification models in stock selection performance. Our results also demonstrate that our proposed ISSA solution can effectively filter out the most impactful features, potentially used for investment strategy.","PeriodicalId":43565,"journal":{"name":"International Journal of Grid and High Performance Computing","volume":"23 1","pages":"1-19"},"PeriodicalIF":1.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76160733","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
To address image distortion and poor adaptive recognition after medical image compression, a high-precision lossless compression algorithm for medical images based on the discrete wavelet transform is proposed. A 3D imaging model of multi-dimensional medical images is constructed, and adaptive information enhancement and image restoration are applied to the collected medical images. Based on the high-dimensional segmentation results, the discrete wavelet transform is used to achieve high-precision lossless compression of the medical images. The results show that the compression preserves the medical images without loss and with higher fidelity, which improves their detection and adaptive recognition.
{"title":"Lossless Compression Algorithm for Medical Images With High Precision Based on Discrete Wavelet Transform","authors":"Meishan Li, Jiamei Xue, Yuntao Wei","doi":"10.4018/ijghpc.301582","DOIUrl":"https://doi.org/10.4018/ijghpc.301582","url":null,"abstract":"In image distortion and low adaptive recognition after medical image compression, a high precision medical image lossless compression algorithm based on discrete wavelet transform is proposed. A 3D imaging model of multi-dimensional medical images is constructed, and adaptive information enhancement and image restoration processing are performed on the collected medical images. According to the results of high-dimensional segmentation and segmentation, discrete wavelet transform is used to achieve high-precision lossless compression of medical images. The results show that the medical image compression is better non-destructive and the image fidelity is higher, which improves the detection and adaptive recognition ability of medical images","PeriodicalId":43565,"journal":{"name":"International Journal of Grid and High Performance Computing","volume":"89 1","pages":"1-13"},"PeriodicalIF":1.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83542699","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cloud computing can speed up the training of deep learning models. In this process, training data and model parameters stored in the cloud are prone to theft. Model watermarking is a commonly used protection method, and using adversarial examples as model watermarks gives the watermarked images better concealment. Drawing on the signature mechanism in cryptography, a signature-based scheme is proposed to protect deep learning models by identifying these adversarial examples. In the adversarial example generation stage, the corresponding signature information and classification information are embedded in the noise space, so that the generated adversarial example carries implicit identity information that can be verified with the secret key. Experiments on the ImageNet dataset show that the adversarial examples generated by the authors' scheme are correctly recognized only by the classifier with the secret key.
{"title":"A Traitor Tracking Method Towards Deep Learning Models in Cloud Environments","authors":"Yu Zhang, Linfeng Wei, Hailiang Li, Hexin Cai, Ying Wu","doi":"10.4018/ijghpc.301588","DOIUrl":"https://doi.org/10.4018/ijghpc.301588","url":null,"abstract":"Cloud computing can speed up the training process of deep learning models. In this process, training data and model parameters stored in the cloud are prone to threats of being stolen. In model protection, model watermarking is a commonly used method. Using the adversarial example as model watermarking can make watermarked images have better concealment. Oriented from the signature mechanism in cryptography, a signature-based scheme is proposed to guarantee the performance of deep learning algorithms via identifying these adversarial examples. In the adversarial example generation stage, the corresponding signature information and classification information will be embedded in the noise space, so that the generated adversarial example will have implicit identity information, which can be verified by the secret key. The experiment using the ImageNet dataset shows that the adversarial examples generated by the authors’ scheme must be correctly recognized by the classifier with the secret key.","PeriodicalId":43565,"journal":{"name":"International Journal of Grid and High Performance Computing","volume":"30 1","pages":"1-17"},"PeriodicalIF":1.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75190032","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cloud computing is a rapidly developing technology that allows users to access data, software, and IT services. Cloud systems are characterized by uncertainty in resource availability. For that reason, their performance is greatly affected by the scheduling and allocation algorithm used to map submitted tasks to resources. This paper introduces a heuristic approach that combines Ant Colony Optimization with a priority-aware scheme to achieve task scheduling and resource allocation in cloud computing environments. The algorithm provides three prioritized quality-of-service levels that users can select per their demand. A level's priority dynamically affects the way tasks are distributed in the system, and resources are allocated using a modified version of Ant Colony Optimization. Results show that the proposed algorithm improves system performance by minimizing makespan, decreasing the degree of imbalance between virtual machines, and enhancing the cloud's quality of service by achieving user-priority goals.
{"title":"Tasks and Resources Allocation Approach with Priority Constraints in Cloud Computing","authors":"Nouf Ahmad Almojel, Alaa E. S. Ahmed","doi":"10.4018/ijghpc.301584","DOIUrl":"https://doi.org/10.4018/ijghpc.301584","url":null,"abstract":"Cloud computing is the most developing technology, which allow users to access data, software and IT services. Cloud systems are characterized by the uncertainty of the resources availability. For that reason, its performance is greatly affected by the applied scheduling and allocation algorithm used to map submitted tasks to resources. This paper introduces a heuristic approach that combine Ant Colony and priority-aware schema to achieve task scheduling and resource allocation in cloud computing environments. The algorithm provides three prioritized levels of quality of services to be employed by users per their demand. A level’s priorities dynamically affect the way tasks are distributed in the system. The resources are allocated using a modified version of Ant Colony Optimization. Results show that the proposed algorithm improves the performance of the system by minimizing makespan, decreasing the degree of imbalance between virtual machines, and enhancing the Cloud’s quality of service by achieving user-priority goals.","PeriodicalId":43565,"journal":{"name":"International Journal of Grid and High Performance Computing","volume":"228 1","pages":"1-17"},"PeriodicalIF":1.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77330872","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
For duplicate image detection, advanced large-scale image retrieval systems in recent years have mainly used the Bag-of-Features (BoF) model to meet real-time requirements. However, because the training process of the visual dictionary lacks semantic information, the BoF model cannot guarantee semantic similarity. Therefore, this paper proposes a duplicate image representation algorithm based on semi-supervised learning. The algorithm first generates semi-supervised hash functions and then maps the local descriptors of an image to binary codes. Finally, the image is represented by a frequency histogram of its binary codes. Since semantic information is effectively introduced through the construction of the marker matrix and the classification matrix during training, semi-supervised learning guarantees not only the metric similarity of the local descriptors but also their semantic similarity. Experimental results show that the algorithm achieves better retrieval performance than traditional algorithms.
{"title":"Duplicate Image Representation Based on Semi-Supervised Learning","authors":"Ming Chen, Jinghua Yan, Tieliang Gao, Yuhua Li, Huan Ma","doi":"10.4018/ijghpc.301578","DOIUrl":"https://doi.org/10.4018/ijghpc.301578","url":null,"abstract":"For duplicate image detection, the more advanced large-scale image retrieval systems in recent years have mainly used the Bag-of-Feature ( BoF ) model to meet the real-time. However, due to the lack of semantic information in the training process of the visual dictionary, BoF model cannot guarantee semantic similarity. Therefore, this paper proposes a duplicate image representation algorithm based on semi-supervised learning. This algorithm first generates semi-supervised hashes, and then maps the image local descriptors to binary codes based on semi-supervised learning. Finally, an image is represented by a frequency histogram of binary codes. Since the semantic information can be effectively introduced through the construction of the marker matrix and the classification matrix during the training process, semi-supervised learning can not only guarantee the metric similarity of the local descriptors, but also guarantee the semantic similarity. And the experimental results also show this algorithm has a better retrieval effect compared with traditional algorithms.","PeriodicalId":43565,"journal":{"name":"International Journal of Grid and High Performance Computing","volume":"66 1","pages":"1-13"},"PeriodicalIF":1.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85810382","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The internet is spreading fast, and the diversity of its components affects performance unpredictably. This drives continuous examination of the internet's hardware structure in order to improve user experience. Network congestion is one of the challenges that affect network performance; it mostly occurs when arriving packets exceed the available network resources. When this happens, incoming packets face unpredictable losses or delays, so congestion worsens network performance through increased packet loss. Therefore, a high-performance approach called CDGRED is proposed to overcome these constraints using adaptive techniques. An optimized implementation of CDGRED with suitable parameter tuning is presented, with results showing clearly enhanced output. The performance of CDGRED is empirically tested and compared with existing methods such as GRED, DGRED, and FLRED. Experimental results show that the proposed approach detects early congestion better than existing approaches.
{"title":"High Performance Changeable Dynamic Gentle Random Early Detection (CDGRED) for Congestion Control at Router Buffer","authors":"Amin Jarrah, Mohammad Omar Alshiab, M. Shurman","doi":"10.4018/ijghpc.301585","DOIUrl":"https://doi.org/10.4018/ijghpc.301585","url":null,"abstract":"The internet is spreading fast and the diversity of its components affects the performance unpredictably. This leads to the continuous examination of internet hardware structure for the purpose of user experience improvement. Network congestion is one of the challenges that affects network performance, which mostly occurs when the arriving packets exceed available network resources. When this occurs, incoming packets face unpredicted losses or delay. Thus, congestion has an impact on worsening the network performance due to an increase in packet loss. Therefore, a high performance approach called CDGRED was proposed to overcome these constraints using adaptive techniques. An optimized implementation with a suitable parameter tuning for CDGRED method was proposed with results showing clearly enhanced outputs. The CDGRED approach performance is empirically tested and compared with existing methods such as GRED, DGRED, and FLRED. Experimental results prove that the proposed approach has higher performance in early congestion detection over existing approaches.","PeriodicalId":43565,"journal":{"name":"International Journal of Grid and High Performance Computing","volume":"12 1","pages":"1-14"},"PeriodicalIF":1.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85999101","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Internet of Things (IoT) is an emerging research field, and its rise has produced an explosion of sensor computing platforms. A wide range of applications has been developed on these platforms using IoT devices, ranging from simple devices to complex machines, including implementations of artificial intelligence. Developers are working on more complex devices that deliver more performance, but at the same time they target low-cost, CPU-based systems, and this low cost can come at the price of low performance. To overcome these performance issues, one should properly differentiate an application's requirements so as to select the proper platform: a CPU-based system, or a custom platform with hardware accelerators such as GPUs and FPGAs. These custom platforms are costlier than CPU systems but deliver better performance. This paper shows how an FPGA can optimize the performance of Internet of Things applications.
{"title":"Optimizing the Performance of IoT Using FPGA as Compared to GPU","authors":"Rajit Nair, Preeti Sharma, Tripti Sharma","doi":"10.4018/ijghpc.301580","DOIUrl":"https://doi.org/10.4018/ijghpc.301580","url":null,"abstract":"Internet of Things (IoT) is an emerging field in the area of research and the emergence of the Internet of Things has developed an explosion in the area of sensor computing platforms. A wide range of applications has been developed using this sensor platform by using IoT devices ranging from simple devices to complex machines like the implementation of Artificial intelligence in various devices. Developers are working on more complex devices that can generate more performance but at the same time, they are targeting low-cost machine systems like CPU, and sometimes this low cost might generate low performance. To overcome these low-performance issues one should properly differentiate the features so that it can select the proper platform might be a CPU system or it can be a custom platform with hardware accelerators that includes GPUs and FPGAs. These custom platforms are costlier than the CPU systems but it will generate better performance than the CPU systems. This paper shows how FPGA can optimize the performance of the Internet of Things.","PeriodicalId":43565,"journal":{"name":"International Journal of Grid and High Performance Computing","volume":"36 1","pages":"1-15"},"PeriodicalIF":1.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86862753","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}