{"title":"EdgeEye","authors":"Peng Liu, Bozhao Qi, Suman Banerjee","doi":"10.1145/3213344.3213345","DOIUrl":"https://doi.org/10.1145/3213344.3213345","url":null,"abstract":"Deep learning with Deep Neural Networks (DNNs) can achieve much higher accuracy on many computer vision tasks than classic machine learning algorithms. Because of their high demand for both computation and storage resources, DNNs are often deployed in the cloud. Unfortunately, executing deep learning inference in the cloud, especially for real-time video analysis, often incurs high bandwidth consumption, high latency, reliability issues, and privacy concerns. Moving DNNs close to the data source, following an edge computing paradigm, is a good way to address these problems, but the lack of an open-source framework with a high-level API complicates the deployment of deep learning-enabled services at the Internet edge. This paper presents EdgeEye, an edge computing framework for real-time intelligent video analytics applications. EdgeEye provides a high-level, task-specific API so that developers can focus solely on application logic. It does so by enabling developers to transform models trained with popular deep learning frameworks into deployable components with minimal effort, and it leverages optimized inference engines from industry to achieve high inference performance and efficiency.","PeriodicalId":433649,"journal":{"name":"Proceedings of the 1st International Workshop on Edge Systems, Analytics and Networking","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115613870","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
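The EdgeEye abstract emphasizes a high-level, task-specific API that hides inference details from application developers. A toy sketch of what such an API surface might look like; all names here (`EdgeEyeClient`, `detect_objects`, the endpoint) are invented for illustration, and the paper's actual interface may differ:

```python
# Hypothetical sketch of a task-specific edge video-analytics API in the
# spirit of the EdgeEye abstract. Everything below is invented placeholder
# code, not the framework's real API.

class EdgeEyeClient:
    """Toy stand-in for a client of an on-premises edge inference service."""

    def __init__(self, endpoint):
        self.endpoint = endpoint

    def detect_objects(self, frame, labels, min_confidence=0.5):
        # A real client would ship the frame to an edge inference engine;
        # here we fake a response to show the calling convention only.
        return [d for d in self._fake_infer(frame)
                if d["label"] in labels and d["score"] >= min_confidence]

    @staticmethod
    def _fake_infer(frame):
        # Canned detections standing in for a DNN's output.
        return [{"label": "person", "score": 0.91, "box": (10, 20, 50, 80)},
                {"label": "dog", "score": 0.40, "box": (5, 5, 15, 15)}]

client = EdgeEyeClient("http://edge-gw.local:8080")
detections = client.detect_objects(frame=None, labels={"person"})
```

The point of such an API is that the application sees task-level results (labeled boxes) rather than tensors, which is what lets developers "focus solely on application logic."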
{"title":"Semi-Edge: From Edge Caching to Hierarchical Caching in Network Fog","authors":"Yining Hua, L. Guan, K. Kyriakopoulos","doi":"10.1145/3213344.3213352","DOIUrl":"https://doi.org/10.1145/3213344.3213352","url":null,"abstract":"In recent content delivery mechanisms, popular content tends to be placed closer to users for better delivery performance and lower network resource occupation. Caching mechanisms in Content Delivery Networks (CDNs), Mobile Edge Clouds (MECs), and fog computing have implemented the edge caching paradigm for different application scenarios. However, state-of-the-art caching mechanisms in the literature are mostly bound to specific application scenarios. With the rapid development of heterogeneous networks, the lack of uniform caching management has become an issue. This paper therefore proposes a novel caching mechanism, Semi-Edge (SE) caching. SE is based on in-network caching and can be applied generically to various types of network fog. Furthermore, two content allocation strategies, SE-U (unicast) and SE-B (broadcast), are proposed within the SE mechanism. The performance of SE-U and SE-B is evaluated in three typical topologies under various scenario contexts. Compared to edge caching, SE can reduce latency by 7% and increase the cache hit ratio by 45%.","PeriodicalId":433649,"journal":{"name":"Proceedings of the 1st International Workshop on Edge Systems, Analytics and Networking","volume":"2 3","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114121423","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
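The hierarchical lookup that "semi-edge" caching implies (check the edge tier first, then an intermediate tier, then the origin) can be sketched as follows. The LRU policy, tier sizes, and request trace are illustrative assumptions, not the paper's actual SE-U/SE-B strategies:

```python
# Minimal sketch of hierarchical caching between an edge tier and a larger
# intermediate ("semi-edge") tier. Eviction policy and sizes are assumptions.
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()

    def get(self, key):
        if key in self.store:
            self.store.move_to_end(key)      # mark as recently used
            return True
        return False

    def put(self, key):
        self.store[key] = True
        self.store.move_to_end(key)
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)   # evict least recently used

def fetch(key, tiers):
    """Return the index of the tier that served the request
    (len(tiers) means the request went all the way to the origin)."""
    for i, tier in enumerate(tiers):
        if tier.get(key):
            return i
    for tier in tiers:                       # fill caches on the way back
        tier.put(key)
    return len(tiers)

edge, semi = LRUCache(2), LRUCache(4)
hits = [fetch(k, [edge, semi]) for k in ["a", "b", "a", "c", "d", "b"]]
```

In this trace the final request for "b" misses the small edge cache but is caught by the semi-edge tier, which is the hit-ratio benefit a hierarchy adds over pure edge caching.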
{"title":"Proceedings of the 1st International Workshop on Edge Systems, Analytics and Networking","authors":"","doi":"10.1145/3213344","DOIUrl":"https://doi.org/10.1145/3213344","url":null,"abstract":"","PeriodicalId":433649,"journal":{"name":"Proceedings of the 1st International Workshop on Edge Systems, Analytics and Networking","volume":"79 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116408726","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Enabling Edge Devices that Learn from Each Other: Cross Modal Training for Activity Recognition","authors":"Tianwei Xing, S. Sandha, Bharathan Balaji, Supriyo Chakraborty, M. Srivastava","doi":"10.1145/3213344.3213351","DOIUrl":"https://doi.org/10.1145/3213344.3213351","url":null,"abstract":"Edge devices rely extensively on machine learning for intelligent inference and pattern matching. However, edge devices use a multitude of sensing modalities and are exposed to wide-ranging contexts. It is difficult to develop separate machine learning models for each scenario, as manual labeling is not scalable. To reduce the amount of labeled data and to speed up the training process, we propose to transfer knowledge between edge devices by using unlabeled data. Our approach, called RecycleML, uses cross modal transfer to accelerate the learning of edge devices across different sensing modalities. Using human activity recognition as a case study on our collected CMActivity dataset, we observe that RecycleML reduces the amount of required labeled data by at least 90% and speeds up the training process by up to 50 times in comparison to training the edge device from scratch.","PeriodicalId":433649,"journal":{"name":"Proceedings of the 1st International Workshop on Edge Systems, Analytics and Networking","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125411942","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
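The core idea behind cross modal transfer, reusing shared task-level layers so that only a small modality-specific part needs labeled training, can be sketched with toy weight matrices. The dict-of-layers representation, layer names, and dimensions below are invented for illustration and are not RecycleML's actual architecture:

```python
# Conceptual sketch of cross-modal transfer: copy the shared task layers
# from a model trained on a data-rich modality (e.g. video) into a model
# for a data-poor modality (e.g. IMU), leaving only the small
# modality-specific part to be trained with labels.
import random

random.seed(0)

def rand_matrix(rows, cols):
    return [[random.gauss(0, 1) for _ in range(cols)] for _ in range(rows)]

def new_model(input_dim, shared_dim=8, n_classes=4):
    return {
        "modality_specific": rand_matrix(input_dim, shared_dim),  # lower layers
        "shared": rand_matrix(shared_dim, n_classes),             # task layers
    }

video_model = new_model(input_dim=128)  # assume: trained on plentiful labels
imu_model = new_model(input_dim=6)      # new modality with scarce labels

# Transfer: the shared task layers are copied verbatim; only the 6x8
# modality-specific block must now be learned, hence far fewer labels.
imu_model["shared"] = [row[:] for row in video_model["shared"]]
```

This is why the abstract can claim a large reduction in labeled data: the transferred layers encode the task, and only the mapping from the new sensor into the shared representation remains to be trained.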
{"title":"A Multi-Cloudlet Infrastructure for Future Smart Cities: An Empirical Study","authors":"Julien Gedeon, Jeff Krisztinkovics, Christian Meurisch, Michael Stein, L. Wang, M. Mühlhäuser","doi":"10.1145/3213344.3213348","DOIUrl":"https://doi.org/10.1145/3213344.3213348","url":null,"abstract":"The emerging paradigm of edge computing has proposed cloudlets to offload data and computations from mobile, resource-constrained devices. However, little attention has been paid to the question of where to deploy cloudlets in the context of smart city environments. In this vision paper, we propose to deploy cloudlets on a city-wide scale by leveraging three kinds of existing infrastructure: cellular base stations, routers, and street lamps. We motivate the use of this infrastructure with real location data of nearly 50,000 access points from a major city. We provide an analysis of the potential coverage for the different cloudlet types. Besides spatial coverage, we also consider user traces from two mobile applications. Our results show that upgrading only a relatively small number of access points can lead to city-scale cloudlet coverage. This is especially true for the coverage analysis of the mobility traces, where mobile users are within the communication range of a cloudlet-enabled access point most of the time.","PeriodicalId":433649,"journal":{"name":"Proceedings of the 1st International Workshop on Edge Systems, Analytics and Networking","volume":"271 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116175214","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
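The spatial coverage analysis described above reduces to a simple geometric question: what fraction of user positions lie within communication range of at least one cloudlet-enabled access point? A minimal sketch, with invented coordinates and an assumed 50-unit range (the paper's actual ranges and data differ):

```python
# Toy coverage computation: fraction of user positions within range of at
# least one access point. All positions and the range are invented.
import math

def coverage_fraction(users, access_points, comm_range):
    def covered(user):
        return any(math.dist(user, ap) <= comm_range for ap in access_points)
    return sum(covered(u) for u in users) / len(users)

aps = [(0, 0), (100, 0)]                       # cloudlet-enabled access points
users = [(10, 10), (90, 5), (50, 50), (200, 200)]
frac = coverage_fraction(users, aps, comm_range=50)
```

Run over mobility traces instead of static positions, the same computation yields the time-in-coverage result the abstract reports.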
{"title":"The Web as a Distributed Computing Platform","authors":"N. Vasilakis, Pranjal Goel, Henri Maxime Demoulin, Jonathan M. Smith","doi":"10.1145/3213344.3213346","DOIUrl":"https://doi.org/10.1145/3213344.3213346","url":null,"abstract":"The web is perceived as a vast, interconnected graph of content, but the reality is very different. Immense computational resources are used to deliver this content and its associated services, and an even larger pool of computing power resides in edge user devices. This latent potential has gone unused. Ar frames the web as a distributed computing platform, unifying processing and storage infrastructure with a core programming model and a common set of browser-provided services. By exposing these inherent capacities to programmers, it unleashes a far more powerful capability: the Internet as a distributed computing system. We have implemented a prototype system that, while modest in scale, fully illustrates what can be realized.","PeriodicalId":433649,"journal":{"name":"Proceedings of the 1st International Workshop on Edge Systems, Analytics and Networking","volume":"98 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134379494","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Profit-aware Resource Management for Edge Computing Systems","authors":"C. Anglano, M. Canonico, Marco Guazzone","doi":"10.1145/3213344.3213349","DOIUrl":"https://doi.org/10.1145/3213344.3213349","url":null,"abstract":"Edge Computing (EC) represents the most promising solution to the real-time or near-real-time processing needs of the data generated by Internet of Things devices. The emergence of Edge Infrastructure Providers (EIPs) will bring the benefits of EC to those enterprises that cannot afford to purchase, deploy, and manage their own edge infrastructures. The main goal of an EIP is to maximize its profit, i.e., the revenues it earns from hosting applications, minus the cost of running the infrastructure and the penalties it pays when the QoS requirements of hosted applications are not met. To maximize profit, an EIP must strike a balance between these factors. In this paper we present Online Profit Maximization (OPM), an approximation algorithm that aims at increasing the profit of an EIP without a priori knowledge. We assess the performance of OPM by simulating its behavior in a variety of realistic scenarios, in which data are generated by a population of moving users, and by comparing its results against those attained by an oracle (i.e., an unrealistic algorithm that always makes optimal decisions) and by a state-of-the-art alternative. Our results indicate that OPM always achieves results within 1% of optimal and always outperforms the alternative solution.","PeriodicalId":433649,"journal":{"name":"Proceedings of the 1st International Workshop on Edge Systems, Analytics and Networking","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124011039","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
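The profit definition in the abstract (revenue minus infrastructure cost minus QoS penalties) is direct to express in code. The trivial admission rule below only illustrates the trade-off an online algorithm such as OPM must navigate; it is an invented example, not the paper's algorithm:

```python
# Sketch of the EIP profit model and a naive admission decision.
# The numbers and the admission rule are illustrative assumptions.

def profit(revenue, infra_cost, qos_penalty):
    """Profit = hosting revenue - infrastructure cost - QoS penalties."""
    return revenue - infra_cost - qos_penalty

def admit(app, spare_capacity):
    """Admit an application only if it fits and its expected profit is
    positive; overloading the infrastructure would raise QoS penalties
    on everything already hosted."""
    fits = app["load"] <= spare_capacity
    expected = profit(app["revenue"], app["cost"], app["expected_penalty"])
    return fits and expected > 0

app = {"load": 2, "revenue": 10.0, "cost": 4.0, "expected_penalty": 1.0}
```

An online algorithm makes these decisions without knowing future arrivals, which is exactly why the paper compares OPM against a clairvoyant oracle.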
{"title":"Sizing Buffers of IoT Edge Routers","authors":"J. A. Khan, Muhammad Shahzad, A. Butt","doi":"10.1145/3213344.3213354","DOIUrl":"https://doi.org/10.1145/3213344.3213354","url":null,"abstract":"In typical IoT systems, sensors and actuators are connected to small embedded computers, called IoT devices, and the IoT devices are connected to one or more appropriate cloud services over the Internet through an edge access router. A very important design aspect of an IoT edge router is the size of the output packet buffer of the interface that connects to the access link. Selecting an appropriate size for this buffer is crucial because it directly impacts two key performance metrics: 1) access link utilization and 2) latency. In this paper, we calculate the output buffer size that keeps the access link highly utilized while significantly lowering the average latency experienced by packets. To calculate this buffer size, we theoretically model the average TCP congestion window size of all IoT devices while eliminating three key assumptions of prior art that, as we demonstrate through a measurement study, do not hold for IoT TCP traffic. We show that for IoT traffic, the buffer size calculated by our method results in 50% lower queuing delay compared to state-of-the-art schemes while achieving similar access link utilization and loss rate.","PeriodicalId":433649,"journal":{"name":"Proceedings of the 1st International Workshop on Edge Systems, Analytics and Networking","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129473252","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
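For context, the baselines this line of work starts from are the classic bandwidth-delay product (BDP) rule and its BDP/sqrt(n) refinement for n concurrent long-lived TCP flows. The abstract argues that prior assumptions break for IoT traffic and derives its own size, so the formulas below are only the standard baselines, not the paper's result:

```python
# Standard buffer-sizing rules of thumb, shown as context for the paper's
# claim. C is the access-link rate, RTT the round-trip time.
import math

def bdp_buffer_bytes(link_rate_bps, rtt_s):
    """Classic rule of thumb: buffer = C * RTT."""
    return link_rate_bps * rtt_s / 8            # bits -> bytes

def shared_flows_buffer_bytes(link_rate_bps, rtt_s, n_flows):
    """Refinement for n desynchronized long-lived flows:
    buffer = C * RTT / sqrt(n)."""
    return bdp_buffer_bytes(link_rate_bps, rtt_s) / math.sqrt(n_flows)

# A hypothetical 100 Mbit/s access link, 50 ms RTT, 100 IoT flows:
full_bdp = bdp_buffer_bytes(100e6, 0.05)
shared = shared_flows_buffer_bytes(100e6, 0.05, 100)
```

Both rules assume long-lived, window-saturating TCP flows; the paper's measurement study shows IoT flows violate exactly that kind of assumption, which is why a different model of the congestion window is needed.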
{"title":"Enabling GPU-assisted Antivirus Protection on Android Devices through Edge Offloading","authors":"Dimitris Deyannis, Rafail Tsirbas, G. Vasiliadis, R. Montella, Sokol Kosta, S. Ioannidis","doi":"10.1145/3213344.3213347","DOIUrl":"https://doi.org/10.1145/3213344.3213347","url":null,"abstract":"Antivirus software is the most popular tool for detecting and stopping malicious or unwanted files. However, the performance requirements of traditional host-based antivirus make its wide adoption on mobile, embedded, and hand-held devices questionable. The computational and memory-intensive processing needed to cope with evolved and sophisticated malware makes its deployment on mobile processors a hard task. Moreover, its increasing complexity may result in vulnerabilities that can be exploited by malware. In this paper, we first describe a GPU-based antivirus algorithm for Android devices. Then, given the limited number of GPU-enabled Android devices, we present different architecture designs that exploit code offloading to run the antivirus on more powerful machines. This approach enables lower execution and memory overheads, better performance, and improved deployability and management. We evaluate the performance, scalability, and efficacy of the system in several different scenarios and setups, and show that the time to detect a malware sample is 8.4 times lower than with the typical local execution approach.","PeriodicalId":433649,"journal":{"name":"Proceedings of the 1st International Workshop on Edge Systems, Analytics and Networking","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126083938","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
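The offloading architecture can be caricatured in a few lines: a constrained device delegates signature scanning to a callable that, in a real deployment, would be an RPC to a more capable (possibly GPU-backed) edge server. The signature set and scan function below are invented placeholders; production engines use far more sophisticated matching than substring search:

```python
# Toy sketch of edge-offloaded signature scanning. The signature "database"
# and the substring-based scan are illustrative placeholders only.

SIGNATURES = {b"EVIL_PAYLOAD", b"X5O!P%@AP"}   # invented byte patterns

def scan(data: bytes) -> bool:
    """Return True if any known signature occurs in the data."""
    return any(sig in data for sig in SIGNATURES)

def scan_with_offload(data: bytes, offload=None, local_scan=scan):
    # On a constrained device, `offload` would be an RPC stub to an edge
    # server; here it is just a callable with the same contract.
    return offload(data) if offload is not None else local_scan(data)

clean = scan_with_offload(b"hello world")                      # local path
dirty = scan_with_offload(b"...EVIL_PAYLOAD...", offload=scan) # offloaded
```

Keeping the local and offloaded paths behind the same interface is what lets the designs in the paper trade memory and CPU on the device against network round-trips to the edge.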
{"title":"Voice enabling mobile applications with UIVoice","authors":"Ahmad Bisher Tarakji, Jian Xu, Juan A. Colmenares, Iqbal Mohomed","doi":"10.1145/3213344.3213353","DOIUrl":"https://doi.org/10.1145/3213344.3213353","url":null,"abstract":"Improvements in cloud-based speech recognition have led to an explosion in voice assistants, as bespoke devices in homes, cars, and wearables, or on smartphones. In this paper, we present UIVoice, through which we enable voice assistants (which heavily utilize the cloud) to dynamically interact with mobile applications running at the edge. We present a framework that can be used by third-party developers to easily create Voice User Interfaces (VUIs) on top of existing applications. We demonstrate the feasibility of our approach through a prototype based on Android and Amazon Alexa, describe how we added voice to several popular applications, and provide an initial performance evaluation. We also highlight research challenges that are relevant to the edge computing community.","PeriodicalId":433649,"journal":{"name":"Proceedings of the 1st International Workshop on Edge Systems, Analytics and Networking","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121720376","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
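A VUI layer of the kind the UIVoice abstract describes maps assistant intents (as a cloud assistant like Alexa would emit them) to UI actions on a running app. A minimal dispatch-table sketch; all intent names, app names, and actions are invented for illustration and are not UIVoice's actual API:

```python
# Hypothetical intent-to-UI-action dispatch for a voice UI layered on an
# existing app. Every name below is an invented placeholder.

UI_ACTIONS = {
    ("music_app", "PlayIntent"):
        lambda slots: f"tap(play), queue={slots.get('song')}",
    ("music_app", "PauseIntent"):
        lambda slots: "tap(pause)",
}

def handle_intent(app, intent, slots=None):
    """Translate a recognized voice intent into a UI action on `app`."""
    action = UI_ACTIONS.get((app, intent))
    if action is None:
        return "error: no VUI mapping"
    return action(slots or {})

result = handle_intent("music_app", "PlayIntent", {"song": "Hey Jude"})
```

The interesting systems work, which the paper evaluates, is in driving the real UI of an unmodified app at the edge while the speech recognition stays in the cloud.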