This demonstrator illustrates the performance of our feedback-channel-free distributed video coding system for extremely low-resolution visual sensors. The demonstrator includes a setup where a low-power sensor capturing 30 x 30 pixels video data is connected to a laptop PC. The video sequence is encoded, decoded and displayed on the computer screen in real-time for side-by-side comparison between the original input and the reconstructed data. A software environment allows the user to adjust all the control parameters of the video codec and to evaluate the influence of changes on the visual quality. The objective performance of the coding system can be monitored in terms of bits per pixel, decoding delays, decoding speed and decoding failures.
{"title":"Real-time distributed video coding simulator for 1K-pixel visual sensor","authors":"Jan Hanca, N. Deligiannis, A. Munteanu","doi":"10.1145/2789116.2802651","DOIUrl":"https://doi.org/10.1145/2789116.2802651","url":null,"abstract":"This demonstrator illustrates the performance of our feedback-channel-free distributed video coding system for extremely low-resolution visual sensors. The demonstrator includes a setup where a low-power sensor capturing 30 x 30 pixels video data is connected to a laptop PC. The video sequence is encoded, decoded and displayed on the computer screen in real-time for side-by-side comparison between the original input and the reconstructed data. A software environment allows the user to adjust all the control parameters of the video codec and to evaluate the influence of changes on the visual quality. The objective performance of the coding system can be monitored in terms of bits per pixel, decoding delays, decoding speed and decoding failures.","PeriodicalId":113163,"journal":{"name":"Proceedings of the 9th International Conference on Distributed Smart Cameras","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127154644","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Selman Ergünay, Vladan Popovic, Kerem Seyid, Y. Leblebici
The Panoptic camera is an omnidirectional multi-aperture visual system realized by mounting multiple imaging sensors on a hemispherical frame. It is a spherical light-field camera system that records light information from any direction around its center. The omnidirectional light-field reconstruction algorithm and its centralized and distributed real-time hardware implementations were previously presented by the authors. In this work, we analyze the advantages and disadvantages of the previous approaches and propose a novel high-performance hybrid architecture based on a tree-based network topology for real-time omnidirectional image reconstruction. The novel hybrid architecture increases the scalability of Panoptic camera systems while utilizing fewer resources. Furthermore, the tree-based structure allows implementing further signal processing applications, such as omnidirectional feature extraction, which was not possible in the centralized and distributed implementations.
{"title":"A novel hybrid architecture for real-time omnidirectional image reconstruction","authors":"Selman Ergünay, Vladan Popovic, Kerem Seyid, Y. Leblebici","doi":"10.1145/2789116.2802647","DOIUrl":"https://doi.org/10.1145/2789116.2802647","url":null,"abstract":"The Panoptic camera is an omnidirectional multi-aperture visual system realized by mounting multiple imaging sensors on a hemispherical frame. It is a spherical light-field camera system that records light information from any direction around its center. The omnidirectional light-field reconstruction algorithm and its centralized and distributed real-time hardware implementations were previously presented by the authors. In this work, we analyze the advantages and disadvantages of the previous approaches and propose a novel high-performance hybrid architecture based on a tree-based network topology for real-time omnidirectional image reconstruction. The novel hybrid architecture increases the scalability of Panoptic camera systems while utilizing fewer resources. Furthermore, the tree-based structure allows implementing further signal processing applications, such as omnidirectional feature extraction, which was not possible in the centralized and distributed implementations.","PeriodicalId":113163,"journal":{"name":"Proceedings of the 9th International Conference on Distributed Smart Cameras","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133907160","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, a multi-object tracking system designed for a low-cost embedded smart camera is proposed. Object tracking constitutes a main step in video-surveillance applications. Because of the number of cameras needed to cover a large area, surveillance applications are constrained by the cost of each node, the power efficiency of the system, the robustness of the tracking algorithm and the need for real-time processing. They require a reliable multi-object tracking algorithm that can run in real time on light computing architectures. In this paper, we propose a tracking pipeline designed for a fixed smart camera that can handle occlusions between objects. We show that the proposed pipeline reaches real-time processing on the Raspberry Pi board equipped with the RaspiCam camera. The tracking quality of the proposed pipeline is evaluated on the publicly available PETS2009 and CAVIAR datasets.
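The abstract does not spell out the association step of the pipeline, but the core of any such tracker is matching new detections to existing tracks frame by frame. A minimal sketch of greedy overlap-based association is shown below; the IoU threshold of 0.3 and the data structures are illustrative assumptions, not the authors' actual design:

```python
def iou(a, b):
    # Boxes as (x1, y1, x2, y2); intersection-over-union overlap.
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def associate(tracks, detections, thresh=0.3):
    """Greedily match existing tracks to new detections by best IoU.

    tracks: {track_id: box}; detections: list of boxes.
    Returns (matches {track_id: detection_index}, unmatched detection indices).
    Detections left unmatched may start new tracks; tracks left unmatched
    may correspond to occluded objects and can be kept alive for a while.
    """
    matches, unmatched = {}, set(range(len(detections)))
    for t_id, t_box in tracks.items():
        best, best_iou = None, thresh
        for d in unmatched:
            o = iou(t_box, detections[d])
            if o > best_iou:
                best, best_iou = d, o
        if best is not None:
            matches[t_id] = best
            unmatched.discard(best)
    return matches, unmatched
```

Keeping unmatched tracks alive for a few frames is one common way to survive the short occlusions the paper targets.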
{"title":"Reliable multi-object tracking dealing with occlusions for a smart camera","authors":"Aziz Dziri, M. Duranton, R. Chapuis","doi":"10.1145/2789116.2789119","DOIUrl":"https://doi.org/10.1145/2789116.2789119","url":null,"abstract":"In this paper, a multi-object tracking system designed for a low-cost embedded smart camera is proposed. Object tracking constitutes a main step in video-surveillance applications. Because of the number of cameras needed to cover a large area, surveillance applications are constrained by the cost of each node, the power efficiency of the system, the robustness of the tracking algorithm and the need for real-time processing. They require a reliable multi-object tracking algorithm that can run in real time on light computing architectures. In this paper, we propose a tracking pipeline designed for a fixed smart camera that can handle occlusions between objects. We show that the proposed pipeline reaches real-time processing on the Raspberry Pi board equipped with the RaspiCam camera. The tracking quality of the proposed pipeline is evaluated on the publicly available PETS2009 and CAVIAR datasets.","PeriodicalId":113163,"journal":{"name":"Proceedings of the 9th International Conference on Distributed Smart Cameras","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132776637","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Marco Trevisi, R. Carmona-Galán, Á. Rodríguez-Vázquez
Feature extraction is used to reduce the amount of resources required to describe a large set of data. A given feature can be represented by a matrix having the same size as the original image but having relevant values only at some specific points. We can consider these sets to be sparse. Under this premise, many algorithms have been devised to extract features from compressive samples. None of them, though, is easily described in hardware. We try to bridge the gap between compressive sensing and hardware design by presenting a sparsifying dictionary that allows compressive sensing reconstruction algorithms to recover features. The idea is to use this work as a starting point for the design of a smart imager capable of compressive feature extraction. To prove this concept, we have devised a simulation using Harris corner detection and applied a standard reconstruction method, the NESTA algorithm, to retrieve corners instead of a full image.
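To make the recovery problem concrete: a sparse feature map is sampled through a random measurement matrix, and a reconstruction algorithm recovers the few active positions from far fewer samples than pixels. NESTA itself is more elaborate; this sketch substitutes a simple orthogonal matching pursuit and synthetic data, so all sizes and values below are illustrative assumptions:

```python
import numpy as np

def omp(Phi, y, k):
    """Greedy orthogonal matching pursuit: recover a k-sparse vector x
    from compressive samples y = Phi @ x (stand-in for NESTA here)."""
    residual, support, coef = y.copy(), [], None
    for _ in range(k):
        # Pick the dictionary atom most correlated with the residual.
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))
        # Re-fit all selected atoms jointly, then update the residual.
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(0)
n, m, k = 64, 32, 3                   # signal length, samples, sparsity
x = np.zeros(n)                       # a sparse "feature map": k active
x[rng.choice(n, k, replace=False)] = rng.uniform(1.0, 2.0, k)
Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random measurement matrix
y = Phi @ x                           # m compressive samples, m < n
x_hat = omp(Phi, y, k)                # sparse recovery from the samples
```

The point of the paper's sparsifying dictionary is precisely to let `Phi` and the recovery target be feature responses (e.g. Harris corners) rather than raw pixels.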
{"title":"Hardware-oriented feature extraction based on compressive sensing","authors":"Marco Trevisi, R. Carmona-Galán, Á. Rodríguez-Vázquez","doi":"10.1145/2789116.2802657","DOIUrl":"https://doi.org/10.1145/2789116.2802657","url":null,"abstract":"Feature extraction is used to reduce the amount of resources required to describe a large set of data. A given feature can be represented by a matrix having the same size as the original image but having relevant values only at some specific points. We can consider these sets to be sparse. Under this premise, many algorithms have been devised to extract features from compressive samples. None of them, though, is easily described in hardware. We try to bridge the gap between compressive sensing and hardware design by presenting a sparsifying dictionary that allows compressive sensing reconstruction algorithms to recover features. The idea is to use this work as a starting point for the design of a smart imager capable of compressive feature extraction. To prove this concept, we have devised a simulation using Harris corner detection and applied a standard reconstruction method, the NESTA algorithm, to retrieve corners instead of a full image.","PeriodicalId":113163,"journal":{"name":"Proceedings of the 9th International Conference on Distributed Smart Cameras","volume":"41 6","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132972700","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Keeping inventories of road assets up to date is an important activity that road authorities and mapping companies face. The information needs to be accurate, as it impacts safety compliance, maintenance and the ability to efficiently route cars through cities using GPS navigation devices. Such inventories are live documents and need to be updated when additions or other changes occur. Currently, authorities and mapping companies survey the roads for changes using dedicated vehicles, although due to excessive costs they are usually not able to do this more often than every few years. Recent research suggests that the overall costs of a mapping/inventory system can be significantly reduced by using an ad-hoc system of low-cost automatic installations in fleet-vehicles such as taxis. This paper proposes a method for performing a cost-benefit analysis of such a system, and then applies it to the specific case of the taxi fleet of Beijing. In particular, the analysis considers the random patterns with which taxis travel over time to estimate coverage, cost as a function of the number of installations, and benefit as a function of surveying frequency. Since the additional benefit of a higher surveying frequency declines while the total cost of the system increases with the number of installations, the optimal number of installations that maximises profit can be computed.
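The optimisation in the last sentence can be sketched numerically: diminishing coverage gains per added taxi against a linear per-installation cost yield an interior profit maximum. The saturating coverage model and all cost/benefit figures below are hypothetical placeholders, not values from the paper:

```python
import math

def expected_coverage(n, k=0.15):
    # Diminishing-returns coverage: added taxis increasingly revisit
    # already-surveyed roads, so coverage saturates (hypothetical form).
    return 1.0 - math.exp(-k * n)

def profit(n, benefit_scale=100_000.0, unit_cost=450.0):
    # Benefit grows with coverage/surveying frequency; cost is linear
    # in the number of installations (both figures are illustrative).
    return benefit_scale * expected_coverage(n) - unit_cost * n

# Scan for the installation count that maximises profit.
best_n = max(range(1, 500), key=profit)
```

Because the marginal benefit decays exponentially while marginal cost is constant, the scan always finds a single interior optimum under this model.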
{"title":"A cost-benefit analysis of an ad-hoc road asset data collection system using fleet-vehicles","authors":"Dana Pordel, L. Petersson","doi":"10.1145/2789116.2789146","DOIUrl":"https://doi.org/10.1145/2789116.2789146","url":null,"abstract":"Keeping inventories of road assets up to date is an important activity that road authorities and mapping companies face. The information needs to be accurate, as it impacts safety compliance, maintenance and the ability to efficiently route cars through cities using GPS navigation devices. Such inventories are live documents and need to be updated when additions or other changes occur. Currently, authorities and mapping companies survey the roads for changes using dedicated vehicles, although due to excessive costs they are usually not able to do this more often than every few years. Recent research suggests that the overall costs of a mapping/inventory system can be significantly reduced by using an ad-hoc system of low-cost automatic installations in fleet-vehicles such as taxis. This paper proposes a method for performing a cost-benefit analysis of such a system, and then applies it to the specific case of the taxi fleet of Beijing. In particular, the analysis considers the random patterns with which taxis travel over time to estimate coverage, cost as a function of the number of installations, and benefit as a function of surveying frequency. Since the additional benefit of a higher surveying frequency declines while the total cost of the system increases with the number of installations, the optimal number of installations that maximises profit can be computed.","PeriodicalId":113163,"journal":{"name":"Proceedings of the 9th International Conference on Distributed Smart Cameras","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132959338","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper we describe the strategy adopted to design, from scratch, an embedded RGBD sensor for accurate and dense depth perception on a low-cost FPGA. This device infers, at more than 30 Hz, dense depth maps according to a state-of-the-art stereo vision processing pipeline entirely mapped into the FPGA, without buffering partial results on external memories. The strategy outlined in this paper enables accurate depth computation with low latency and a simple hardware design. On the other hand, it poses major constraints on the computing structure of the algorithms that fit this simplified architecture; thus, in this paper, we discuss the solutions devised to overcome these issues. We report experimental results concerning practical application scenarios in which the proposed RGBD sensor provides accurate and real-time depth sensing suited for the embedded vision domain.
{"title":"A passive RGBD sensor for accurate and real-time depth sensing self-contained into an FPGA","authors":"S. Mattoccia, Matteo Poggi","doi":"10.1145/2789116.2789148","DOIUrl":"https://doi.org/10.1145/2789116.2789148","url":null,"abstract":"In this paper we describe the strategy adopted to design, from scratch, an embedded RGBD sensor for accurate and dense depth perception on a low-cost FPGA. This device infers, at more than 30 Hz, dense depth maps according to a state-of-the-art stereo vision processing pipeline entirely mapped into the FPGA without buffering partial results on external memories. The strategy outlined in this paper enables accurate depth computation with a low latency and a simple hardware design. On the other hand, it poses major constraints to the computing structure of the algorithms that fit with this simplified architecture and thus, in this paper, we discuss the solutions devised to overcome these issues. We report experimental results concerned with practical application scenarios in which the proposed RGBD sensor provides accurate and real-time depth sensing suited for the embedded vision domain.","PeriodicalId":113163,"journal":{"name":"Proceedings of the 9th International Conference on Distributed Smart Cameras","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121273968","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Peter Reichel, Christoph Hoppe, Jens Döge, Nico Peter
Imagers with programmable, highly parallel signal processing execute computationally intensive processing steps directly on the sensor, thereby allowing the amount of data to be reduced early to the relevant features. For the purposes of architectural exploration during the development of a novel Vision-System-on-Chip (VSoC), it has been modelled at system level. Aside from the integrated control unit with multiple independent control flows, the model also realises the digital and analogue signal processing. Due to its high simulation speed and compatibility with the real system, especially regarding the programs to be executed, the resulting simulation model is very well suited for use during application development. By providing the ability to purposefully introduce parameter deviations or defects at various points of the analogue processing, it becomes possible to study their influence on image processing algorithms executed within the VSoC.
{"title":"Simulation environment for a vision-system-on-chip with integrated processing","authors":"Peter Reichel, Christoph Hoppe, Jens Döge, Nico Peter","doi":"10.1145/2789116.2789133","DOIUrl":"https://doi.org/10.1145/2789116.2789133","url":null,"abstract":"Imagers with programmable, highly parallel signal processing execute computationally intensive processing steps directly on the sensor, thereby allowing early reduction of the amount of data to relevant features. For the purposes of architectural exploration during development of a novel Vision-System-on-Chip (VSoC), it has been modelled on system level. Aside from the integrated control unit with multiple independent control flows, the model also realises digital and analogue signal processing. Due to high simulation speed and compatibility with the real system, especially regarding the programs to be executed, the resulting simulation model is very well suited for usage during application development. By providing the ability to purposefully introduce parameter deviations or defects at various points of analogue processing, it becomes possible to study them with respect to their influence on image processing algorithms executed within the VSoC.","PeriodicalId":113163,"journal":{"name":"Proceedings of the 9th International Conference on Distributed Smart Cameras","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128428808","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Noelia Vállez, José Luis Espinosa-Aranda, O. Déniz-Suárez, Daniel Aguado-Araujo, Gloria Bueno García, Carlos Sanchez-Bueno
The Eyes of Things (EoT) EU H2020 project envisages a computer vision platform that can be used both standalone and embedded into more complex artifacts, particularly for wearable applications, robotics, home products, surveillance, etc. The core hardware will be based on a System on Chip (SoC) that has been designed for maximum performance on the always-demanding vision applications while keeping the lowest energy consumption. This will allow "always on" and truly mobile vision processing. This demo presents the first prototype applications developed within EoT. First, example vision processing applications will be shown. Additionally, an RTSP server implemented on the device will be demonstrated. This server can capture and stream images. Finally, connectivity will be shown using a minimal MQTT broker specifically implemented for the device.
{"title":"The eyes of things project","authors":"Noelia Vállez, José Luis Espinosa-Aranda, O. Déniz-Suárez, Daniel Aguado-Araujo, Gloria Bueno García, Carlos Sanchez-Bueno","doi":"10.1145/2789116.2802648","DOIUrl":"https://doi.org/10.1145/2789116.2802648","url":null,"abstract":"The Eyes of Things (EoT) EU H2020 project envisages a computer vision platform that can be used both standalone and embedded into more complex artifacts, particularly for wearable applications, robotics, home products, surveillance, etc. The core hardware will be based on a System on Chip (SoC) that has been designed for maximum performance on the always-demanding vision applications while keeping the lowest energy consumption. This will allow \"always on\" and truly mobile vision processing. This demo presents the first prototype applications developed within EoT. First, example vision processing applications will be shown. Additionally, an RTSP server implemented on the device will be demonstrated. This server can capture and stream images. Finally, connectivity will be shown using a minimal MQTT broker specifically implemented for the device.","PeriodicalId":113163,"journal":{"name":"Proceedings of the 9th International Conference on Distributed Smart Cameras","volume":"2766 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127439847","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
D. López-Fernández, F. J. Madrid-Cuevas, Ángel Carmona Poyato, R. Muñoz-Salinas, R. Carnicer
Appearance changes due to viewing-angle changes cause difficulties for most gait recognition methods. In this paper, we propose a new approach for multi-view recognition which allows recognizing people walking on curved paths. The recognition is based on 3D angular analysis of the movement of the walking human. A coarse-to-fine gait signature represents local variations in the angular measurements along time. A Support Vector Machine is used for classification, and a sliding temporal window with a majority-vote policy is used to smooth and reinforce the classification results. The proposed approach has been experimentally validated on the publicly available "Kyushu University 4D Gait Database". The results show that this new approach achieves promising results on the problem of gait recognition on curved paths.
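The sliding-window majority vote over per-frame SVM outputs can be sketched in a few lines; the window length of 5 is an illustrative parameter, not the paper's setting:

```python
from collections import Counter, deque

def majority_vote_smooth(labels, window=5):
    """Smooth per-frame classifier outputs with a sliding majority vote.

    Each frame is relabelled with the most frequent class among the
    last `window` predictions, reinforcing temporally consistent IDs.
    """
    buf = deque(maxlen=window)   # keeps only the most recent predictions
    smoothed = []
    for lab in labels:
        buf.append(lab)
        smoothed.append(Counter(buf).most_common(1)[0][0])
    return smoothed

# A spurious single-frame misclassification ('B') is voted away:
print(majority_vote_smooth(['A', 'A', 'B', 'A', 'A']))
# → ['A', 'A', 'A', 'A', 'A']
```

This is why the vote both smooths isolated errors and reinforces a run of consistent decisions.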
{"title":"Multi-view gait recognition on curved trajectories","authors":"D. López-Fernández, F. J. Madrid-Cuevas, Ángel Carmona Poyato, R. Muñoz-Salinas, R. Carnicer","doi":"10.1145/2789116.2789122","DOIUrl":"https://doi.org/10.1145/2789116.2789122","url":null,"abstract":"Appearance changes due to viewing-angle changes cause difficulties for most gait recognition methods. In this paper, we propose a new approach for multi-view recognition which allows recognizing people walking on curved paths. The recognition is based on 3D angular analysis of the movement of the walking human. A coarse-to-fine gait signature represents local variations in the angular measurements along time. A Support Vector Machine is used for classification, and a sliding temporal window with a majority-vote policy is used to smooth and reinforce the classification results. The proposed approach has been experimentally validated on the publicly available \"Kyushu University 4D Gait Database\". The results show that this new approach achieves promising results on the problem of gait recognition on curved paths.","PeriodicalId":113163,"journal":{"name":"Proceedings of the 9th International Conference on Distributed Smart Cameras","volume":"64 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130803976","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper presents a method for tracking people using multiple cameras. The system is implemented with a two-level processing strategy. At the low level, object trajectories are detected in each camera's image sequence (track detection). This procedure involves active-region extraction and matching. At the high level, all the trajectories extracted from the multi-camera system are related in order to create a global view (track matching). This is accomplished by homography transformations between image planes. The total set of detected trajectories and their relations is represented by a graph. Experiments are performed with recorded data sets and the PETS2001 sequence.
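Relating trajectories across cameras via a planar homography amounts to mapping each camera's image-plane points through a 3×3 matrix with a perspective divide. A minimal sketch follows; the matrices here are illustrative stand-ins for the homographies the paper estimates between actual camera views:

```python
import numpy as np

def apply_homography(H, pts):
    """Map Nx2 image-plane points to another plane via a 3x3 homography H."""
    pts = np.asarray(pts, dtype=float)
    homo = np.hstack([pts, np.ones((len(pts), 1))])   # to homogeneous coords
    mapped = homo @ H.T                               # projective transform
    return mapped[:, :2] / mapped[:, 2:3]             # perspective divide

# Hypothetical example: a pure-translation homography shifts a trajectory.
H = np.array([[1.0, 0.0, 5.0],
              [0.0, 1.0, -2.0],
              [0.0, 0.0, 1.0]])
traj = [[0.0, 0.0], [10.0, 20.0]]
global_traj = apply_homography(H, traj)
```

Once every camera's tracks are mapped into the common plane this way, track matching reduces to comparing trajectories in shared coordinates.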
{"title":"People tracking with multi-camera system","authors":"J. Dias, P. Jorge","doi":"10.1145/2789116.2789141","DOIUrl":"https://doi.org/10.1145/2789116.2789141","url":null,"abstract":"This paper presents a method for tracking people using multiple cameras. The system is implemented with a two-level processing strategy. At the low level, object trajectories are detected in each camera's image sequence (track detection). This procedure involves active-region extraction and matching. At the high level, all the trajectories extracted from the multi-camera system are related in order to create a global view (track matching). This is accomplished by homography transformations between image planes. The total set of detected trajectories and their relations is represented by a graph. Experiments are performed with recorded data sets and the PETS2001 sequence.","PeriodicalId":113163,"journal":{"name":"Proceedings of the 9th International Conference on Distributed Smart Cameras","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117198349","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}