Web-based Mixed Reality Video Fusion with Remote Rendering
Pub Date: 2023-04-01 | DOI: 10.1016/j.vrih.2022.03.005 | Virtual Reality Intelligent Hardware 5(2): 188-199
Qiang Zhou, Zhong Zhou
Mixed Reality (MR) video fusion systems fuse video imagery with 3D scenes. This makes the scene much more realistic and helps users understand the video content and the temporal-spatial correlation between videos, thus reducing cognitive load. MR video fusion is now used in a variety of applications. However, video fusion systems require powerful client machines because video streaming delivery, stitching, and rendering are computation-intensive. Huge bandwidth usage is another critical factor that limits the scalability of video fusion systems. The framework proposed in this paper overcomes this client limitation by utilizing remote rendering. Furthermore, the framework is browser-based, so users can run the MR video fusion system on a laptop or even a tablet, with no extra plug-ins or applications to install. Experiments on diverse metrics demonstrate the effectiveness of the proposed framework.
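The abstract does not give implementation details, but the division of labor is easy to sketch: the browser sends viewpoint updates, and the server renders the fused scene and streams encoded frames back, so the client only decodes video instead of rendering the 3D scene. Below is a minimal illustrative server loop in Python, assuming the `websockets` and `opencv-python` packages; `render_fused_frame`, the port, and the message format are hypothetical placeholders, not the authors' code.

```python
# A minimal remote-rendering server sketch: the browser sends a camera pose,
# the server renders the fused frame and streams it back as JPEG bytes.
import asyncio
import json

import cv2
import numpy as np
import websockets

PORT = 8765  # hypothetical port


def render_fused_frame(view: dict) -> np.ndarray:
    """Placeholder for server-side fusion of video imagery with the 3D scene."""
    frame = np.zeros((480, 640, 3), dtype=np.uint8)
    cv2.putText(frame, f"view={view}", (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 255, 255), 1)
    return frame


async def handle_client(ws):
    # Each incoming message is an illustrative viewpoint, e.g. {"yaw": 0.1, "pitch": 0.2}.
    async for message in ws:
        view = json.loads(message)
        frame = render_fused_frame(view)
        ok, jpeg = cv2.imencode(".jpg", frame, [cv2.IMWRITE_JPEG_QUALITY, 80])
        if ok:
            await ws.send(jpeg.tobytes())  # binary frame back to the browser


async def main():
    async with websockets.serve(handle_client, "0.0.0.0", PORT):
        await asyncio.Future()  # run forever


if __name__ == "__main__":
    asyncio.run(main())
```

Because all heavy work stays on the server, even a tablet-class client only needs to decode a video stream and send poses.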
Compression of Surface Texture Acceleration Signal Based on Spectrum Characteristics
Pub Date: 2023-04-01 | DOI: 10.1016/j.vrih.2022.01.006 | Virtual Reality Intelligent Hardware 5(2): 110-123
Dongyan Nie, Xiaoying Sun
Background
Adequate data collection can enhance the realism of online rendering or offline playback of surface-texture haptics. A parallel challenge is how to reduce communication delays and improve storage-space utilization.
Methods
Based on the similarity of the short-term amplitude spectrum trend, this paper proposes a frequency-domain compression method. A compression framework is designed that first maps the amplitude spectrum into a trend-similarity grayscale image and compresses it with a still-picture compression method, and then adaptively encodes the maximum amplitude and part of the initial phase of each time window to achieve the final compression.
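As a rough sketch of this pipeline under stated assumptions: the window length, max-normalization as the "trend" mapping, PNG as the still-picture codec, and the number of retained phase bins are all illustrative choices, not the paper's exact parameters.

```python
# Window the acceleration signal, take per-window amplitude spectra, normalize
# each window by its maximum to form the trend image, compress that image with
# a still-picture codec, and keep the per-window maxima plus part of the phase.
import numpy as np
import cv2


def compress_windows(signal: np.ndarray, win: int = 256):
    n_win = len(signal) // win
    frames = signal[:n_win * win].reshape(n_win, win)
    spec = np.fft.rfft(frames, axis=1)
    amp = np.abs(spec)                                 # amplitude spectrum per window
    max_amp = amp.max(axis=1, keepdims=True)           # per-window maximum amplitude
    trend = amp / np.maximum(max_amp, 1e-12)           # similar trends -> similar rows
    gray = (trend * 255).astype(np.uint8)              # trend-similarity grayscale image
    ok, png = cv2.imencode(".png", gray)               # still-picture compression step
    phase = np.angle(spec)[:, :8]                      # keep part of the initial phase
    return png.tobytes(), max_amp.ravel(), phase


sig = np.random.randn(4096).astype(np.float32)         # stand-in for acceleration data
blob, amps, phases = compress_windows(sig)
print(len(blob), amps.shape, phases.shape)
```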
Results
A comparison between the original and recovered signals shows that when the time-frequency similarity is 90%, the average compression ratio of our method is 9.85% for a single interaction point. The subjective similarity score reached an excellent level, with an average of 87.85.
Conclusions
Our method can be used for offline compression of vibrotactile data. For multiple interaction points in space, the trend-similarity grayscale image can be reused, further reducing the compression ratio.
MSSTNet: Multi-scale facial videos pulse extraction network based on separable spatiotemporal convolution and dimension separable attention
Pub Date: 2023-04-01 | DOI: 10.1016/j.vrih.2022.07.001 | Virtual Reality Intelligent Hardware 5(2): 124-141
Changchen Zhao, Hongsheng Wang, Yuanjing Feng
Background
Using remote photoplethysmography (rPPG) to estimate blood volume pulse in a non-contact way has been an active research topic in recent years. Existing methods are mainly based on a single-scale region of interest (ROI). However, some noise signals that are not easily separated in single-scale space can be separated easily in multi-scale space. In addition, existing spatiotemporal networks mainly focus on local spatiotemporal information and lack emphasis on temporal information, which is crucial in pulse extraction, resulting in insufficient spatiotemporal feature modeling.
Methods
This paper proposes a multi-scale facial video pulse extraction network based on separable spatiotemporal convolution and dimension-separable attention. First, to address the single-scale ROI problem, we construct a multi-scale feature space for initial signal separation. Second, separable spatiotemporal convolution and dimension-separable attention are designed for efficient spatiotemporal correlation modeling; they increase the information interaction between long-span time and space dimensions and put more emphasis on temporal features.
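To make the factorization concrete, here is an illustrative PyTorch sketch of a separable spatiotemporal convolution in the spirit described: a full 3D convolution split into a spatial (1, 3, 3) step followed by a temporal (3, 1, 1) step. Channel sizes and activations are assumptions, not the paper's exact configuration.

```python
# Separable spatiotemporal convolution: per-frame spatial filtering followed
# by a convolution that mixes information across time only.
import torch
import torch.nn as nn


class SeparableSTConv(nn.Module):
    def __init__(self, c_in: int, c_out: int):
        super().__init__()
        self.spatial = nn.Conv3d(c_in, c_out, kernel_size=(1, 3, 3),
                                 padding=(0, 1, 1))   # spatial-only step
        self.temporal = nn.Conv3d(c_out, c_out, kernel_size=(3, 1, 1),
                                  padding=(1, 0, 0))  # temporal-only step
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time, height, width)
        return self.act(self.temporal(self.act(self.spatial(x))))


clip = torch.randn(1, 3, 16, 64, 64)     # 16 frames of a 64x64 face crop
feat = SeparableSTConv(3, 32)(clip)
print(feat.shape)                         # torch.Size([1, 32, 16, 64, 64])
```

Compared with a single (3, 3, 3) kernel, the factorized form adds an extra nonlinearity between the spatial and temporal steps and lets the temporal kernel be emphasized independently.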
Results
The signal-to-noise ratio (SNR) of the proposed network reaches 9.58 dB on the PURE dataset and 6.77 dB on the UBFC-rPPG dataset, outperforming state-of-the-art algorithms.
Conclusions
Results show that fusing multi-scale signals generally yields better results than methods based on a single-scale signal alone. The proposed separable spatiotemporal convolution and dimension-separable attention mechanism contribute to more accurate pulse signal extraction.
Intelligent Fire Information System Based on 3D GIS
Pub Date: 2023-04-01 | DOI: 10.1016/j.vrih.2022.07.002 | Virtual Reality Intelligent Hardware 5(2): 93-109
Jinxing Hu, Zhihan Lv, Diping Yuan, Bing He, Dongmei Yan
This work aims to build a comprehensive and effective fire emergency management system based on the Internet of Things (IoT) and achieve truly intelligent fire rescue. A smart fire-protection information system was designed based on the IoT, and a detailed analysis was conducted of rescue-vehicle scheduling and the evacuation of trapped persons during fire rescue. The intelligent fire visualization platform, built on a three-dimensional (3D) Geographic Information System (GIS), covers project overview, equipment status, equipment classification, equipment alarm information, alarm classification, alarm statistics, equipment account information, and other modules. Live video accessed through the visual interface can clearly identify the stage of the fire, which facilitates the deployment of rescue equipment and personnel. The vehicle scheduling model in the system uses two objective functions, emergency rescue time and the number of vehicles, to compute the Pareto non-dominated solution set. In addition, an evacuation path optimization method based on the Improved Ant Colony (IAC) algorithm was designed to dynamically optimize building fire evacuation paths. The experimental results indicate that in the smoldering-fire scene, all detection-signal values at t = 17 s were significantly larger than their initial values. Moreover, the probabilities of smoldering fire and open fire were relatively large according to the corresponding fire-situation probability functions, demonstrating that the model can detect fire. When planning evacuation routes, the IAC algorithm avoided passages near the fire and its spreading areas as much as possible and took the safety of the trapped persons as its premise. Therefore, the IoT-based fire information system has important value for ensuring fire safety and carrying out emergency rescue, and is worthy of popularization and application.
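For the scheduling step, the abstract describes selecting vehicle schedules that trade off two objectives. A minimal sketch of that screening, with invented candidate data, keeps every schedule whose (rescue time, vehicle count) pair is not dominated by another candidate; the actual model's constraints and solver are not reproduced here.

```python
# Pareto non-dominated filtering over two minimization objectives:
# emergency rescue time and number of vehicles.
from typing import List, Tuple


def pareto_front(candidates: List[Tuple[float, int]]) -> List[Tuple[float, int]]:
    front = []
    for a in candidates:
        # a is dominated if some other candidate is no worse on both objectives
        dominated = any(b[0] <= a[0] and b[1] <= a[1] and b != a for b in candidates)
        if not dominated:
            front.append(a)
    return front


# (emergency rescue time in minutes, number of vehicles) -- illustrative data
schedules = [(12.0, 5), (10.5, 6), (10.5, 4), (15.0, 3), (9.0, 8)]
print(pareto_front(schedules))  # [(10.5, 4), (15.0, 3), (9.0, 8)]
```

A dispatcher would then pick one schedule from the front according to the situation, e.g. favoring time early in a fire and vehicle economy for minor incidents.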
Hardware—A New Open Access Journal
Peter C. Hauser
Pub Date: 2023-03-30 | DOI: 10.3390/hardware1010001
Hardware (ISSN 2813-6640) [...]
Publisher's Note: Hardware—A New Open Access Journal
Liliane Auwerter
Pub Date: 2023-03-30 | DOI: 10.3390/hardware1010002
The development of new hardware has never been as accessible as it is today [...]
A Transformer Architecture based mutual attention for Image Anomaly Detection
Pub Date: 2023-02-01 | DOI: 10.1016/j.vrih.2022.07.006 | Virtual Reality Intelligent Hardware 5(1): 57-67
Mengting Zhang, Xiuxia Tian
Background
Image anomaly detection is a popular task in computer graphics and is widely used in industrial settings. Previous works addressing this problem often train CNN-based models (e.g., autoencoders, GANs) to reconstruct covered parts of input images and compute the difference between the input and the reconstruction. However, convolutional operations excel at extracting local features, which makes it difficult to identify larger image anomalies. To this end, we propose a transformer architecture based on mutual attention for image anomaly separation. This architecture can capture long-term dependencies and fuse local features with global features to facilitate better image anomaly detection. Our method was extensively evaluated on several benchmarks; experimental results show that it improves detection capability by 3.1% and localization capability by 1.0% compared with state-of-the-art reconstruction-based methods.
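For context, the reconstruction-based scoring these methods share can be sketched in a few lines: the anomaly map is the per-pixel difference between the input and its reconstruction. The model below is a stand-in; the paper's mutual-attention transformer would take its place.

```python
# Reconstruction-based anomaly scoring: large per-pixel error marks anomalies.
import torch


def anomaly_map(model: torch.nn.Module, image: torch.Tensor) -> torch.Tensor:
    with torch.no_grad():
        recon = model(image)                      # reconstruct the (covered) input
    return (image - recon).abs().mean(dim=1)      # per-pixel error over channels


identity = torch.nn.Identity()                    # placeholder "reconstructor"
img = torch.rand(1, 3, 256, 256)
score_map = anomaly_map(identity, img)            # all zeros for the identity model
print(score_map.shape, float(score_map.max()))
```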
View Interpolation Networks for Reproducing Material Appearance of Specular Objects
Pub Date: 2023-02-01 | DOI: 10.1016/j.vrih.2022.11.001 | Virtual Reality Intelligent Hardware 5(1): 1-10
Chihiro Hoshizawa, Takashi Komuro
In this study, we propose view interpolation networks to reproduce changes in the brightness of an object's surface depending on the viewing direction, which is important in reproducing the material appearance of a real object. We use an original and a modified version of U-Net for image transformation. The networks were trained to generate images from intermediate viewpoints of four cameras placed at the corners of a square. We conducted an experiment with three different combinations of methods and training data formats. We found that it is best to input the coordinates of the viewpoints together with the four camera images and to use images from random viewpoints as the training data.
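A sketch of the input-assembly step this finding suggests, in PyTorch: the four corner-camera images are stacked channel-wise and concatenated with constant channels holding the target viewpoint coordinates before being fed to the image-transformation network. The exact coordinate encoding is an assumption, not the paper's specification.

```python
# Build the network input: 4 RGB corner views (12 channels) plus 2 constant
# channels carrying the interpolated viewpoint's coordinates.
import torch


def make_input(corner_images: torch.Tensor, viewpoint: torch.Tensor) -> torch.Tensor:
    # corner_images: (batch, 4, 3, H, W); viewpoint: (batch, 2), e.g. in [0, 1]^2
    b, n, c, h, w = corner_images.shape
    stacked = corner_images.reshape(b, n * c, h, w)            # 12 image channels
    coords = viewpoint.view(b, 2, 1, 1).expand(b, 2, h, w)     # 2 coordinate channels
    return torch.cat([stacked, coords], dim=1)                 # (batch, 14, H, W)


imgs = torch.rand(1, 4, 3, 128, 128)
vp = torch.tensor([[0.3, 0.7]])
print(make_input(imgs, vp).shape)   # torch.Size([1, 14, 128, 128])
```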
Metaverse Virtual Social Center for the Elderly Communication During the Social Distancing
Hui Liang, Jiupeng Li, Yi Wang, Junjun Pan, Yazhou Zhang, Xiaohang Dong
Pub Date: 2023-02-01 | DOI: 10.1016/j.vrih.2022.07.007 | Virtual Reality Intelligent Hardware 5(1): 68-80
Unrolling Rain-guided Detail Recovery Network for Single Image Deraining
Pub Date: 2023-02-01 | DOI: 10.1016/j.vrih.2022.06.002 | Virtual Reality Intelligent Hardware 5(1): 11-23
Kailong Lin, Shaowei Zhang, Yu Luo, Jie Ling
Owing to the rapid development of deep networks, single-image deraining has achieved significant progress. Various architectures have been designed to remove rain recursively or directly, and most rain streaks can be removed by existing deraining methods. However, many of these methods lose detail during deraining, resulting in visual artifacts. To resolve this detail-loss issue, we propose a novel unrolling rain-guided detail recovery network (URDRN) for single-image deraining, based on the observation that the most degraded areas of the background image tend to be the most rain-corrupted regions. Furthermore, to address the problem that most existing deep-learning-based methods trivialize the observation model and simply learn an end-to-end mapping, the proposed URDRN unrolls single-image deraining into two subproblems: rain extraction and detail recovery. Specifically, a context-aggregation attention network first extracts rain streaks and generates a rain attention map as an indicator to guide the detail-recovery process. For the detail-recovery sub-network, with the guidance of the rain attention map, a simple encoder-decoder model is sufficient to recover the lost details. Experiments on several well-known benchmark datasets show that the proposed approach achieves competitive performance compared with other state-of-the-art methods.
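A schematic sketch of the two-stage unrolling, with tiny placeholder sub-networks standing in for the paper's context-aggregation attention network and encoder-decoder; layer sizes are illustrative only.

```python
# Two-stage pipeline: a rain-extraction network produces a rain attention map,
# which is concatenated with the input to guide a detail-recovery network.
import torch
import torch.nn as nn


class URDRNSketch(nn.Module):
    def __init__(self):
        super().__init__()
        self.rain_net = nn.Sequential(             # stand-in for the context-
            nn.Conv2d(3, 16, 3, padding=1),        # aggregation attention network
            nn.ReLU(inplace=True),
            nn.Conv2d(16, 1, 3, padding=1),
            nn.Sigmoid(),                          # rain attention map in [0, 1]
        )
        self.recover_net = nn.Sequential(          # stand-in encoder-decoder
            nn.Conv2d(4, 16, 3, padding=1),        # input image + attention map
            nn.ReLU(inplace=True),
            nn.Conv2d(16, 3, 3, padding=1),
        )

    def forward(self, rainy: torch.Tensor) -> torch.Tensor:
        attn = self.rain_net(rainy)                # where the rain is
        guided = torch.cat([rainy, attn], dim=1)   # guidance for detail recovery
        return self.recover_net(guided)            # derained image


out = URDRNSketch()(torch.rand(1, 3, 128, 128))
print(out.shape)    # torch.Size([1, 3, 128, 128])
```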