Distortion Estimation Using Structural Similarity for Video Transmission over Wireless Networks
Arun Sankisa, A. Katsaggelos, P. Pahalawatta
DOI: 10.1109/ISM.2015.88
Efficient streaming of video over wireless networks requires real-time assessment of the distortion caused by packet loss, especially because predictive coding at the encoder can propagate errors across frames and degrade the overall quality of the transmitted video. This paper presents an algorithm that evaluates the expected receiver distortion at the source by utilizing encoder information, transmission channel characteristics, and error concealment. Specifically, distinct video transmission units, Groups of Blocks (GOBs), are iteratively built at the source, taking into account macroblock coding modes and motion-compensated error concealment for three different packet-loss combinations. The distortion of these units is calculated using the structural similarity (SSIM) metric, and the units are stochastically combined to derive the overall expected distortion. The proposed model provides a more accurate distortion estimate that closely reflects quality as perceived by the human visual system. When incorporated into a content-aware utility function, it yields, in preliminary experiments, improved packet ordering and scheduling efficiency and better overall video quality at the receiver.

Endoscopic Video Retrieval: A Signature-Based Approach for Linking Endoscopic Images with Video Segments
C. Beecks, Klaus Schöffmann, M. Lux, M. S. Uysal, T. Seidl
DOI: 10.1109/ISM.2015.21
In the field of medical endoscopy, more and more surgeons are recording and storing videos of their endoscopic procedures, such as surgeries and examinations, in long-term video archives. To support surgeons in accessing these archives in a content-based way, we propose a simple yet effective signature-based approach: the Signature Matching Distance based on adaptive-binning feature signatures. The proposed distance-based similarity model provides an adaptive representation of the visual properties of endoscopic images and allows these properties to be matched efficiently. We conduct an extensive performance analysis on the task of linking specific endoscopic images with video segments and show the high efficacy of our approach. We are able to link more than 88% of the endoscopic images to their correct video segments, improving on the current state of the art by an order of magnitude.

Interactive Crowd Content Generation and Analysis Using Trajectory-Level Behavior Learning
Sujeong Kim, Aniket Bera, Dinesh Manocha
DOI: 10.1109/ISM.2015.89
We present an interactive approach for analyzing crowd videos and generating content for multimedia applications. Our formulation combines online tracking algorithms from computer vision, non-linear pedestrian motion models from computer graphics, and machine learning techniques to automatically compute trajectory-level pedestrian behaviors for each agent in the video. These learned behaviors are used to detect anomalous behaviors, perform crowd replication, augment crowd videos with virtual agents, and segment the motion of pedestrians. We demonstrate these tasks on indoor and outdoor crowd video benchmarks consisting of tens of human agents; moreover, our algorithm takes less than a tenth of a second per frame on a multi-core PC. The overall approach can handle dense and heterogeneous crowd behaviors and is useful for real-time crowd scene analysis applications.

Improvement of Image and Video Matting with Multiple Reliability Maps
Takahiro Hayashi, Masato Ishimori, N. Ishii, K. Abe
DOI: 10.1109/ISM.2015.28
In this paper, we propose a framework for extending existing matting methods to achieve more reliable alpha estimation. The key idea of the framework is the integration of multiple alpha maps based on their reliabilities. In the proposed framework, the input image is converted into multiple grayscale images with various luminance appearances. Alpha maps are then generated for these grayscale images using an existing matting method. At the same time, reliability maps (single-channel images visualizing the reliability of the estimated alpha values) are generated. Finally, by combining the alpha maps with the highest reliability in each local region, one reliable alpha map is produced. Experimental results show that the proposed framework achieves reliable alpha estimation.

Automatic Video Content Summarization Using Geospatial Mosaics of Aerial Imagery
R. Viguier, Chung-Ching Lin, H. Aliakbarpour, F. Bunyak, Sharath Pankanti, G. Seetharaman, K. Palaniappan
DOI: 10.1109/ISM.2015.124
It is estimated that less than five percent of videos are currently analyzed to any degree. In addition to petabyte-sized multimedia archives, continuing innovations in optics, imaging sensors, camera arrays, (aerial) platforms, and storage technologies indicate that existing and new applications will continue to generate enormous volumes of video imagery for the foreseeable future. Contextual video summarization and activity maps offer one innovative direction for tackling this Big Data problem in computer vision. The goal of this work is to develop semi-automatic exploitation algorithms and tools that increase utility, dissemination, and usage potential by providing quick dynamic overview geospatial mosaics and motion maps. We present a framework to summarize (multiple) video streams from unmanned aerial vehicles (UAVs) or drones, which have very different characteristics from the structured commercial and consumer videos analyzed in the past. Using the geospatial metadata of the video combined with fast low-level image-based algorithms, the proposed method first generates mini-mosaics that can then be combined into geo-referenced meta-mosaic imagery. These geospatial maps enable rapid assessment of hours-long videos with arbitrary spatial coverage from multiple sensors by generating quick-look imagery, composed of multiple mini-mosaics, that summarizes spatiotemporal dynamics such as coverage, dwell time, and activity. The overall summarization pipeline was tested on several DARPA Video and Image Retrieval and Analysis Tool (VIRAT) datasets. We evaluate the effectiveness of the proposed video summarization framework using metrics such as compression and hours of viewing time.

Joint Video and Sparse 3D Transform-Domain Collaborative Filtering for Time-of-Flight Depth Maps
T. Hach, Tamara Seybold, H. Böttcher
DOI: 10.1109/ISM.2015.112
This paper proposes a novel strategy for depth video denoising in RGBD camera systems. Today's depth map sequences obtained by state-of-the-art Time-of-Flight sensors suffer from high temporal noise, so high-level RGB video renderings based on the accompanying depth map's 3D geometry, such as augmented reality applications, exhibit severe temporal flickering artifacts. We address this limitation by decoupling depth map upscaling from the temporal denoising step, so that denoising operates on raw pixels with uncorrelated pixel-wise noise distributions. Our denoising methodology utilizes joint sparse 3D transform-domain collaborative filtering, in which RGB texture information is extracted to yield a more stable, accurate, and highly sparse 3D depth-block representation for the subsequent shrinkage operation. We show the effectiveness of our method on real RGBD camera data and on a publicly available synthetic dataset. The evaluation reveals that our method is superior to state-of-the-art methods and delivers improved, flicker-free depth video streams for future applications that are especially sensitive to temporal noise and arbitrary depth artifacts.

Location Specification and Representation in Multimedia Databases
H. Samet
DOI: 10.1109/ISM.2015.128
Techniques for the specification and representation of the locational component of multimedia data are reviewed. The focus is on how the locational component is specified and how it is represented. For specification, textual specifications are also discussed. For representation, the emphasis is on a sorting approach that yields an index to the locational component, where the data includes both points and objects with a spatial extent.

Go Green with EnVI: the Energy-Video Index
Oche Ejembi, S. Bhatti
DOI: 10.1109/ISM.2015.50
Video is the most prevalent traffic type on the Internet today. Significant research has been done on measuring users' Quality of Experience (QoE) through different metrics. We take the position that energy use must be incorporated into quality metrics for digital video. We present our novel, energy-aware QoE metric for video, the Energy-Video Index (EnVI), and report EnVI measurements from the playback of a diverse set of online videos. We observe that 4K-UHD (2160p) video can use ~30% more energy on a client device than HD (720p), and up to ~600% more network bandwidth than FHD (1080p), without a significant improvement in objective QoE measurements.

Portable Lecture Capture that Captures the Complete Lecture
P. Dickson, Chris Kondrat, Ryan B. Szeto, W. R. Adrion, Tung T. Pham, Tim D. Richards
DOI: 10.1109/ISM.2015.22
Lecture recording is not a new concept, nor is high-resolution recording of multimedia presentations that include computer and whiteboard material. We describe a novel portable lecture capture system that captures not only computer content and video, as most modern lecture capture systems do, but also content from whiteboards. The whiteboard material is captured at high resolution and processed for clarity without the electronic whiteboards required by many capture systems, and the entire lecture is processed in real time. The system we present is the logical next step in lecture capture technology.

Evaluation of Feature Detection in HDR Based Imaging Under Changes in Illumination Conditions
A. Rana, G. Valenzise, F. Dufaux
DOI: 10.1109/ISM.2015.58
High dynamic range (HDR) imaging makes it possible to capture details in both dark and very bright regions of a scene, and is therefore expected to provide greater robustness to illumination changes than conventional low dynamic range (LDR) imaging in tasks such as visual feature extraction. However, it is not clear how large this gain is, nor which ways of using HDR best obtain it. In this paper, we evaluate the first block of the visual feature extraction pipeline, i.e., keypoint detection, using both LDR and different HDR-based modalities when significant illumination changes are present in the scene. To this end, we captured a dataset of two scenes under a wide range of illumination conditions. On these images, we measure how the repeatability of corner and blob interest points is affected by the different LDR/HDR approaches. Our observations confirm the potential of HDR over conventional LDR acquisition. Moreover, extracting features directly from HDR pixel values is more effective than tonemapping first and then extracting features, provided that the HDR luminance information is first encoded into perceptually linear values.