Discriminative multi-modality non-negative sparse graph model for action recognition
Yuanbo Chen, Yanyun Zhao, Bojin Zhuang, A. Cai
Pub Date: 2014-12-01 | DOI: 10.1109/VCIP.2014.7051502
A discriminative multi-modality non-negative sparse (DMNS) graph model is proposed in this paper. In the model, features in each modality are first projected into a Mahalanobis space by a transformation learned for that modality; a multi-modality non-negative sparse graph is then constructed in the Mahalanobis space with coefficients shared across modalities. Both labeled and unlabeled data can be introduced into the graph, and label propagation can then be performed to predict the labels of the unlabeled samples. Extensive experiments on two benchmark datasets demonstrate the advantages of the proposed DMNS-graph method over state-of-the-art methods.
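
A minimal sketch of the label-propagation step described above, assuming the DMNS graph has already been reduced to a non-negative affinity matrix `W` (e.g. from the shared sparse-coding coefficients); the symmetric normalization and the parameters `alpha` and `n_iter` follow the classic label-propagation recipe and are not taken from the paper:

```python
import numpy as np

def propagate_labels(W, Y, alpha=0.99, n_iter=100):
    """Spread labels from labeled to unlabeled samples over graph W.

    W : (n, n) non-negative affinity matrix.
    Y : (n, c) one-hot rows for labeled samples, zero rows for unlabeled ones.
    """
    # Symmetrically normalize the graph: S = D^{-1/2} W D^{-1/2}.
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    S = D_inv_sqrt @ W @ D_inv_sqrt
    # Iterate F <- alpha * S F + (1 - alpha) * Y until approximate convergence.
    F = Y.astype(np.float64).copy()
    for _ in range(n_iter):
        F = alpha * (S @ F) + (1 - alpha) * Y
    return F.argmax(axis=1)  # predicted class index per sample
```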
{"title":"Discriminative multi-modality non-negative sparse graph model for action recognition","authors":"Yuanbo Chen, Yanyun Zhao, Bojin Zhuang, A. Cai","doi":"10.1109/VCIP.2014.7051502","DOIUrl":"https://doi.org/10.1109/VCIP.2014.7051502","url":null,"abstract":"A discriminative multi-modality non-negative sparse (DMNS) graph model is proposed in this paper. In the model, features in each modality are first projected into the Mahalanobis space by a transformation learned for this modality, a multi-modality non-negative sparse graph is then constructed in the Mahalanobis space with shared coefficients across modalities. Both the labeled and unlabeled data can be introduced into the graph, and label propagation can then be performed to predict labels of the unlabeled samples. Extensive experiments over two benchmark datasets demonstrate the advantages of the proposed DMNS-graph method over the state-of-the-art methods.","PeriodicalId":166978,"journal":{"name":"2014 IEEE Visual Communications and Image Processing Conference","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124051400","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Depth estimation by combining stereo matching and coded aperture
C. Wang, E. Sahin, O. Suominen, A. Gotchev
Pub Date: 2014-12-01 | DOI: 10.1109/VCIP.2014.7051561
We investigate the improvements in depth estimation that can be achieved by combining coded apertures with stereo cameras. We analyze several stereo camera setups equipped with different sets of coded apertures to explore such possibilities. The results of this analysis are encouraging in the sense that coded apertures can, in some cases, provide valuable complementary information to stereo-vision-based depth estimation. In addition, we take advantage of the stereo camera arrangement to obtain a single-shot, multiple-coded-aperture system. We show that with this system it is possible to extract depth information robustly, by exploiting the inherent relation between the disparity and defocus cues, even for scene regions that are problematic for stereo matching.
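
A hedged sketch of how the two cues could be fused once both are expressed as cost volumes over the same depth hypotheses; the normalization and the blending weight `lam` are illustrative choices, not the authors' method:

```python
import numpy as np

def fuse_depth_costs(cost_stereo, cost_defocus, lam=0.5):
    """cost_*: (H, W, D) volumes indexed by the same depth hypotheses."""
    # Bring the two cues onto a comparable scale before blending.
    cs = cost_stereo / (cost_stereo.max() + 1e-12)
    cd = cost_defocus / (cost_defocus.max() + 1e-12)
    fused = (1.0 - lam) * cs + lam * cd
    # Winner-take-all: defocus evidence can break ties in regions where
    # stereo matching is ambiguous (e.g. occlusions, repeated textures).
    return fused.argmin(axis=2)
```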
{"title":"Depth estimation by combining stereo matching and coded aperture","authors":"C. Wang, E. Sahin, O. Suominen, A. Gotchev","doi":"10.1109/VCIP.2014.7051561","DOIUrl":"https://doi.org/10.1109/VCIP.2014.7051561","url":null,"abstract":"We investigate possible improvements that can be achieved in depth estimation by merging coded apertures and stereo cameras. We analyze several stereo camera setups which are equipped with different sets of coded apertures to explore such possibilities. The demonstrated results of this analysis are encouraging in the sense that coded apertures can provide valuable complementary information to stereo vision based depth estimation in some cases. In addition to that, we take advantage of stereo camera arrangement to have a single shot multiple coded aperture system. We show that with this system, it is possible to extract depth information robustly, by utilizing the inherent relation between the disparity and defocus cues, even for scene regions which are problematic for stereo matching.","PeriodicalId":166978,"journal":{"name":"2014 IEEE Visual Communications and Image Processing Conference","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117133339","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Fast hierarchical cost volume aggregation for stereo-matching
Sergey Smirnov, A. Gotchev
Pub Date: 2014-12-01 | DOI: 10.1109/VCIP.2014.7051615
Some of the best-performing local stereo-matching approaches use cross-bilateral filters for proper cost aggregation. Recent attempts have been directed toward efficient approximations of such filters aimed at higher speed. In this paper, we suggest a simple yet efficient coarse-to-fine cost volume aggregation scheme, which employs pyramidal decomposition of the cost volume followed by edge-avoiding reconstruction and aggregation. The scheme substantially reduces the computational complexity while providing fair quality of the estimated disparity maps compared to other approximate bilateral filtering schemes. In fact, the speed of the proposed technique is comparable to that of fixed-kernel aggregation implemented through integral images.
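
A rough sketch of the coarse-to-fine idea for a single cost slice, assuming OpenCV for the pyramid operations; the edge-avoiding weight below is one plausible choice derived from the guidance image, not the authors' exact reconstruction filter:

```python
import numpy as np
import cv2

def hierarchical_aggregate(cost, guide, levels=4, sigma_color=0.1):
    """cost: (H, W) slice of the cost volume; guide: (H, W) float image in [0, 1]."""
    # Pyramidal decomposition of the cost slice and the guidance image.
    cost_pyr, guide_pyr = [cost], [guide]
    for _ in range(levels - 1):
        cost_pyr.append(cv2.pyrDown(cost_pyr[-1]))
        guide_pyr.append(cv2.pyrDown(guide_pyr[-1]))
    # Coarse-to-fine reconstruction: blend the upsampled coarse aggregate in,
    # but only where the guidance image is locally smooth (edge-avoiding).
    agg = cost_pyr[-1]
    for lvl in range(levels - 2, -1, -1):
        size = cost_pyr[lvl].shape[::-1]  # cv2 expects (width, height)
        up = cv2.pyrUp(agg, dstsize=size)
        up_guide = cv2.pyrUp(guide_pyr[lvl + 1], dstsize=size)
        w = np.exp(-((guide_pyr[lvl] - up_guide) ** 2) / (2.0 * sigma_color**2))
        agg = w * up + (1.0 - w) * cost_pyr[lvl]
    return agg
```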
{"title":"Fast hierarchical cost volume aggregation for stereo-matching","authors":"Sergey Smirnov, A. Gotchev","doi":"10.1109/VCIP.2014.7051615","DOIUrl":"https://doi.org/10.1109/VCIP.2014.7051615","url":null,"abstract":"Some of the best performing local stereo-matching approaches use cross-bilateral filters for proper cost aggregation. The recent attempts have been directed toward efficient approximations of such filter aimed at higher speed. In this paper, we suggest a simple yet efficient coarse-to-fine cost volume aggregation scheme, which employs pyramidal decomposition of the cost volume followed by edge-avoiding reconstruction and aggregation. The scheme substantially reduces the computational complexity while providing fair quality of the estimated disparity maps compared to other approximated bilateral filtering schemes. In fact, the speed of the proposed technique is comparable with the speed of fixed kernel aggregation implemented through integral images.","PeriodicalId":166978,"journal":{"name":"2014 IEEE Visual Communications and Image Processing Conference","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114263158","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Fast and viewpoint robust human detection in uncluttered environments
Paul Blondel, A. Potelle, C. Pégard, Rogelio Lozano
Pub Date: 2014-12-01 | DOI: 10.1109/VCIP.2014.7051621
Human detection is a very popular field of computer vision. Few works propose a solution for detecting people regardless of the camera's viewpoint, as is required for UAV applications. In this context even state-of-the-art detectors can fail to detect people; we found that the Integral Channel Features (ICF) detector is ineffective in such conditions. In this paper, we propose an approach that retains the assets of the ICF while considerably extending its angular robustness during detection. The main contributions of this work are: a new framework based on the Cluster Boosting Tree and the ICF detector for viewpoint-robust human detection; and a new training dataset that accounts for the changes in human shape that occur when the pitch angle of the camera changes. We show that our detector (the PRD) is superior to the ICF for detecting people from complex viewpoints in uncluttered environments, and that its computation time is compatible with real-time operation.
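
For context, a minimal sketch of the integral-channel mechanism that the ICF detector (and hence the proposed framework) builds on: per-pixel feature channels are converted to integral images so that any rectangular feature sum costs four lookups. The toy channel set below is illustrative, not the authors' configuration:

```python
import numpy as np

def integral_channels(gray):
    """gray: (H, W) float image. Returns integral images of a toy channel set."""
    gy, gx = np.gradient(gray)
    channels = [gray, np.hypot(gx, gy)]  # intensity + gradient magnitude
    # Zero-pad on top/left so rectangle sums need no boundary checks.
    return [np.pad(c, ((1, 0), (1, 0))).cumsum(axis=0).cumsum(axis=1)
            for c in channels]

def rect_sum(ii, y0, x0, y1, x1):
    """O(1) sum of a channel over rows y0..y1-1 and cols x0..x1-1."""
    return ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]
```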
{"title":"Fast and viewpoint robust human detection in uncluttered environments","authors":"Paul Blondel, A. Potelle, C. Pégard, Rogelio Lozano","doi":"10.1109/VCIP.2014.7051621","DOIUrl":"https://doi.org/10.1109/VCIP.2014.7051621","url":null,"abstract":"Human detection is a very popular field of computer vision. Few works propose a solution for detecting people whatever the camera's viewpoint such as for UAV applications. In this context even state-of-the-art detectors can fail to detect people. We found that the Integral Channel Features detector (ICF) is inoperant in such a context. In this paper, we propose an approach to still benefit from the assets of the ICF while considerably extending the angular robustness during the detection. The main contributions of this work are: a new framework based on the Cluster Boosting Tree and the ICF detector for viewpoint robust human detection; a new training dataset for taking into account the human shape modifications occuring when the pitch angle of the camera changes. We showed that our detector (the PRD) is superior to the ICF for detecting people from complex viewpoints in uncluttered environments and that the computation time of the detector is real-time compatible.","PeriodicalId":166978,"journal":{"name":"2014 IEEE Visual Communications and Image Processing Conference","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126925104","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Independent uniform prediction mode for screen content video coding
Xingyu Zhang, R. Cohen, A. Vetro
Pub Date: 2014-12-01 | DOI: 10.1109/VCIP.2014.7051521
Many of the existing video coding standards in use today were developed primarily using camera-captured content as test material. Today, with the more widespread use of connected devices, there is increased interest in developing video coding tools that target screen content video. Screen content video is often characterized by sharp edges, noiseless graphics-generated regions, repeated patterns, limited sets of colors, etc. This paper presents an independent uniform prediction (IUP) mode for improving the coding efficiency of screen content video. IUP chooses one color out of a small set of global colors to form a uniform prediction block. Unlike existing palette-based modes, IUP does not have to construct and signal a color index map for every coded block. Experimental results using IUP in the HEVC Range Extensions 6.0 framework are presented, along with results using complexity-reduction techniques that make the IUP-based encoder faster than the reference encoder.
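
A hedged sketch of the prediction step as described: pick, from a small global palette, the single colour whose uniform block best matches the current block, so only the palette index needs signalling. The SAD criterion and palette handling are assumptions for illustration:

```python
import numpy as np

def iup_predict(block, palette):
    """block: (N, N) samples; palette: sequence of K global colour values."""
    # Cost of predicting the whole block with each uniform candidate colour.
    sads = [np.abs(block.astype(np.int64) - c).sum() for c in palette]
    idx = int(np.argmin(sads))
    pred = np.full(block.shape, palette[idx], dtype=block.dtype)
    return idx, pred  # idx is the only side information to signal
```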
{"title":"Independent uniform prediction mode for screen content video coding","authors":"Xingyu Zhang, R. Cohen, A. Vetro","doi":"10.1109/VCIP.2014.7051521","DOIUrl":"https://doi.org/10.1109/VCIP.2014.7051521","url":null,"abstract":"Many of the existing video coding standards in use today were developed primarily using camera-captured content as test material. Today, with the more widespread use of connected devices, there is an increased interest in developing video coding tools that target screen content video. Screen content video is often characterized by having sharp edges, noiseless graphics-generated region, repeated patterns, limited sets of colors, etc. This paper presents an independent uniform prediction (IUP) mode for improving the coding efficiency of screen content video. IUP chooses one color out of a small set of global colors to form a uniform prediction block. Unlike existing palette-based modes, IUP does not have to construct and signal a color index map for every block that is coded. Experimental results using IUP in the HEVC Range Extensions 6.0 framework are presented, along with results using techniques that reduce complexity so that the IUP-based encoder is faster than the reference encoder.","PeriodicalId":166978,"journal":{"name":"2014 IEEE Visual Communications and Image Processing Conference","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126959445","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Full reference image quality metric for stereo images based on Cyclopean image computation and neural fusion
A. Chetouani
Pub Date: 2014-12-01 | DOI: 10.1109/VCIP.2014.7051516
In this paper, we present a new Stereo Full-Reference Image Quality Metric (SFR-IQM) based on Cyclopean Image (CI) computation and 2D IQM fusion. The Cyclopean images of the reference image and its degraded version are first computed from the left and right views. 2D measures are then extracted from the obtained CIs and combined using an Artificial Neural Network (ANN) to derive a single index. The 3D LIVE Image Quality Database has been used to evaluate our method and its ability to predict subjective judgments. The obtained results have been compared to recent state-of-the-art methods, and the experiments show the relevance of our method.
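
An illustrative cyclopean-image computation, assuming a precomputed disparity map for the left view; the equal-weight averaging of the matched left/right pixels is an assumption here, and the paper's CI model may weight the views differently:

```python
import numpy as np

def cyclopean_image(left, right, disparity):
    """left, right: (H, W) views; disparity: (H, W) left-view disparities."""
    h, w = left.shape
    ci = np.zeros((h, w), dtype=np.float64)
    for y in range(h):
        for x in range(w):
            xr = int(round(x - disparity[y, x]))
            xr = min(max(xr, 0), w - 1)  # clamp the match to image bounds
            ci[y, x] = 0.5 * left[y, x] + 0.5 * right[y, xr]
    return ci
```

Applied to both the reference and the degraded pair, this yields the two CIs from which the 2D measures fed to the ANN are computed.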
{"title":"Full reference image quality metric for stereo images based on Cyclopean image computation and neural fusion","authors":"A. Chetouani","doi":"10.1109/VCIP.2014.7051516","DOIUrl":"https://doi.org/10.1109/VCIP.2014.7051516","url":null,"abstract":"In this paper, we present a New Stereo Full-Reference Image Quality Metric (SFR-IQM) based on Cyclopean Image (CI) computation and 2D IQM fusion. The Cyclopean images of the reference image and its degraded version are first computed from the left and the right views. 2D measures are then extracted from the obtained CIs and are combined using an Artificial Neural Networks (ANN) in order to derive a single index. The 3D LIVE Image Quality Database has been here used to evaluate our method and its capability to predict the subjective judgments. The obtained results have been compared to some recent methods considered as the state-of-the-art. The experimental results show the relevance of our method.","PeriodicalId":166978,"journal":{"name":"2014 IEEE Visual Communications and Image Processing Conference","volume":"338 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122640477","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Efficient depth propagation in videos with GPU-acceleration
Manuel Ivancsics, N. Brosch, M. Gelautz
Pub Date: 2014-12-01 | DOI: 10.1109/VCIP.2014.7051557
In this paper we propose an optimized semiautomatic approach for efficient 2D-to-3D video conversion. It is based on a conversion algorithm that leverages segmentation and filtering techniques to propagate sparse, user-provided depth information. Our GPU acceleration of the work of Brosch et al. (2011) significantly reduces the computation time of the original algorithm. Since the limited capacity of the GPU's onboard memory hinders the parallel processing of large data such as videos, we additionally propose a temporally coherent clip-based 2D-to-3D conversion approach for long videos. Evaluations show that the proposed optimized conversion approach is capable of generating high-quality results while significantly reducing the execution time compared to the original, unoptimized approach.
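
A sketch of the clip-based strategy under stated assumptions: the video is split into overlapping clips that fit in GPU memory, each clip is converted independently, and overlapping frames are averaged to keep the propagated depth temporally coherent. The clip length and overlap below are illustrative parameters:

```python
import numpy as np

def convert_in_clips(frames, convert_clip, clip_len=64, overlap=8):
    """frames: (T, H, W) array; convert_clip: maps a clip to per-frame depth."""
    T = frames.shape[0]
    depth = np.zeros(frames.shape, dtype=np.float64)
    weight = np.zeros(T)
    start = 0
    while start < T:
        end = min(start + clip_len, T)
        depth[start:end] += convert_clip(frames[start:end])
        weight[start:end] += 1.0
        if end == T:
            break
        start = end - overlap  # re-process `overlap` frames for smooth seams
    return depth / weight[:, None, None]  # average in the overlap regions
```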
{"title":"Efficient depth propagation in videos with GPU-acceleration","authors":"Manuel Ivancsics, N. Brosch, M. Gelautz","doi":"10.1109/VCIP.2014.7051557","DOIUrl":"https://doi.org/10.1109/VCIP.2014.7051557","url":null,"abstract":"In this paper we propose an optimized semiautomatic approach for efficient 2D-to-3D video conversion. It is based on a conversion algorithm that leverages segmentation and filtering techniques to propagate sparse depth information that was provided by a user. Our GPU acceleration of in the work of Brosch et al. (2011) significantly reduces the computation time of the original algorithm. Since the limited capacity of the CPU's onboard memory hinders the parallel execution of large data such as videos, we additionally propose a temporally coherent clip-based 2D-to-3D conversion approach for long videos. Evaluations show that the proposed, optimized conversion approach is capable of generating high-quality results, while significantly reducing the execution time compared to the original, un-optimized approach.","PeriodicalId":166978,"journal":{"name":"2014 IEEE Visual Communications and Image Processing Conference","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123204396","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Analysis and optimization of x265 encoder
Q. Hu, Xiaoyun Zhang, Zhiyong Gao, Jun Sun
Pub Date: 2014-12-01 | DOI: 10.1109/VCIP.2014.7051616
x265 is an open-source encoder project which aims to deliver the world's fastest and most computationally efficient HEVC encoder. Although x265 already incorporates many optimization techniques, it is still not able to encode HD videos in real time, even at its faster presets. In this paper, we investigate the encoding framework and computational complexity of x265 in depth, and find that the RDO process is the most time-consuming part. We then propose an efficient prediction scheme that reduces the number of RDO evaluations and includes early skip detection and fast intra mode decision. Experimental results show that the proposed method improves the speed of x265 from 19.86 fps to 37.76 fps for HD test sequences, i.e., a 47.44% complexity reduction, with only a 1.37% BDBR coding performance loss.
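
A hedged sketch of an early-skip test of the kind the scheme describes: if the merge/SKIP candidate already leaves almost no residual energy in a CU, the rest of the RDO mode search is pruned. The per-pixel SAD threshold is illustrative; the authors' actual criteria are more elaborate:

```python
import numpy as np

def early_skip(cu, merge_pred, sad_per_pixel_thresh=1.5):
    """cu, merge_pred: (N, N) luma blocks. True means: prune the RDO search."""
    residual = cu.astype(np.int64) - merge_pred.astype(np.int64)
    return np.abs(residual).mean() < sad_per_pixel_thresh
```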
{"title":"Analysis and optimization of x265 encoder","authors":"Q. Hu, Xiaoyun Zhang, Zhiyong Gao, Jun Sun","doi":"10.1109/VCIP.2014.7051616","DOIUrl":"https://doi.org/10.1109/VCIP.2014.7051616","url":null,"abstract":"x265 is an open-source encoder project which aims to deliver the world's fastest and most computationally efficient HEVC encoder. Although x265 has been developed efficiently with many optimization techniques, it is still not able to encode HD videos in real time even at its faster setting. In this paper, we deeply investigate the encoding framework and computational complexity of x265, and find that RDO process is the most time consuming part. Then, an efficient prediction scheme is proposed which includes decreasing the number of RDO times, early skip detection and fast intra mode decision. Experimental results show that the proposed method improves the speed of x265 from 19.86fps to 37.76fps for HD test sequences, i.e., 47.44% complexity reduction, with only 1.37% BDBR coding performance loss.","PeriodicalId":166978,"journal":{"name":"2014 IEEE Visual Communications and Image Processing Conference","volume":"72 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121131309","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Demo: DLP based anti-piracy display system
Zhongpai Gao, Guangtao Zhai, Xiaolin Wu, Xiongkuo Min, Chunjia Hu
Pub Date: 2014-12-01 | DOI: 10.1109/VCIP.2014.7051571
Camcorder piracy has a great impact on the movie industry. Although there are many methods to prevent recording in theatres, no recognized technology satisfies the need to defeat camcorder piracy while having no effect on the audience. To realize anti-piracy, we use a new paradigm of information display technology called temporal psychovisual modulation (TPVM). TPVM exploits the difference between the image formation mechanisms of human eyes and of imaging sensors. Based on this difference, we build a prototype system on the DLP® LightCrafter 4500™ platform, which features high-speed pattern display. The display system serves as a proof of concept of the anti-piracy system.
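
A toy illustration of the TPVM principle, under stated assumptions: the display alternates frame pairs whose temporal average is the intended image (what the integrating eye perceives), while any single frame (what a short-exposure camcorder samples) is corrupted by strong complementary noise. The amplitude and pairing are illustrative, and the clipping makes the average only approximate near the extremes:

```python
import numpy as np

def tpvm_frame_pair(image, amp=0.4, rng=None):
    """image: float array in [0, 1]. Returns the two high-rate display frames."""
    rng = np.random.default_rng(0) if rng is None else rng
    noise = rng.uniform(-amp, amp, size=image.shape)
    f1 = np.clip(image + noise, 0.0, 1.0)
    f2 = np.clip(image - noise, 0.0, 1.0)
    return f1, f2  # (f1 + f2) / 2 ~ image for the integrating eye
```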
{"title":"Demo: DLP based anti-piracy display system","authors":"Zhongpai Gao, Guangtao Zhai, Xiaolin Wu, Xiongkuo Min, Chunjia Hu","doi":"10.1109/VCIP.2014.7051571","DOIUrl":"https://doi.org/10.1109/VCIP.2014.7051571","url":null,"abstract":"Camcorder piracy has great impact on the movie industry. Although there are many methods to prevent recording in theatre, no recognized technology satisfies the need of defeating camcorder piracy as well as having no effect on the audience. To realize anti-piracy, we uses a new paradigm of information display technology, called temporal psychovisual modulation (TPVM). TPVM exploits the difference in image formation mechanisms of human eyes and imaging sensors. Based on this difference, we build a prototype system on the platform of DLP® LightCrafter 4500™ which features high speed pattern display. The display system serves as a proof-of-concept of anti-piracy system.","PeriodicalId":166978,"journal":{"name":"2014 IEEE Visual Communications and Image Processing Conference","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129663151","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Key view selection in distributed multiview coding
Thomas Maugey, G. Petrazzuoli, P. Frossard, Marco Cagnazzo, B. Pesquet-Popescu
Pub Date: 2014-12-01 | DOI: 10.1109/VCIP.2014.7051612
Multiview image and video systems with a large number of views lead to new problems in data representation, transmission and user interaction. In order to reduce the data volume, most distributed multiview coding schemes exploit the inter-view redundancies at the decoder side, using view synthesis from key views. When many views are considered, two questions become fundamental: i) how many key views should be chosen to keep a good reconstruction quality at a reasonable coding cost? ii) where should they be placed in the multiview sequences? In this paper we propose an algorithm for selecting the key views in a distributed multiview coding scheme. Based on a novel metric for the correlation between views, we formulate an optimization problem for positioning the key views such that both the distortion of the reconstruction and the coding rate cost are effectively minimized. We then propose an optimization strategy based on a shortest-path algorithm that determines both the optimal number of key views and their positions in the image set. We experimentally validate our solution in a practical distributed multiview coding system and show that considering the 3D scene geometry in the key view positioning brings significant rate-distortion improvements compared to the distance-based key view selection commonly used in the literature.
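
A sketch of the shortest-path formulation described above: views on the camera line are nodes, an edge (i, j) means "make i and j consecutive key views", and its weight models the rate cost of coding key view j plus the distortion of synthesizing the views between i and j. The dynamic program below finds the minimum-cost path from the first to the last view; the cost function itself is a placeholder for the paper's correlation-based model:

```python
def select_key_views(n_views, edge_cost):
    """edge_cost(i, j): rate-distortion cost of consecutive key views i < j."""
    INF = float("inf")
    best = [INF] * n_views
    prev = [-1] * n_views
    best[0] = 0.0  # the first view is assumed to be a key view
    for j in range(1, n_views):
        for i in range(j):
            c = best[i] + edge_cost(i, j)
            if c < best[j]:
                best[j], prev[j] = c, i
    # Backtrack from the last view to recover the selected key-view positions.
    path, j = [], n_views - 1
    while j != -1:
        path.append(j)
        j = prev[j]
    return path[::-1]  # yields both the number and the positions of key views
```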
{"title":"Key view selection in distributed multiview coding","authors":"Thomas Maugey, G. Petrazzuoli, P. Frossard, Marco Cagnazzo, B. Pesquet-Popescu","doi":"10.1109/VCIP.2014.7051612","DOIUrl":"https://doi.org/10.1109/VCIP.2014.7051612","url":null,"abstract":"Multiview image and video systems with large number of views lead to new problems in data representation, transmission and user interaction. In order to reduce the data volumes, most distributed multiview coding schemes exploit the inter-view redundancies at the decoder side, using view synthesis from key views. In the situation where many views are considered, the two following questions become fundamental: i) how many key views have to be chosen for keeping a good reconstruction quality with reasonable coding cost? ii) where to place them optimally in the multiview sequences? We propose in this paper an algorithm for selecting the key views in a distributed multiview coding scheme. Based on a novel metric for the correlation between the views, we formulate an optimization problem for the positioning of the key views such that both the distortion of the reconstruction and the coding rate cost are effectively minimized. We then propose a new optimization strategy based on shortest path algorithm that permits to determine both the optimal number of key views and their positions in the image set. We experimentally validate our solution in a practical distributed multiview coding system and we show that considering the 3D scene geometry in the key view positioning brings significant rate-distortion improvements compared to distance-based key view selection as it is commonly done in the literature.","PeriodicalId":166978,"journal":{"name":"2014 IEEE Visual Communications and Image Processing Conference","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134554107","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}