Reinforcement learning for video encoder control in HEVC
Philipp Helle, H. Schwarz, T. Wiegand, K. Müller
2017 International Conference on Systems, Signals and Image Processing (IWSSIP). Pub Date: 2017-05-22. DOI: 10.1109/IWSSIP.2017.7965586
In today's video compression systems, the encoder typically follows an optimization procedure to find a compressed representation of the video signal. While the primary optimization criteria are bit rate and image distortion, low complexity of this procedure may also be important in some applications, making complexity a third objective. We approach this problem by treating the encoding procedure as a sequential decision process, making it amenable to reinforcement learning. Our learning algorithm computes a strategy in a compact functional representation, which is then employed in the video encoder to control its search. By including measured execution time in the reinforcement signal with a Lagrangian weight, we realize a trade-off between RD performance and computational complexity controlled by a single parameter. Using the reference software test model (HM) of the HEVC video coding standard, we show that over half of the encoding time can be saved at the same RD performance.
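The trade-off described in the abstract can be sketched as a reward that folds measured execution time into the usual rate-distortion cost with a single Lagrangian-style weight. The function and parameter names below are illustrative assumptions, not the authors' formulation:

```python
def reinforcement_signal(distortion, rate, exec_time, lam, mu):
    """Hypothetical reward for an encoder-control agent.

    Combines the usual rate-distortion cost (distortion + lam * rate)
    with measured execution time weighted by mu; raising mu trades
    RD performance for lower complexity. The exact signal used in the
    paper is not reproduced here.
    """
    rd_cost = distortion + lam * rate
    return -(rd_cost + mu * exec_time)

# A slightly better RD outcome that takes five times as long scores
# worse once execution time carries a nonzero weight:
fast = reinforcement_signal(distortion=10.0, rate=2.0, exec_time=1.0, lam=0.5, mu=2.0)
slow = reinforcement_signal(distortion=9.5, rate=2.0, exec_time=5.0, lam=0.5, mu=2.0)
```

Setting `mu = 0` recovers a pure RD criterion, which is how a single parameter controls the complexity trade-off.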
Software and hardware HEVC encoding
Jan Kufa, T. Kratochvil
IWSSIP 2017. Pub Date: 2017-05-22. DOI: 10.1109/IWSSIP.2017.7965585
In comparison with older standards, High Efficiency Video Coding (HEVC) significantly improves coding efficiency. At the same time, it increases the computational complexity of coding, so encoding takes longer. In this paper, we evaluate different implementations of HEVC, some of which can take advantage of a multicore Central Processing Unit (CPU), while others are accelerated by a Video Engine (VE) in the Graphics Processing Unit (GPU). We also use different predefined quality presets, which set the balance between video quality and encoding speed. A further aspect is the comparison of power consumption and component utilization in a Personal Computer (PC) across the HEVC implementations. The research covers both Full HD and Ultra HD resolutions. Our experimental results show that hardware-accelerated encoding consumes less CPU time with only a small impact on video quality.
Efficient frame-compatible stereoscopic video coding using HEVC screen content coding
Jarosław Samelak, J. Stankowski, M. Domański
IWSSIP 2017. Pub Date: 2017-05-22. DOI: 10.1109/IWSSIP.2017.7965587
The paper presents an application of the emerging HEVC Screen Content Coding to frame-compatible compression of stereoscopic video. Such a solution may be an alternative to Multiview HEVC, the state-of-the-art dedicated technique for multiview video compression. The paper provides an extensive description of the main differences between the two compression techniques. The authors also present an adaptation of Screen Content Coding to compress stereoscopic video as quickly and efficiently as possible. The paper reports experimental results comparing the HEVC Screen Content Coding and Main profiles for frame-compatible compression of stereoscopic video. The advantages and disadvantages of the proposed technique are enumerated in the conclusions.
Ensemble of CNN and rich model for steganalysis
Kai Liu, Jianhua Yang, Xiangui Kang
IWSSIP 2017. Pub Date: 2017-05-22. DOI: 10.1109/IWSSIP.2017.7965617
Recent studies have indicated that well-designed convolutional neural networks (CNNs) achieve performance comparable to spatial rich models with an ensemble classifier (SRM-EC) in digital image steganalysis. In this paper, we discuss the differences and correlation between a CNN model and an SRM-EC model, and explore how the classification error rate varies with the texture complexity of an image for both models. We then propose an ensemble method that combines a CNN with an SRM-EC by averaging their output classification probabilities. Compared with the state-of-the-art spatial steganalysis performance achieved by maxSRMdZ, the latest variant of SRM-EC, experimental results show that the proposed ensemble further improves accuracy by nearly 2% in detecting S-UNIWARD and WOW on BOSSbase.
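The fusion rule itself is simple probability averaging. The sketch below shows the idea; the function and variable names are illustrative, not taken from the paper's code:

```python
def average_ensemble(p_cnn, p_srm_ec):
    """Hypothetical fusion of two steganalyzers' outputs.

    p_cnn and p_srm_ec are each detector's estimated probability that
    an image contains a hidden payload ("stego"). The ensemble simply
    averages the two probabilities and thresholds at 0.5.
    """
    p = 0.5 * (p_cnn + p_srm_ec)
    return p, ("stego" if p >= 0.5 else "cover")

# One detector is unsure, the other is confident; the average still
# yields a confident joint decision:
p, label = average_ensemble(0.48, 0.90)
```

The appeal of averaging is that the two detectors make partly uncorrelated errors (as the abstract notes, their error rates vary differently with texture complexity), so the combined estimate can beat either one alone.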
Temporal enhancement of graph-based depth estimation method
Dawid Mieloch, A. Dziembowski, Adam Grzelka, O. Stankiewicz, M. Domański
IWSSIP 2017. Pub Date: 2017-05-01. DOI: 10.1109/IWSSIP.2017.7965572
This paper presents a temporal enhancement of a graph-based depth estimation method designed for multiview systems with arbitrarily located cameras. The primary goal of the proposed enhancement is to increase the quality of the estimated depth maps while simultaneously decreasing estimation time. The method consists of two stages: temporal enhancement of the segmentation required by the depth estimation method, and exploitation of depth information from the previous frame in the energy-function minimization. Experiments show that the quality of the estimated depth maps increased for all tested sequences. Even when only one optimization cycle is used, the proposed method yields higher quality than the unmodified method regardless of the latter's number of cycles. The proposed enhancement therefore allows better-quality depth estimation even with a 40% reduction in estimation time.
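The second stage, reusing the previous frame's depth inside the energy minimization, can be illustrated with a generic energy that adds a temporal-consistency term. This is a hedged sketch of the general idea, not the paper's actual graph-based formulation:

```python
def energy(depth, data_cost, depth_prev, beta, lam=1.0):
    """Illustrative energy with a temporal-consistency term.

    depth, data_cost, and depth_prev are equal-length lists (a
    flattened scanline of the depth map). The smoothness term
    penalizes jumps between neighboring pixels; the temporal term
    (weight beta, an assumed parameter) penalizes deviation from the
    previous frame's depth, which also gives the minimization a warm
    start and hence fewer optimization cycles.
    """
    smoothness = lam * sum(abs(a - b) for a, b in zip(depth, depth[1:]))
    temporal = beta * sum(abs(a - b) for a, b in zip(depth, depth_prev))
    return sum(data_cost) + smoothness + temporal

# A candidate identical to the previous frame pays no temporal penalty,
# while a diverging candidate is charged for the change:
prev = [1.0, 1.0, 2.0, 2.0]
e_consistent = energy(prev, [0.0] * 4, prev, beta=5.0)
e_divergent = energy([1.0, 1.0, 3.0, 3.0], [0.0] * 4, prev, beta=5.0)
```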
On using of physical layer parameters of xDSL transceivers for troubleshooting
N. Skaljo, A. Begovic, E. Turajlić, N. Behlilovic
IWSSIP 2017. Pub Date: 2017-05-01. DOI: 10.1109/IWSSIP.2017.7965595
This paper reviews the possibility of increasing the efficiency of existing line-test solutions for troubleshooting IPTV over xDSL, based on the results of experimental research on a real system under commercial exploitation. The paper first describes the main weaknesses of existing troubleshooting tests. It then lists the physical-layer parameters of xDSL transceivers and analyzes how they can be used for more efficient measurement of copper-pair parameters.
Spoken language clustering in the i-vectors space
Stanisław Kacprzak
IWSSIP 2017. Pub Date: 2017-05-01. DOI: 10.1109/IWSSIP.2017.7965607
This paper presents the results of language clustering in i-vector space, a method to determine in an unsupervised manner how many languages are in a data set and which recordings contain the same language. The densest i-vector clusters are found using the DBSCAN algorithm in a low-dimensional space obtained by the t-SNE method. Clustering quality for spherical k-means and the proposed method is evaluated on data from the NIST 2015 i-Vector Challenge, and the usefulness of the obtained clustering is tested in the challenge's evaluation system. The results demonstrate that the proposed method finds 109 dense clusters with low impurity for the 50 target languages.
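The clustering step can be illustrated with a minimal, self-contained DBSCAN. The paper runs DBSCAN on t-SNE-reduced i-vectors; the toy 2-D points and parameter values below are stand-ins for that embedding:

```python
import math

def dbscan(points, eps, min_pts):
    """Minimal DBSCAN over 2-D points (e.g. a t-SNE embedding).

    Returns one label per point: 0, 1, ... for clusters, -1 for
    noise. eps is the neighborhood radius and min_pts the density
    threshold, as in standard DBSCAN.
    """
    def neighbors(i):
        # Indices within eps of point i (includes i itself).
        return [j for j, q in enumerate(points)
                if math.dist(points[i], q) <= eps]

    labels = [None] * len(points)
    cluster = -1
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        nbrs = neighbors(i)
        if len(nbrs) < min_pts:
            labels[i] = -1          # noise; may later become a border point
            continue
        cluster += 1                # i is a core point: start a new cluster
        labels[i] = cluster
        queue = [j for j in nbrs if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster  # former noise becomes a border point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            if len(neighbors(j)) >= min_pts:
                queue.extend(neighbors(j))  # j is core: keep expanding
    return labels

# Two dense groups far apart plus one outlier:
pts = [(0, 0), (0.1, 0), (0, 0.1), (5, 5), (5.1, 5), (5, 5.1), (20, 20)]
labels = dbscan(pts, eps=0.5, min_pts=3)
```

Because DBSCAN does not require the number of clusters in advance and marks sparse points as noise, it fits the paper's goal of discovering how many languages are present without supervision.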
Efficient Schur parametrization of near-stationary stochastic processes
Agnieszka Wielgus, J. Zarzycki
IWSSIP 2017. Pub Date: 2017-05-01. DOI: 10.1109/IWSSIP.2017.7965581
We present efficient Schur parametrization algorithms for a subclass of near-stationary second-order stochastic processes which we call p-stationary processes. This approach allows for complexity reduction of the general linear Schur algorithm in a uniform way and results in a hierarchical class of algorithms, suitable for efficient implementations and a good starting point for nonlinear generalizations.
An approach to image segmentation based on shortest paths in graphs
Andrzej Brzoza, G. Muszynski
IWSSIP 2017. Pub Date: 2017-05-01. DOI: 10.1109/IWSSIP.2017.7965600
Segmentation plays an important role in image processing. In this paper, we attempt to extract information from images using texture analysis. Moreover, we propose a characterization of pixels in images to define a similarity relation between them, based on textural information and on finding shortest paths in a graph representation of the image. To demonstrate the effectiveness of our method, we apply it to the benchmark Berkeley image database and compare it to well-established image segmentation methods (sum and difference histograms for texture classification, the Mean-Shift method, and a mixture of Gaussian distributions). The proposed approach achieves the best segmentation results as measured by distance-based metrics. The experimental results show that our approach is an efficient method for texture analysis and image segmentation.
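The shortest-path ingredient can be sketched with Dijkstra's algorithm on a 4-connected pixel grid. The cost definition and grid below are illustrative assumptions, not the paper's exact construction:

```python
import heapq

def shortest_path_cost(weights, src, dst):
    """Dijkstra over a 4-connected pixel grid.

    weights[r][c] is the (assumed) cost of stepping onto pixel (r, c);
    the returned value is the cheapest total cost from src to dst.
    In a similarity setting like the paper's, a low path cost between
    two pixels suggests they lie in the same textured region, while a
    high cost means every path must cross a dissimilar area.
    """
    rows, cols = len(weights), len(weights[0])
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == dst:
            return d
        if d > dist.get((r, c), float("inf")):
            continue  # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + weights[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(heap, (nd, (nr, nc)))
    return float("inf")

# Cheap to move within the uniform region, expensive to cross into
# the high-cost column on the right:
grid = [[1, 1, 9],
        [1, 1, 9],
        [1, 1, 9]]
same_region = shortest_path_cost(grid, (0, 0), (2, 0))
across_edge = shortest_path_cost(grid, (0, 0), (2, 2))
```

Thresholding such path costs yields the kind of pixel similarity relation the abstract describes.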
Fast HEVC intra coding decision based on statistical cost and corner detection
Biao Min, Zhe Xu, R. Cheung
IWSSIP 2017. Pub Date: 2017-05-01. DOI: 10.1109/IWSSIP.2017.7965584
As the successor of H.264, the High Efficiency Video Coding (HEVC) standard includes various novel techniques, including the Coding Tree Unit (CTU) structure and additional angular modes used in intra coding. These new techniques improve coding efficiency on one hand, while significantly increasing computational complexity on the other. In this paper, we propose a fast intra block partitioning algorithm for HEVC to reduce coding complexity, based on statistical cost and a corner detection algorithm. A block in which corner points are detected is considered a multiple-gradient region and is split into smaller blocks. A block without corner points is treated as non-split when its RD cost is small according to statistics from previous frames. The proposed fast algorithm achieves nearly 63% encoding time reduction with 3.42%, 2.80%, and 2.53% BD-rate loss for the Y, U, and V components, respectively, on average. The experimental results show that the proposed method efficiently makes fast block-partitioning decisions in HEVC intra coding, even though only static parameters are applied to all test sequences.
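The early-decision logic described in the abstract can be summarized as a small rule. This is a hedged sketch; the names and the exact rule are not taken from the authors' implementation:

```python
def partition_decision(has_corner, rd_cost, cost_threshold):
    """Hypothetical fast CU partitioning rule in the spirit of the paper.

    A block containing detected corner points is split early (it is
    assumed to span multiple gradient regions); a block without
    corners whose RD cost falls below a threshold derived from
    previous-frame statistics is terminated early as non-split;
    otherwise the encoder falls back to the full RD search.
    """
    if has_corner:
        return "split"
    if rd_cost < cost_threshold:
        return "non-split"
    return "full-search"
```

Both early exits skip the exhaustive quadtree evaluation, which is where the reported encoding-time saving comes from; the full search remains as a fallback so that ambiguous blocks lose no RD performance.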