Pub Date: 2016-06-06 | DOI: 10.1109/QoMEX.2016.7498945
He Liu, A. Reibman
"Software to Stress Test Image Quality Estimators," 2016 Eighth International Conference on Quality of Multimedia Experience (QoMEX), pp. 1-6.
An image quality estimator (QE) can be used to improve the performance of a system, but only if its scores are easily interpretable. In this paper, we present software, entitled “Stress Testing Image Quality Estimators (STIQE)”, that systematically explores the performance of a QE, with the goal of enabling users to interpret the QE's scores. Our software allows consistent and reproducible benchmarks of new QEs as they are developed, so the most effective QE for an application can be chosen. We demonstrate that results produced by the software provide new insights into hidden aspects of existing QEs.
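One representative stress test, probing whether a QE responds monotonically to an increasing distortion level, can be sketched as follows; the toy QE (negative MSE) and the noise-ramp distortion are illustrative assumptions, not part of the authors' software:

```python
import numpy as np

def stress_test_monotonicity(qe, image, distort, levels):
    """Apply a distortion at increasing severity and check that the QE's
    scores respond monotonically: one simple stress test in the spirit of
    STIQE (a sketch, not the authors' actual software)."""
    scores = [qe(image, distort(image, lv)) for lv in levels]
    diffs = np.diff(scores)
    monotone = bool(np.all(diffs <= 0)) or bool(np.all(diffs >= 0))
    return monotone, scores

rng = np.random.default_rng(0)
img = rng.uniform(0, 255, (32, 32))
pattern = rng.standard_normal((32, 32))        # fixed noise pattern, scaled by level
distort = lambda im, lv: im + lv * pattern
qe = lambda ref, dst: -np.mean((ref - dst) ** 2)   # toy QE: negative MSE
ok, scores = stress_test_monotonicity(qe, img, distort, [0, 1, 2, 4, 8])
```

A QE that fails such a check (scores that oscillate as distortion grows) is hard to interpret, which is exactly the concern the paper raises.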
Pub Date: 2016-06-06 | DOI: 10.1109/QoMEX.2016.7498966
N. Zacharov, C. Pike, F. Melchior, T. Worch
"Next generation audio system assessment using the multiple stimulus ideal profile method," 2016 Eighth International Conference on Quality of Multimedia Experience (QoMEX), pp. 1-6.
With the development of so-called next generation audio systems, the question of how to evaluate such immersive or object-based systems is of great interest to the industry. This paper presents the multiple stimulus ideal profile method for the practical assessment of next generation sound systems. The approach takes best practices from a number of well-recognised methods, as well as some novel ones from other industries. The paper presents the method together with initial results of a test using specially designed test items to investigate the characteristics of the method. Practical experiences in two labs with different listener groups, along with statistical analysis, are discussed in detail.
Pub Date: 2016-06-06 | DOI: 10.1109/QoMEX.2016.7498929
Ahmed Aldahdooh, M. Barkowsky, P. Callet
"Spatio-temporal error concealment technique for high order multiple description coding schemes including subjective assessment," 2016 Eighth International Conference on Quality of Multimedia Experience (QoMEX), pp. 1-6.
Error resilience (ER) is an important tool in video coding for maximizing the quality of experience (QoE). The prediction process in video coding has become complex, which yields unsatisfactory video quality when NAL-unit packets are lost on error-prone channels. Among the different ER techniques, multiple description coding (MDC) is one of the most promising for this problem. MDC is categorized into different types; in this paper, we focus on temporal MDC and propose a new temporal MDC scheme. In the encoding process, the encoded descriptions contain primary frames and secondary frames (redundant representations). The secondary frames carry motion vectors predicted from previous primary frames, such that the residual signal is set to zero and is not part of the rate-distortion optimization. In the decoding process, a weighted-average error concealment (EC) strategy is proposed to conceal the lost frames. The proposed scheme is subjectively evaluated along with other schemes, and the results show that it differs significantly from most other temporal MDC schemes.
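The weighted-average concealment step can be illustrated with a minimal sketch; the candidate frames and weights below are hypothetical, since the abstract does not specify the paper's exact weighting:

```python
import numpy as np

def conceal_lost_frame(candidates, weights):
    """Weighted-average error concealment (sketch): blend candidate
    reconstructions of a lost frame, e.g. the motion-compensated
    prediction from the surviving description and the nearest decoded
    frames. The weights are a free design choice in this illustration."""
    frames = np.stack([np.asarray(f, dtype=float) for f in candidates])
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                      # normalize so the blend preserves brightness
    return np.tensordot(w, frames, axes=1)

prev = np.full((4, 4), 10.0)             # last correctly decoded frame
pred = np.full((4, 4), 30.0)             # prediction from the surviving description
concealed = conceal_lost_frame([prev, pred], [1.0, 3.0])
```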
Pub Date: 2016-06-06 | DOI: 10.1109/QoMEX.2016.7498936
Lukáš Krasula, K. Fliegel, P. Callet, M. Klima
"On the accuracy of objective image and video quality models: New methodology for performance evaluation," 2016 Eighth International Conference on Quality of Multimedia Experience (QoMEX), pp. 1-6.
There are several standard methods for evaluating the performance of objective quality assessment models against the results of subjective tests. However, all of them suffer from one or more of the following drawbacks: they do not consider the uncertainty in the subjective scores, and thus require the models to make decisions where the correct behavior is not known; they are vulnerable to the quality range of the stimuli in the experiments; and, in order to compare models, they require a mapping of predicted values to the subjective scores, and thus do not compare the models exactly as they are used in real scenarios. In this paper, a new methodology for evaluating the performance of objective models is proposed. The method is based on determining the classification abilities of the models in two scenarios inspired by real applications. It does not suffer from the previously stated drawbacks and makes it easy to evaluate performance on data from multiple subjective experiments. Moreover, techniques to determine the statistical significance of performance differences are suggested. The proposed framework is tested on several selected metrics and datasets, showing its ability to provide complementary information about the models' behavior while remaining consistent with other state-of-the-art methods.
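A minimal version of such a classification-based evaluation, reduced to pairwise better/worse decisions gated by subjective confidence intervals, might look like this (a simplification for illustration, not the paper's actual framework):

```python
from itertools import combinations

def better_worse_accuracy(subj, ci, obj):
    """Pairs of stimuli whose subjective confidence intervals do not
    overlap define ground-truth better/worse decisions; count how often
    the objective model ranks each such pair the same way. No mapping of
    objective scores to the subjective scale is needed."""
    correct = total = 0
    for i, j in combinations(range(len(subj)), 2):
        if abs(subj[i] - subj[j]) <= ci[i] + ci[j]:
            continue                     # statistically indistinguishable pair
        total += 1
        if (subj[i] - subj[j]) * (obj[i] - obj[j]) > 0:
            correct += 1
    return correct / total if total else float("nan")

mos = [1.0, 2.0, 3.0, 4.0, 5.0]          # subjective scores
ci = [0.1] * 5                           # confidence-interval half-widths
acc_good = better_worse_accuracy(mos, ci, [0.2, 0.4, 0.5, 0.8, 0.9])
acc_bad = better_worse_accuracy(mos, ci, [0.9, 0.8, 0.5, 0.4, 0.2])
```

Note how uncertain pairs are simply skipped rather than forcing the model to make a decision where the correct behavior is unknown, which addresses the first drawback listed above.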
Pub Date: 2016-06-06 | DOI: 10.1109/QoMEX.2016.7498930
Amnon Balanov, Arik Schwartz, Y. Moshe
"Reduced-reference image quality assessment based on DCT Subband Similarity," 2016 Eighth International Conference on Quality of Multimedia Experience (QoMEX), pp. 1-6.
Reduced-reference image quality measures aim to estimate the visual quality of a distorted image with only partial information about the “perfect quality” reference image. In this paper, we present a reduced-reference image quality assessment (IQA) metric based on DCT Subband Similarity (RR-DSS). Based on the assumption that human visual perception is adapted for extracting structural information, the proposed technique measures the change in structural information in subbands of the discrete cosine transform (DCT) domain and weights the quality estimates of these subbands. RR-DSS is simple to implement, incurs low computational complexity, and offers a flexible tradeoff between the amount of side information and image quality estimation accuracy. RR-DSS was tested on public image databases and shows excellent correlation with human judgments of quality. It outperforms state-of-the-art reduced-reference IQA techniques and even several full-reference IQA techniques.
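The core RR-DSS idea, comparing per-subband DCT statistics between reference and distorted images with frequency-dependent weights, can be sketched as follows; the SSIM-style similarity term, the 1/(1+u+v) weighting, and the constant c are assumptions for illustration, not the authors' parameters:

```python
import numpy as np

def dct_matrix(n=8):
    # Orthonormal DCT-II basis matrix
    k = np.arange(n)
    m = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    m[0] /= np.sqrt(2)
    return m * np.sqrt(2.0 / n)

def block_dct(img, n=8):
    # n x n block DCT; image dimensions assumed divisible by n
    D = dct_matrix(n)
    out = np.empty(img.shape)
    for y in range(0, img.shape[0], n):
        for x in range(0, img.shape[1], n):
            out[y:y+n, x:x+n] = D @ img[y:y+n, x:x+n] @ D.T
    return out

def subband_similarity(ref, dist, n=8, c=1e-3):
    """Collect coefficient (u, v) of every block into subband (u, v) and
    compare subband spreads with an SSIM-style term, weighting low
    frequencies more heavily (sketch of the RR-DSS idea)."""
    R, T = block_dct(ref.astype(float), n), block_dct(dist.astype(float), n)
    score = wsum = 0.0
    for u in range(n):
        for v in range(n):
            a, b = R[u::n, v::n].std(), T[u::n, v::n].std()
            sim = (2 * a * b + c) / (a * a + b * b + c)
            w = 1.0 / (1 + u + v)        # heavier weight on low frequencies
            score += w * sim
            wsum += w
    return score / wsum

rng = np.random.default_rng(0)
img = rng.uniform(0, 255, (32, 32))
ident = subband_similarity(img, img)
noisy = subband_similarity(img, img + 25 * rng.standard_normal((32, 32)))
```

Only the per-subband statistics of the reference are needed at the receiver, which is what makes the approach reduced-reference: the side information grows with the number of subbands, not the image size.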
Pub Date: 2016-06-06 | DOI: 10.1109/QoMEX.2016.7498958
Rafael Rodrigues, P. Počta, H. Melvin, Manuela Pereira, A. Pinheiro
"MPEG DASH - some QoE-based insights into the tradeoff between audio and video for live music concert streaming under congested network conditions," 2016 Eighth International Conference on Quality of Multimedia Experience (QoMEX), pp. 1-6.
The rapid adoption of MPEG-DASH is testament to its core design principles, which enable the client to make informed decisions about media encoding representations based on network conditions, device type, and preferences. Typically, the focus has mostly been on the different video quality representations rather than on audio. However, for devices with small screens, the relative difference in the bandwidth budget allocated to the two streams may not be that large. This is especially the case if high-quality audio is used, and in this scenario we argue that increased focus should be given to the bit-rate representations for audio. Arising from this, we have designed and implemented a subjective experiment to evaluate and analyse the possible effect of using different audio quality levels. In particular, we investigate the possibility of providing reduced audio quality so as to free up bandwidth for video under certain conditions. The experiment was implemented for live music concert scenarios transmitted over mobile networks, and we suggest that the results will be of significant interest to DASH content creators when considering the bandwidth tradeoff between audio and video.
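The audio/video bandwidth tradeoff under a fixed budget can be illustrated with a toy representation picker; the bitrate ladders and the greedy rule below are hypothetical, and real DASH clients use far richer adaptation logic:

```python
def split_budget(audio_kbps, video_kbps, budget_kbps):
    """For each audio representation, find the best video representation
    that still fits the budget, and return the pair that maximizes video
    bitrate (illustrative only)."""
    best = None
    for a in sorted(audio_kbps):
        fitting = [v for v in video_kbps if a + v <= budget_kbps]
        if not fitting:
            continue
        v = max(fitting)
        if best is None or v > best[1]:
            best = (a, v)                # a lower audio rate freed room for video
    return best

choice = split_budget([64, 128, 256], [500, 1000, 2000], budget_kbps=1200)
```

Under this rule the client drops to the lowest audio representation whenever doing so unlocks a higher video rung, which is precisely the tradeoff whose perceptual cost the paper's subjective experiment measures.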
Pub Date: 2016-06-06 | DOI: 10.1109/QoMEX.2016.7498954
Aditee Shrotre, Lina Karam
"Visual quality assessment of reconstructed background images," 2016 Eighth International Conference on Quality of Multimedia Experience (QoMEX), pp. 1-6.
A clean background image is of great importance in multiple applications such as video surveillance, object tracking, and context-based video encoding, but acquiring a clean background image in public areas is seldom possible. Many algorithms have been developed to initialize the background from videos and images. This paper presents a database consisting of 13 different scenes that can be used for benchmarking the performance of background initialization algorithms. We also conducted a subjective study on the perceptual quality of background images that are reconstructed using existing background initialization algorithms. The obtained subjective scores are used to evaluate existing image quality metrics and their capability in predicting the perceived quality of reconstructed background images.
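A classic baseline of the kind such a database is meant to benchmark is temporal-median background initialization, sketched here for illustration (it is not an algorithm from the paper):

```python
import numpy as np

def init_background(frames):
    """Temporal median across frames: the median suppresses transient
    foreground objects as long as each pixel shows the background in
    more than half of the frames."""
    return np.median(np.stack([np.asarray(f, float) for f in frames]), axis=0)

# Static scene of value 10; a foreground object occludes pixel (2, 2)
# in 3 of the 9 frames
frames = [np.full((8, 8), 10.0) for _ in range(9)]
for t in range(3):
    frames[t][2, 2] = 255.0
bg = init_background(frames)
```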
Pub Date: 2016-06-06 | DOI: 10.1109/QoMEX.2016.7498957
Miguel Fidalgo-Fernandes, Marco V. Bernardo, A. Pinheiro
"A bag of words description scheme based on SSIM for image quality assessment," 2016 Eighth International Conference on Quality of Multimedia Experience (QoMEX), pp. 1-6.
This paper addresses the need to use knowledge about human-perceived quality by adding machine learning models to objective quality estimation. A new technique is proposed based on dividing images into several cells in which the mean of the SSIM metric is computed. A sliding window over the grid of cells defines a set of image descriptors that are aggregated using a bag of words. This model improves on the typical values provided by SSIM and defines a new path for applying machine learning to image quality evaluation.
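The per-cell SSIM and bag-of-words aggregation can be sketched as follows; the fixed quality-bin codebook here is a toy stand-in for the vocabulary the paper learns from data:

```python
import numpy as np

def cell_mean_ssim(ref, dist, cell=8, c1=6.5025, c2=58.5225):
    """SSIM computed independently over each cell of a grid (sketch; the
    constants are the usual SSIM defaults for 8-bit images)."""
    h, w = ref.shape
    out = np.empty((h // cell, w // cell))
    for i in range(h // cell):
        for j in range(w // cell):
            a = ref[i*cell:(i+1)*cell, j*cell:(j+1)*cell].astype(float)
            b = dist[i*cell:(i+1)*cell, j*cell:(j+1)*cell].astype(float)
            cov = ((a - a.mean()) * (b - b.mean())).mean()
            out[i, j] = ((2*a.mean()*b.mean() + c1) * (2*cov + c2)) / (
                (a.mean()**2 + b.mean()**2 + c1) * (a.var() + b.var() + c2))
    return out

def bag_of_words(cell_scores, win=2, edges=(0.25, 0.5, 0.75)):
    """Slide a win x win window over the cell grid to form descriptors and
    histogram them into quality 'words' (toy codebook: quality bins)."""
    H, W = cell_scores.shape
    desc = np.array([cell_scores[i:i+win, j:j+win].mean()
                     for i in range(H - win + 1)
                     for j in range(W - win + 1)])
    words = np.digitize(desc, edges)
    return np.bincount(words, minlength=len(edges) + 1) / len(words)

rng = np.random.default_rng(1)
img = rng.uniform(0, 255, (32, 32))
hist = bag_of_words(cell_mean_ssim(img, img))   # identical images
```

The resulting histogram is the image-pair descriptor; a downstream regressor mapping histograms to subjective scores is where the machine learning enters.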
Pub Date: 2016-06-06 | DOI: 10.1109/QoMEX.2016.7498946
Wei Zhou, Ning Liao, Zhibo Chen, Weiping Li
"3D-HEVC visual quality assessment: Database and bitstream model," 2016 Eighth International Conference on Quality of Multimedia Experience (QoMEX), pp. 1-6.
Visual quality assessment of 3D/stereoscopic video (3D VQA) is significant for both quality monitoring and optimization of existing 3D video services. In this paper, we build a 3D video database based on the latest 3D-HEVC video coding standard to investigate the relationship among video quality, depth quality, and overall quality of experience (QoE) of 3D/stereoscopic video. We also analyze the pivotal factors for video and depth quality. Moreover, we develop a no-reference, bitstream-level objective video quality assessment model for 3D-HEVC, which utilizes key features extracted from the 3D video bitstreams to assess the perceived quality of stereoscopic video. The model is verified to be effective on our database compared with widely used 2D full-reference quality metrics as well as a state-of-the-art 3D full-reference pixel-level video quality metric.
Pub Date: 2016-06-06 | DOI: 10.1109/QoMEX.2016.7498927
Yucheng Zhu, Guangtao Zhai, Ke Gu, Zhaohui Che
"Closing the gap: Visual quality assessment considering viewing conditions," 2016 Eighth International Conference on Quality of Multimedia Experience (QoMEX), pp. 1-6.
Most existing visual quality assessment algorithms are tested on standard databases created under controlled viewing conditions (e.g., display device, viewing distance, and lighting). This implies that all the recorded subjective scores are only valid for the specific settings used in the database. However, with the prevalence of mobile devices, practical viewing environments can vary significantly from moment to moment. It is our daily experience that the same image can look drastically different on different devices, or under changed viewing distance and/or lighting conditions. In other words, in current quality assessment research a gap exists between the eyes and the visual content behind the screen. Therefore, in this work, we perform subjective quality evaluation under varied actual viewing conditions. To make the research reproducible, we build a prototype system to record what the eyes really see on the screen and construct a viewing-environment-changed image database, which will be made available to the public. Meanwhile, we design a dedicated and effective environment-assessing algorithm. We believe that this work will benefit the research of visual quality assessment towards more practical applications.