Eye Gaze on Scatterplot: Concept and First Results of Recommendations for Exploration of SPLOMs Using Implicit Data Selection
Nils Rodrigues, Lin Shao, Jiazhen Yan, T. Schreck, D. Weiskopf
We propose a three-step concept and visual design for supporting the visual exploration of high-dimensional data in scatterplots through eye-tracking. First, we extract subsets in the underlying data using existing classifications, automated clustering algorithms, or eye-tracking. For the latter, we map gaze to the underlying data dimensions in the scatterplot. Clusters of data points that have been the focus of the viewers’ gaze are marked as clusters of interest (eye-mind hypothesis). In a second step, our concept extracts various statistical and scagnostics properties from these clusters. The third step uses these measures to compare the current data clusters from the main scatterplot to the same data in other dimensions. The results enable analysts to retrieve similar or dissimilar views as guidance to explore the entire data set. We provide a proof-of-concept implementation as a test bench and describe a use case to show a practical application and initial results.
{"title":"Eye Gaze on Scatterplot: Concept and First Results of Recommendations for Exploration of SPLOMs Using Implicit Data Selection","authors":"Nils Rodrigues, Lin Shao, Jiazhen Yan, T. Schreck, D. Weiskopf","doi":"10.1145/3517031.3531165","DOIUrl":"https://doi.org/10.1145/3517031.3531165","url":null,"abstract":"We propose a three-step concept and visual design for supporting the visual exploration of high-dimensional data in scatterplots through eye-tracking. First, we extract subsets in the underlying data using existing classifications, automated clustering algorithms, or eye-tracking. For the latter, we map gaze to the underlying data dimensions in the scatterplot. Clusters of data points that have been the focus of the viewers’ gaze are marked as clusters of interest (eye-mind hypothesis). In a second step, our concept extracts various properties from statistics and scagnostics from the clusters. The third step uses these measures to compare the current data clusters from the main scatterplot to the same data in other dimensions. The results enable analysts to retrieve similar or dissimilar views as guidance to explore the entire data set. We provide a proof-of-concept implementation as a test bench and describe a use case to show a practical application and initial results.","PeriodicalId":339393,"journal":{"name":"2022 Symposium on Eye Tracking Research and Applications","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131635131","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
For Your Eyes Only: Privacy-preserving eye-tracking datasets
Brendan David-John, Kevin R. B. Butler, Eakta Jain
Eye-tracking is a critical source of information for understanding human behavior and developing future mixed-reality technology. Eye-tracking enables applications that classify user activity or predict user intent. However, eye-tracking datasets collected during common virtual reality tasks have also been shown to enable unique user identification, which creates a privacy risk. In this paper, we focus on the problem of user re-identification from eye-tracking features. We adapt standardized privacy definitions of k-anonymity and plausible deniability to protect datasets of eye-tracking features, and evaluate performance against re-identification by a standard biometric identification model on seven VR datasets. Our results demonstrate that re-identification goes down to chance levels for the privatized datasets, even as utility is preserved to levels higher than 72% accuracy in document type classification.
{"title":"For Your Eyes Only: Privacy-preserving eye-tracking datasets","authors":"Brendan David-John, Kevin R. B. Butler, Eakta Jain","doi":"10.1145/3517031.3529618","DOIUrl":"https://doi.org/10.1145/3517031.3529618","url":null,"abstract":"Eye-tracking is a critical source of information for understanding human behavior and developing future mixed-reality technology. Eye-tracking enables applications that classify user activity or predict user intent. However, eye-tracking datasets collected during common virtual reality tasks have also been shown to enable unique user identification, which creates a privacy risk. In this paper, we focus on the problem of user re-identification from eye-tracking features. We adapt standardized privacy definitions of k-anonymity and plausible deniability to protect datasets of eye-tracking features, and evaluate performance against re-identification by a standard biometric identification model on seven VR datasets. Our results demonstrate that re-identification goes down to chance levels for the privatized datasets, even as utility is preserved to levels higher than 72% accuracy in document type classification.","PeriodicalId":339393,"journal":{"name":"2022 Symposium on Eye Tracking Research and Applications","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127608063","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A gaze-based study design to explore how competency evolves during a photo manipulation task
Nora Castner, Béla Umlauf, Ard Kastrati, M. Płomecka, William Schaefer, Enkelejda Kasneci, Z. Bylinskii
ACM Reference Format: Nora Castner, Béla Umlauf, Ard Kastrati, Martyna Plomecka, William Schaefer, Enkelejda Kasneci, and Zoya Bylinskii. 2022. A gaze-based study design to explore how competency evolves during a photo manipulation task. In Symposium on Eye Tracking Research and Applications (ETRA ’22 Technical Abstracts), June 8–11, 2022, Seattle, Washington. ACM, New York, NY, USA, 2 pages. https://doi.org/10.1145/3517031.3531634
{"title":"A gaze-based study design to explore how competency evolves during a photo manipulation task","authors":"Nora Castner, Béla Umlauf, Ard Kastrati, M. Płomecka, William Schaefer, Enkelejda Kasneci, Z. Bylinskii","doi":"10.1145/3517031.3531634","DOIUrl":"https://doi.org/10.1145/3517031.3531634","url":null,"abstract":"ACMReference Format: Nora Castner, Béla Umlauf, Ard Kastrati, Martyna Plomecka, William Schaefer, Enkelejda Kasneci, and Zoya Bylinskii. 2022. A gaze-based study design to explore how competency evolves during a photo manipulation task. In Symposium on Eye Tracking Research and Applications (ETRA ’22 Technical Abstracts), June 8–11, 2022, Seattle, Washington.ACM, New York, NY, USA, 2 pages. https://doi.org/10.1145/3379155.3391320","PeriodicalId":339393,"journal":{"name":"2022 Symposium on Eye Tracking Research and Applications","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130754902","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Introducing a Real-Time Advanced Eye Movements Analysis Pipeline
Gavindya Jayawardena
The Real-Time Advanced Eye Movements Analysis Pipeline (RAEMAP) analyzes traditional positional gaze measurements as well as advanced eye gaze measures. The proposed implementation of RAEMAP includes real-time analysis of fixations, saccades, gaze transition entropy, and the low/high index of pupillary activity. RAEMAP will also provide visualizations of fixations, fixations on AOIs, heatmaps, and dynamic AOI generation in real time. This paper outlines the proposed architecture of RAEMAP.
{"title":"Introducing a Real-Time Advanced Eye Movements Analysis Pipeline","authors":"Gavindya Jayawardena","doi":"10.1145/3517031.3532196","DOIUrl":"https://doi.org/10.1145/3517031.3532196","url":null,"abstract":"Real-Time Advanced Eye Movements Analysis Pipeline (RAEMAP) is an advanced pipeline to analyze traditional positional gaze measurements as well as advanced eye gaze measurements. The proposed implementation of RAEMAP includes real-time analysis of fixations, saccades, gaze transition entropy, and low/high index of pupillary activity. RAEMAP will also provide visualizations of fixations, fixations on AOIs, heatmaps, and dynamic AOI generation in real-time. This paper outlines the proposed architecture of RAEMAP.","PeriodicalId":339393,"journal":{"name":"2022 Symposium on Eye Tracking Research and Applications","volume":"61 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133564078","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Game Audio Impacts on Players’ Visual Attention, Model Performance for Cloud Gaming
Morva Saaty, M. Hashemi
Cloud gaming (CG) is a new approach to delivering a high-quality gaming experience to gamers anywhere, anytime, and on any device. To achieve this goal, CG requires high bandwidth, which is still a major challenge. Much existing research has focused on modeling or predicting the players’ Visual Attention Map (VAM) and allocating bitrate accordingly. Although studies indicate that both audio and video modalities influence human perception, only a few studies have considered audio in cloud-based attention models. This paper demonstrates that the audio features in video games change the players’ VAMs in various game scenarios. Our findings indicate that incorporating game audio improves the accuracy of the predicted attention maps by 13% on average compared to the previous VAMs generated from visual saliency by the Game Attention Model for CG. The audio impact is more evident in video games with fewer visual components or indicators on the screen.
{"title":"Game Audio Impacts on Players’ Visual Attention, Model Performance for Cloud Gaming","authors":"Morva Saaty, M. Hashemi","doi":"10.1145/3517031.3529621","DOIUrl":"https://doi.org/10.1145/3517031.3529621","url":null,"abstract":"Cloud gaming (CG) is a new approach to deliver a high-quality gaming experience to gamers anywhere, anytime, and on any device. To achieve this goal, CG requires a high bandwidth, which is still a major challenge. Many existing research pieces have focused on modeling or predicting the players’ Visual Attention Map (VAM) and allocating bitrate accordingly. Although studies indicate that both modalities of audio and video influence human perception, a few studies considered audio impacts in the cloud-based attention models. This paper demonstrates that the audio features in video games change the players’ VAMs in various game scenarios. Our findings indicated that incorporating game audio improves the accuracy of the predicted attention maps by 13% on average compared to the previous VAMs generated based on visual saliency by Game Attention Model for CG. The audio impact is more evident in video games with fewer visual components or indicators on the screen.","PeriodicalId":339393,"journal":{"name":"2022 Symposium on Eye Tracking Research and Applications","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134569336","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Holographic Single-Pixel Stereo Camera Sensor for Calibration-free Eye-Tracking in Retinal Projection Augmented Reality Glasses
Johannes Meyer, Tobias Wilm, Reinhold Fiess, T. Schlebusch, Wilhelm Stork, Enkelejda Kasneci
Eye-tracking is a key technology for future retinal-projection-based AR glasses, as it enables techniques such as foveated rendering or gaze-driven exit pupil steering, which both increase the system’s overall performance. However, two of the major challenges for video oculography systems are robust gaze estimation in the presence of glasses slippage and the need for frequent sensor calibration. To overcome these challenges, we propose a novel, calibration-free eye-tracking sensor for AR glasses based on a highly transparent holographic optical element (HOE) and a laser scanner. We fabricate a segmented HOE generating two stereo images of the eye region. A single-pixel detector in combination with our stereo reconstruction algorithm is used to precisely calculate the gaze position. In our laboratory setup, the eye-tracking sensor achieves a calibration-free accuracy of 1.35°, highlighting its suitability for consumer AR glasses.
{"title":"A Holographic Single-Pixel Stereo Camera Sensor for Calibration-free Eye-Tracking in Retinal Projection Augmented Reality Glasses","authors":"Johannes Meyer, Tobias Wilm, Reinhold Fiess, T. Schlebusch, Wilhelm Stork, Enkelejda Kasneci","doi":"10.1145/3517031.3529616","DOIUrl":"https://doi.org/10.1145/3517031.3529616","url":null,"abstract":"Eye-tracking is a key technology for future retinal projection based AR glasses as it enables techniques such as foveated rendering or gaze-driven exit pupil steering, which both increases the system’s overall performance. However, two of the major challenges video oculography systems face are robust gaze estimation in the presence of glasses slippage, paired with the necessity of frequent sensor calibration. To overcome these challenges, we propose a novel, calibration-free eye-tracking sensor for AR glasses based on a highly transparent holographic optical element (HOE) and a laser scanner. We fabricate a segmented HOE generating two stereo images of the eye-region. A single-pixel detector in combination with our stereo reconstruction algorithm is used to precisely calculate the gaze position. In our laboratory setup we demonstrate a calibration-free accuracy of 1.35° achieved by our eye-tracking sensor; highlighting the sensor’s suitability for consumer AR glasses.","PeriodicalId":339393,"journal":{"name":"2022 Symposium on Eye Tracking Research and Applications","volume":"67 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123618887","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Geometry-Aware Eye Image-To-Image Translation
Conny Lu, Qian Zhang, K. Krishnakumar, Jixu Chen, H. Fuchs, S. Talathi, Kunlin Liu
Recently, image-to-image translation (I2I) has met with great success in computer vision, but few works have paid attention to the geometric changes that occur during translation. The geometric changes are necessary to reduce the geometric gap between domains, at the cost of breaking the correspondence between translated images and the original ground truth. We propose a novel geometry-aware semi-supervised method to preserve this correspondence while still allowing geometric changes. The proposed method takes a synthetic image-mask pair as input and produces a corresponding real pair. We also utilize an objective function to ensure consistent geometric movement of the image and mask through the translation. Extensive experiments illustrate that our method yields an 11.23% higher mean Intersection-Over-Union than current methods on the downstream eye segmentation task. The generated images show a 15.9% decrease in Fréchet Inception Distance, indicating higher image quality.
{"title":"Geometry-Aware Eye Image-To-Image Translation","authors":"Conny Lu, Qian Zhang, K. Krishnakumar, Jixu Chen, H. Fuchs, S. Talathi, Kunlin Liu","doi":"10.1145/3517031.3532524","DOIUrl":"https://doi.org/10.1145/3517031.3532524","url":null,"abstract":"Recently, image-to-image translation (I2I) has met with great success in computer vision, but few works have paid attention to the geometric changes that occur during translation. The geometric changes are necessary to reduce the geometric gap between domains at the cost of breaking correspondence between translated images and original ground truth. We propose a novel geometry-aware semi-supervised method to preserve this correspondence while still allowing geometric changes. The proposed method takes a synthetic image-mask pair as input and produces a corresponding real pair. We also utilize an objective function to ensure consistent geometric movement of the image and mask through the translation. Extensive experiments illustrate that our method yields a 11.23% higher mean Intersection-Over-Union than the current methods on the downstream eye segmentation task. The generated image has a 15.9% decrease in Frechet Inception Distance indicating higher image quality.","PeriodicalId":339393,"journal":{"name":"2022 Symposium on Eye Tracking Research and Applications","volume":"167 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124673093","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Instant messaging multitasking while reading: a pilot eye-tracking study
L. Altamura, L. Salmerón, Yvonne Kammerer
This pilot study analyzes the reading patterns of 15 German students while they receive instant messages through a smartphone, imitating an online conversation. With this pilot study, we aim to test the eye-tracking setup and methodology employed, in which we specifically analyze the moment when participants return to reading after answering the instant messages. We explore relationships with reading comprehension performance and differences across readers, taking into account individual differences in reading habits and multitasking behavior.
{"title":"Instant messaging multitasking while reading: a pilot eye-tracking study","authors":"L. Altamura, L. Salmerón, Yvonne Kammerer","doi":"10.1145/3517031.3529237","DOIUrl":"https://doi.org/10.1145/3517031.3529237","url":null,"abstract":"This pilot study analyzes the reading patterns of 15 German students while receiving instant messages through a smartphone, imitating an online conversation. With this pilot study, we aim to test the eye-tracking setup and methodology employed, in which we analyze specifically the moment in which participants return to the reading after answering the instant messages. We explore the relationships with reading comprehension performance and differences across readers, considering individual differences regarding reading habits and multitasking behavior.","PeriodicalId":339393,"journal":{"name":"2022 Symposium on Eye Tracking Research and Applications","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130064719","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
LSTMs can distinguish dental expert saccade behavior with high “plaque-urracy”
Nora Castner, Jonas Frankemölle, C. Keutel, F. Huettig, Enkelejda Kasneci
Much of the current expertise literature has found that domain-specific tasks evoke different eye movements. However, research has yet to predict optimal image exploration using saccadic information and to identify and quantify differences in the search strategies of learners, intermediates, and expert practitioners. By employing LSTMs for scanpath classification, we found that saccade features over time could distinguish all groups with high accuracy. The most distinguishing features were saccade velocity peak (72%), length (70%), and velocity average (68%). These findings support the holistic theory of expert visual exploration, which holds that experts quickly process the whole scene initially, using longer and more rapid saccades. The potential to integrate expertise model development from saccadic scanpath features into intelligent tutoring systems is the ultimate inspiration for our research. Additionally, this model is not confined to visual exploration of dental x-rays; rather, it can extend to other medical domains.
{"title":"LSTMs can distinguish dental expert saccade behavior with high ”plaque-urracy”","authors":"Nora Castner, Jonas Frankemölle, C. Keutel, F. Huettig, Enkelejda Kasneci","doi":"10.1145/3517031.3529631","DOIUrl":"https://doi.org/10.1145/3517031.3529631","url":null,"abstract":"Much of the current expertise literature has found that domain specific tasks evoke different eye movements. However, research has yet to predict optimal image exploration using saccadic information and to identify and quantify differences in the search strategies between learners, intermediates, and expert practitioners. By employing LSTMs for scanpath classification, we found saccade features over time could distinguish all groups at high accuracy. The most distinguishing features were saccade velocity peak (72%), length (70%), and velocity average (68%). These findings promote the holistic theory of expert visual exploration that experts can quickly process the whole scene using longer and more rapid saccade behavior initially. The potential to integrate expertise model development from saccadic scanpath features into intelligent tutoring systems is the ultimate inspiration for our research. Additionally, this model is not confined to visual exploration in dental xrays, rather it can extend to other medical domains.","PeriodicalId":339393,"journal":{"name":"2022 Symposium on Eye Tracking Research and Applications","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128959914","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On the Use of Distribution-based Metrics for the Evaluation of Drivers’ Fixation Maps Against Spatial Baselines
Jaime Maldonado, Lino Antoni Giefer
A distinctive characteristic of human driver behavior is the spatial bias of gaze allocation toward the vanishing point of the road. This behavior can be evaluated by comparing fixation maps against a spatial-bias baseline using typical metrics such as the Pearson’s Correlation Coefficient (CC) and the Kullback-Leibler divergence (KL). CC and KL penalize false positives and negatives differently, which implies that they can be affected by the characteristics of the baseline. In this paper, we analyze the use of CC and KL for the evaluation of drivers’ fixation maps against two types of spatial-bias baselines: baselines obtained from recorded fixation maps (data-based) and 2D-Gaussian baselines (function-based). Our results indicate that the use of CC can lead to misleading interpretations due to single fixations outside of the spatial bias area when compared to data-based baselines. Thus, we argue that KL and CC should be considered simultaneously under specific modeling assumptions.
{"title":"On the Use of Distribution-based Metrics for the Evaluation of Drivers’ Fixation Maps Against Spatial Baselines","authors":"Jaime Maldonado, Lino Antoni Giefer","doi":"10.1145/3517031.3529629","DOIUrl":"https://doi.org/10.1145/3517031.3529629","url":null,"abstract":"A distinctive characteristic of human driver behavior is the spatial bias of gaze allocation toward the vanishing point of the road. This behavior can be evaluated by comparing fixation maps against a spatial-bias baseline using typical metrics such as the Pearson’s Correlation Coefficient (CC) and the Kullback-Leibler divergence (KL). CC and KL penalize false positives and negatives differently, which implies that they can be affected by the characteristics of the baseline. In this paper, we analyze the use of CC and KL for the evaluation of drivers’ fixation maps against two types of spatial-bias baselines: baselines obtained from recorded fixation maps (data-based) and 2D-Gaussian baselines (function-based). Our results indicate that the use of CC can lead to misleading interpretations due to single fixations outside of the spatial bias area when compared to data-based baselines. Thus, we argue that KL and CC should be considered simultaneously under specific modeling assumptions.","PeriodicalId":339393,"journal":{"name":"2022 Symposium on Eye Tracking Research and Applications","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117178712","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}