EyeMSA (https://doi.org/10.1145/3204493.3204565)
Michael Burch, K. Kurzhals, Niklas Kleinhans, D. Weiskopf
Eye movement data can be regarded as a set of scan paths, each corresponding to the visual scanning strategy of one study participant. Finding common subsequences in these scan paths is challenging since they typically differ in temporal length, do not consist of the same number of fixations, and do not lead along similar stimulus regions. In this paper we describe a technique based on pairwise and multiple sequence alignment that supports a data analyst in seeing the most important patterns in the data. To reach this goal, the scan paths are first transformed into sequences of characters based on metrics as well as spatial and temporal aggregations. The result of this algorithmic data transformation is used as input for an interactive consensus matrix visualization. We illustrate the usefulness of the concepts by applying them to previously recorded eye movement data from route-finding tasks in public transport maps.
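The paper's pipeline is not published as code here; the following is a minimal sketch of the two algorithmic steps the abstract names, encoding fixations as characters over a spatial grid and pairwise alignment via Needleman-Wunsch. Grid size, alphabet, and scoring parameters are illustrative assumptions, not values from the paper.

```python
# Sketch: encode fixations as AOI characters over a grid, then score two
# scan paths with Needleman-Wunsch global alignment.

def encode_scanpath(fixations, cell=100, cols=4):
    """Map (x, y) fixations to characters, one letter per grid cell (assumed AOIs)."""
    return "".join(chr(ord("A") + (int(x // cell) + cols * int(y // cell)) % 26)
                   for x, y in fixations)

def align_score(a, b, match=2, mismatch=-1, gap=-2):
    """Needleman-Wunsch alignment score of two AOI character sequences."""
    n, m = len(a), len(b)
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap
    for j in range(1, m + 1):
        score[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = match if a[i - 1] == b[j - 1] else mismatch
            score[i][j] = max(score[i - 1][j - 1] + sub,
                              score[i - 1][j] + gap,
                              score[i][j - 1] + gap)
    return score[n][m]

s1 = encode_scanpath([(20, 30), (150, 40), (160, 220)])
s2 = encode_scanpath([(25, 35), (155, 45), (300, 210)])
print(s1, s2, align_score(s1, s2))
```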
{"title":"EyeMSA","authors":"Michael Burch, K. Kurzhals, Niklas Kleinhans, D. Weiskopf","doi":"10.1145/3204493.3204565","DOIUrl":"https://doi.org/10.1145/3204493.3204565","url":null,"abstract":"Eye movement data can be regarded as a set of scan paths, each corresponding to one of the visual scanning strategies of a certain study participant. Finding common subsequences in those scan paths is a challenging task since they are typically not equally temporally long, do not consist of the same number of fixations, or do not lead along similar stimulus regions. In this paper we describe a technique based on pairwise and multiple sequence alignment to support a data analyst to see the most important patterns in the data. To reach this goal the scan paths are first transformed into a sequence of characters based on metrics as well as spatial and temporal aggregations. The result of the algorithmic data transformation is used as input for an interactive consensus matrix visualization. We illustrate the usefulness of the concepts by applying it to formerly recorded eye movement data investigating route finding tasks in public transport maps.","PeriodicalId":237808,"journal":{"name":"Proceedings of the 2018 ACM Symposium on Eye Tracking Research & Applications","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133527817","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
EyeMR (https://doi.org/10.1145/3204493.3208336)
Tim Claudius Stratmann, Uwe Gruenefeld, Susanne C.J. Boll
Mixed Reality devices can either augment reality (AR) or create completely virtual realities (VR). Combined with head-mounted devices and eye tracking, they enable users to interact with these systems in novel ways. However, current eye-tracking systems are expensive and limited in their interaction with virtual content. In this paper, we present EyeMR, a low-cost system (below $100) that enables researchers to rapidly prototype new techniques for eye and gaze interaction. Our system supports mono- and binocular tracking (using Pupil Capture) and includes a Unity framework to support the fast development of new interaction techniques. We argue for the usefulness of EyeMR based on the results of a user evaluation with HCI experts.
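EyeMR's own framework targets Unity; as a rough illustration of how a prototype can consume Pupil Capture's gaze stream, here is a sketch using Pupil's documented ZeroMQ network API. The default ports and topic names are assumptions about a standard Pupil Capture setup; this is not EyeMR code.

```python
# Sketch: subscribe to gaze data from a running Pupil Capture instance over
# its ZeroMQ network API (Pupil Remote assumed on its default port 50020).
import zmq
import msgpack

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")

pupil_remote.send_string("SUB_PORT")      # ask Pupil Remote for the SUB port
sub_port = pupil_remote.recv_string()

subscriber = ctx.socket(zmq.SUB)
subscriber.connect(f"tcp://127.0.0.1:{sub_port}")
subscriber.setsockopt_string(zmq.SUBSCRIBE, "gaze.")  # all gaze topics

while True:
    topic, payload = subscriber.recv_multipart()
    gaze = msgpack.unpackb(payload, raw=False)
    # normalized gaze position and tracker confidence, e.g. to drive a cursor
    print(gaze["norm_pos"], gaze["confidence"])
```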
{"title":"EyeMR","authors":"Tim Claudius Stratmann, Uwe Gruenefeld, Susanne C.J. Boll","doi":"10.1145/3204493.3208336","DOIUrl":"https://doi.org/10.1145/3204493.3208336","url":null,"abstract":"Mixed Reality devices can either augment reality (AR) or create completely virtual realities (VR). Combined with head-mounted devices and eye-tracking, they enable users to interact with these systems in novel ways. However, current eye-tracking systems are expensive and limited in the interaction with virtual content. In this paper, we present EyeMR, a low-cost system (below 100$) that enables researchers to rapidly prototype new techniques for eye and gaze interactions. Our system supports mono- and binocular tracking (using Pupil Capture) and includes a Unity framework to support the fast development of new interaction techniques. We argue for the usefulness of EyeMR based on results of a user evaluation with HCI experts.","PeriodicalId":237808,"journal":{"name":"Proceedings of the 2018 ACM Symposium on Eye Tracking Research & Applications","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122237100","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An eye gaze model for seismic interpretation support (https://doi.org/10.1145/3204493.3204554)
Vagner Figuerêdo de Santana, J. Ferreira, R. Paula, Renato Cerqueira
Designing systems to offer support to experts during cognitive intensive tasks at the right time is still a challenging endeavor, despite years of research progress in the area. This paper proposes a gaze model based on eye tracking empirical data to identify when a system should proactively interact with the expert during visual inspection tasks. The gaze model derives from the analyses of a user study where 11 seismic interpreters were asked to perform the visual inspection task of seismic images from known and unknown basins. The eye tracking fixation patterns were triangulated with pupil dilations and thinking-aloud data. Results show that cumulative saccadic distances allow identifying when additional information could be offered to support seismic interpreters, changing the visual search behavior from exploratory to goal-directed.
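A minimal sketch of the trigger idea described above: accumulate saccadic amplitudes over the fixation sequence and flag when additional support could be offered. The pixel threshold is a made-up placeholder, not a value calibrated in the study.

```python
# Sketch: flag when cumulative saccadic distance suggests the search is still
# exploratory, so the system could proactively offer support.
from math import hypot

def cumulative_saccadic_distance(fixations):
    """Sum of Euclidean distances between consecutive fixation centers (pixels)."""
    return sum(hypot(x2 - x1, y2 - y1)
               for (x1, y1), (x2, y2) in zip(fixations, fixations[1:]))

def should_offer_support(fixations, threshold_px=3000):
    """threshold_px is a hypothetical cutoff, not the study's calibrated value."""
    return cumulative_saccadic_distance(fixations) > threshold_px

print(should_offer_support([(100, 100), (900, 200), (150, 700), (800, 650)]))
```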
{"title":"An eye gaze model for seismic interpretation support","authors":"Vagner Figuerêdo de Santana, J. Ferreira, R. Paula, Renato Cerqueira","doi":"10.1145/3204493.3204554","DOIUrl":"https://doi.org/10.1145/3204493.3204554","url":null,"abstract":"Designing systems to offer support to experts during cognitive intensive tasks at the right time is still a challenging endeavor, despite years of research progress in the area. This paper proposes a gaze model based on eye tracking empirical data to identify when a system should proactively interact with the expert during visual inspection tasks. The gaze model derives from the analyses of a user study where 11 seismic interpreters were asked to perform the visual inspection task of seismic images from known and unknown basins. The eye tracking fixation patterns were triangulated with pupil dilations and thinking-aloud data. Results show that cumulative saccadic distances allow identifying when additional information could be offered to support seismic interpreters, changing the visual search behavior from exploratory to goal-directed.","PeriodicalId":237808,"journal":{"name":"Proceedings of the 2018 ACM Symposium on Eye Tracking Research & Applications","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129039402","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Evaluating similarity measures for gaze patterns in the context of representational competence in physics education (https://doi.org/10.1145/3204493.3204564)
Saleh Mozaffari, P. Klein, J. Viiri, Sheraz Ahmed, J. Kuhn, A. Dengel
The competent handling of representations is required for understanding physics concepts, developing problem-solving skills, and achieving scientific expertise. Using eye-tracking methodology, this paper makes two contributions. First, we investigated the representational preferences of students with different levels of knowledge (experts, intermediates, and novices) in the domain of physics problem solving. The results reveal that experts prefer vector representations over the others, that all groups use table representations to a similar degree, and that diagram representations are used least. Second, we evaluated three similarity measures: Levenshtein distance, transition entropy, and Jensen-Shannon divergence. Recursive Feature Elimination suggests that Jensen-Shannon divergence is the most discriminating of the three features. However, an analysis of the features' mutual dependency shows that transition entropy links the other two: it shares mutual information with Levenshtein distance (Maximal Information Coefficient = 0.44) and correlates with Jensen-Shannon divergence (r(18313) = 0.70, p < .001).
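For reference, the three similarity measures can be computed on AOI-label sequences roughly as follows. This is a pure-Python sketch; the normalization choices are illustrative assumptions, not the paper's exact definitions.

```python
# Sketch: the three gaze-sequence similarity measures on AOI-label sequences.
from collections import Counter
from math import log2

def levenshtein(a, b):
    """Edit distance between two AOI sequences."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[-1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def transition_entropy(seq):
    """Shannon entropy of the AOI-to-AOI transition distribution."""
    trans = Counter(zip(seq, seq[1:]))
    total = sum(trans.values())
    return -sum(c / total * log2(c / total) for c in trans.values())

def jensen_shannon(p_seq, q_seq):
    """JS divergence between the AOI frequency distributions of two sequences."""
    keys = set(p_seq) | set(q_seq)
    cp, cq = Counter(p_seq), Counter(q_seq)
    p = {k: cp[k] / len(p_seq) for k in keys}
    q = {k: cq[k] / len(q_seq) for k in keys}
    m = {k: (p[k] + q[k]) / 2 for k in keys}
    kl = lambda a, b: sum(a[k] * log2(a[k] / b[k]) for k in keys if a[k] > 0)
    return (kl(p, m) + kl(q, m)) / 2

s, t = "VTDVVT", "TTDVGT"  # V=vector, T=table, D=diagram, G=graph (hypothetical)
print(levenshtein(s, t), transition_entropy(s), jensen_shannon(s, t))
```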
Development of diagnostic performance & visual processing in different types of radiological expertise (https://doi.org/10.1145/3204493.3204562)
P. Kasprowski, Katarzyna Harężlak, S. Kasprowska
The aim of this research was to compare visual patterns during radiograph examination across groups of people with different levels and different types of expertise. Introducing the latter comparative base is the original contribution of this study. The residents and specialists were trained in the medical diagnosis of X-rays, so for these two groups it was possible to compare visual patterns between observers with different levels of the same expertise type. The radiographers who took part in the examination, by contrast, had experience in reading and evaluating X-ray quality due to the nature of their daily work, but were not trained in diagnosis. Involving this group gave our research a new opportunity to explore the eye movements obtained when examining X-rays for both medical diagnosis and quality assessment, which may be treated as different types of expertise. We found that, despite their low diagnostic performance, the radiographers' eye movement characteristics were more similar to the specialists' than those of the residents were. It may be inferred that people with different types of expertise, after gaining a certain level of experience (or practice), may develop similar visual patterns; this is the original conclusion of the research.
{"title":"Development of diagnostic performance & visual processing in different types of radiological expertise","authors":"P. Kasprowski, Katarzyna Harężlak, S. Kasprowska","doi":"10.1145/3204493.3204562","DOIUrl":"https://doi.org/10.1145/3204493.3204562","url":null,"abstract":"The aim of this research was to compare visual patterns while examining radiographs in groups of people with different levels and different types of expertise. Introducing the latter comparative base is the original contribution of these studies. The residents and specialists were trained in medical diagnosing of X-Rays and for these two groups it was possible to compare visual patterns between observers with different level of the same expertise type. On the other hand, the radiographers who took part in the examination - due to specific of their daily work - had experience in reading and evaluating X-Rays quality and were not trained in diagnosing. Involving this group created in our research the new opportunity to explore eye movements obtained when examining X-Ray for both medical diagnosing and quality assessment purposes, which may be treated as different types of expertise. We found that, despite the low diagnosing performance, the radiographers eye movement characteristics were more similar to the specialists than eye movement characteristics of the residents. It may be inferred that people with different type of expertise, yet after gaining a certain level of experience (or practise), may develop similar visual patterns which is the original conclusion of the research.","PeriodicalId":237808,"journal":{"name":"Proceedings of the 2018 ACM Symposium on Eye Tracking Research & Applications","volume":"111 9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122631845","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A visual comparison of gaze behavior from pedestrians and cyclists (https://doi.org/10.1145/3204493.3214307)
Mathias Trefzger, Tanja Blascheck, Michael Raschke, Sarah Hausmann, T. Schlegel
In this paper, we contribute an eye tracking study conducted with pedestrians and cyclists. We apply a visual analytics-based method to inspect pedestrians' and cyclists' gaze behavior together with video recordings and accelerometer data. This multi-modal method allows us to explore patterns and extract common eye movement strategies. Our results show that participants paid most attention to the path itself; advertisements did not distract participants; participants focused more on pedestrians than on cyclists; pedestrians performed more shoulder checks than cyclists did; and we extracted common gaze sequences. Such an experiment in a real-world traffic environment allows us to better understand the realistic behavior of pedestrians and cyclists.
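One simple way to extract common gaze sequences of the kind reported here is to count frequent AOI n-grams across participants. The following sketch assumes hypothetical AOI labels and is not the paper's visual analytics method.

```python
# Sketch: extract common gaze sequences as the most frequent AOI n-grams
# across participants' scan paths.
from collections import Counter

def common_gaze_sequences(scanpaths, n=3, top=5):
    grams = Counter(tuple(path[i:i + n])
                    for path in scanpaths
                    for i in range(len(path) - n + 1))
    return grams.most_common(top)

paths = [["path", "pedestrian", "path", "sign", "path"],
         ["path", "pedestrian", "path", "cyclist", "path"]]
print(common_gaze_sequences(paths))  # ('path', 'pedestrian', 'path') occurs twice
```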
{"title":"A visual comparison of gaze behavior from pedestrians and cyclists","authors":"Mathias Trefzger, Tanja Blascheck, Michael Raschke, Sarah Hausmann, T. Schlegel","doi":"10.1145/3204493.3214307","DOIUrl":"https://doi.org/10.1145/3204493.3214307","url":null,"abstract":"In this paper, we contribute an eye tracking study conducted with pedestrians and cyclists. We apply a visual analytics-based method to inspect pedestrians' and cyclists' gaze behavior as well as video recordings and accelerometer data. This method using multi-modal data allows us to explore patterns and extract common eye movement strategies. Our results are that participants paid most attention to the path itself; advertisements do not distract participants; participants focus more on pedestrians than on cyclists; pedestrians perform more shoulder checks than cyclists do; and we extracted common gaze sequences. Such an experiment in a real-world traffic environment allows us to understand realistic behavior of pedestrians and cyclists better.","PeriodicalId":237808,"journal":{"name":"Proceedings of the 2018 ACM Symposium on Eye Tracking Research & Applications","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115731755","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Asynchronous gaze sharing: towards a dynamic help system to support learners during program comprehension (https://doi.org/10.1145/3204493.3207421)
Fabian Deitelhoff
To participate in the society of a rapidly changing world, learning the fundamentals of programming is important. However, learning to program is challenging for many novices, and reading source code is one major obstacle in this challenge. The primary research objective of my dissertation is to develop a help system based on historical and interactive eye tracking data that helps novices master program comprehension. Helping novices requires detecting problematic situations during programming tasks; to this end, a classifier splits novices into successful and unsuccessful participants based on their answers to program comprehension tasks. One set of features for this classifier is the story-reading and execution-reading order. The first step in my dissertation is creating a classifier for this reading-order problem. The current status of this step is the analysis of eye tracking datasets of novices and experts.
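A sketch of the planned classification step under stated assumptions: the feature names (story-order ratio, execution-order ratio, regression count) are hypothetical placeholders, and the model choice is mine, not the dissertation's.

```python
# Sketch: classify novices as successful/unsuccessful from reading-order features.
from sklearn.ensemble import RandomForestClassifier

# One row per participant: [story_order_ratio, execution_order_ratio, regressions]
X = [[0.7, 0.2, 12],
     [0.3, 0.6, 4],
     [0.8, 0.1, 15],
     [0.2, 0.7, 3]]
y = [0, 1, 0, 1]  # 1 = answered the comprehension tasks successfully

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict([[0.4, 0.5, 6]]))
```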
{"title":"Asynchronous gaze sharing: towards a dynamic help system to support learners during program comprehension","authors":"Fabian Deitelhoff","doi":"10.1145/3204493.3207421","DOIUrl":"https://doi.org/10.1145/3204493.3207421","url":null,"abstract":"To participate in a society of a rapidly changing world, learning fundamentals of programming is important. However, learning to program is challenging for many novices and reading source code is one major obstacle in this challenge. The primary research objective of my dissertation is developing a help system based on historical and interactive eye tracking data to help novices master program comprehension. Helping novices requires detecting problematic situations while solving programming tasks using a classifier to split novices into successful/unsuccessful participants based on the answers given to program comprehension tasks. One set of features of this classifier is the story reading and execution reading order. The first step in my dissertation is creating a classifier for the reading order problem. The current status of this step is analyzing eye tracking datasets of novices and experts.","PeriodicalId":237808,"journal":{"name":"Proceedings of the 2018 ACM Symposium on Eye Tracking Research & Applications","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130942428","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Suitability of calibration polynomials for eye-tracking data with simulated fixation inaccuracies (https://doi.org/10.1145/3204493.3204586)
William Rosengren, M. Nyström, B. Hammar, M. Stridh
Current video-based eye trackers are not suited for calibration of patients who cannot produce stable and accurate fixations. Reliable calibration is crucial in order to make repeatable recordings, which in turn are important to accurately measure the effects of a medical intervention. To test the suitability of different calibration polynomials for such patients, inaccurate calibration data were simulated using a geometric model of the EyeLink 1000 Plus desktop mode setup. This model is used to map eye position features to screen coordinates, creating screen data with known eye tracker data. This allows for objective evaluation of gaze estimation performance over the entire computer screen. Results show that the choice of calibration polynomial is crucial in order to ensure a high repeatability across measurements from patients who are hard to calibrate. Higher order calibration polynomials resulted in poor gaze estimation even for small simulated fixation inaccuracies.
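A minimal sketch of the kind of calibration polynomial being evaluated: fit a second-order bivariate polynomial from simulated eye-position features to known screen targets with least squares. The target layout and noise model are assumptions for illustration, not the paper's EyeLink-based geometric simulation.

```python
# Sketch: fit a second-order calibration polynomial from eye-position features
# to screen coordinates with least squares, on simulated noisy calibration data.
import numpy as np

def design_matrix(ex, ey):
    """Second-order bivariate polynomial terms."""
    return np.column_stack([np.ones_like(ex), ex, ey, ex * ey, ex**2, ey**2])

rng = np.random.default_rng(0)
targets = np.array([(x, y) for x in (0.1, 0.5, 0.9) for y in (0.1, 0.5, 0.9)])
eye = targets + rng.normal(scale=0.02, size=targets.shape)  # fixation inaccuracy

A = design_matrix(eye[:, 0], eye[:, 1])
coef, *_ = np.linalg.lstsq(A, targets, rcond=None)  # one coefficient column per axis

estimated = A @ coef
print(np.abs(estimated - targets).max())  # gaze estimation error at the targets
```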
{"title":"Suitability of calibration polynomials for eye-tracking data with simulated fixation inaccuracies","authors":"William Rosengren, M. Nyström, B. Hammar, M. Stridh","doi":"10.1145/3204493.3204586","DOIUrl":"https://doi.org/10.1145/3204493.3204586","url":null,"abstract":"Current video-based eye trackers are not suited for calibration of patients who cannot produce stable and accurate fixations. Reliable calibration is crucial in order to make repeatable recordings, which in turn are important to accurately measure the effects of a medical intervention. To test the suitability of different calibration polynomials for such patients, inaccurate calibration data were simulated using a geometric model of the EyeLink 1000 Plus desktop mode setup. This model is used to map eye position features to screen coordinates, creating screen data with known eye tracker data. This allows for objective evaluation of gaze estimation performance over the entire computer screen. Results show that the choice of calibration polynomial is crucial in order to ensure a high repeatability across measurements from patients who are hard to calibrate. Higher order calibration polynomials resulted in poor gaze estimation even for small simulated fixation inaccuracies.","PeriodicalId":237808,"journal":{"name":"Proceedings of the 2018 ACM Symposium on Eye Tracking Research & Applications","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128449300","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Training operational monitoring in future ATCOs using eye tracking: extended abstract (https://doi.org/10.1145/3204493.3207412)
Carolina Barzantny
Improved technological possibilities continue to increase the significance of operational monitoring in air traffic control (ATC). The role of the air traffic controller (ATCO) will change in that controllers will have to monitor the operations of an automated system for failures. In order to take over control when automation fails, future ATCOs will need to be trained. While current ATC training is mainly based on performance indicators, this study focuses on the benefit of using eye tracking in future ATC training. Using a low-fidelity operational monitoring task, a model of how attention should be allocated in case of a malfunction will be derived. Based on this model, one group of ATC novices will receive training on how to allocate their attention appropriately (treatment); the other group will receive no training (control). Eye movements will be recorded to investigate how attention is allocated and whether the training is successful. Performance measures will be used to evaluate the effectiveness of the training.
{"title":"Training operational monitoring in future ATCOs using eye tracking: extended abstract","authors":"Carolina Barzantny","doi":"10.1145/3204493.3207412","DOIUrl":"https://doi.org/10.1145/3204493.3207412","url":null,"abstract":"Improved technological possibilities continue to increase the significance of operational monitoring in air traffic control (ATC). The role of the air traffic controller (ATCO) will change in that they will have to monitor the operations of an automated system for failures. In order to take over control when automation fails, future ATCOs will need to be trained. While current ATC training is mainly based on performance indicators, this study will focus on the benefit of using eye tracking in future ATC training. Utilizing a low-fidelity operational monitoring task, a model of how attention should be allocated in case of malfunction will be derived. Based on this model, one group of ATC novices will receive training on how to allocate their attention appropriately (treatment). The other group will receive no training (control). Eye movements will be recorded to investigate how attention is allocated and if the training is successful. Performance measures will be used to evaluate the effectiveness of the training.","PeriodicalId":237808,"journal":{"name":"Proceedings of the 2018 ACM Symposium on Eye Tracking Research & Applications","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131124378","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
How many words is a picture worth?: attention allocation on thumbnails versus title text regions (https://doi.org/10.1145/3204493.3204571)
Chaitra Yangandul, S. Paryani, Madison Le, Eakta Jain
Cognitive scientists and psychologists have long noted the "picture superiority effect", that is, pictorial content is more likely to be remembered and more likely to lead to an increased understanding of the material. We investigated the relative importance of pictorial regions versus textual regions on a website where pictures and text co-occur in a very structured manner: video content sharing websites. We tracked participants' eye movements as they performed a casual browsing task, that is, selecting a video to watch. We found that participants allocated almost twice as much attention to thumbnails as to title text regions. They also tended to look at the thumbnail images before the title text, as predicted by the picture superiority effect. These results have implications for both user experience designers as well as video content creators.
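As a small illustration of how such attention allocation can be quantified, total fixation duration per AOI can be compared directly. The AOI labels and durations below are invented, not the study's data.

```python
# Sketch: total fixation duration per AOI, then the thumbnail-to-title ratio.
from collections import defaultdict

def dwell_times(fixations):
    """fixations: iterable of (aoi_label, duration_ms) pairs."""
    totals = defaultdict(float)
    for aoi, dur in fixations:
        totals[aoi] += dur
    return dict(totals)

d = dwell_times([("thumbnail", 420), ("title", 360), ("thumbnail", 300)])
print(d["thumbnail"] / d["title"])  # 2.0, i.e. twice the attention on thumbnails
```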
{"title":"How many words is a picture worth?: attention allocation on thumbnails versus title text regions","authors":"Chaitra Yangandul, S. Paryani, Madison Le, Eakta Jain","doi":"10.1145/3204493.3204571","DOIUrl":"https://doi.org/10.1145/3204493.3204571","url":null,"abstract":"Cognitive scientists and psychologists have long noted the \"picture superiority effect\", that is, pictorial content is more likely to be remembered and more likely to lead to an increased understanding of the material. We investigated the relative importance of pictorial regions versus textual regions on a website where pictures and text co-occur in a very structured manner: video content sharing websites. We tracked participants' eye movements as they performed a casual browsing task, that is, selecting a video to watch. We found that participants allocated almost twice as much attention to thumbnails as to title text regions. They also tended to look at the thumbnail images before the title text, as predicted by the picture superiority effect. These results have implications for both user experience designers as well as video content creators.","PeriodicalId":237808,"journal":{"name":"Proceedings of the 2018 ACM Symposium on Eye Tracking Research & Applications","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125311147","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}