Using Task Switching to Explain Effects of Non-Driving Related Activities on Takeover and Manual Driving Behavior Following Level 3 Automated Driving
Elisabeth Shi, K. Bengler
DOI: 10.1145/3544999.3552314
Effects of non-driving related activities performed during Level 3 automated driving phases on subsequent takeover behavior have been investigated in multiple studies. Studies that refer to a theoretical basis usually cite the task switching paradigm, while at the same time applying multiple-task performance theories to explain the effects of previously performed non-driving related activities on subsequent takeover behavior. In this article, we apply task switching theory to explain and predict the effects of non-driving related activities on takeover and subsequent manual driving behavior. Additionally, we report experimental work in progress that investigates this theoretical basis in a real driving setting on a test track, using a Wizard-of-Oz vehicle to simulate Level 3 driving automation in highway traffic jams. We aim to contribute to approaches for differentiating the effects of non-driving related activities on takeover and subsequent manual driving behavior. Furthermore, this study can provide insights into user behavior in real driving situations.

Let's Negotiate with Automation: How can Humans and HMIs Negotiate Disagreement on Automated Vehicles?
Soyeon Kim, E. V. Grondelle, Ilse M. Van Zeumeren, Alexander G. Mirnig, Kristina Stojmenova
DOI: 10.1145/3544999.3550159
In automated vehicles, driving decisions are shared between the driver and the vehicle. However, there is no guarantee that drivers will always agree with or follow the system's decisions. Drivers can reject the system's proposal or regain control, which reduces the usefulness of automated vehicles. When a decision conflict occurs, the vehicle can negotiate with the driver. Human-human communication depends on the individual's attitude and the situation; similarly, the negotiation style needs to differ depending on the context of the conflict and the cause of the disagreement. In this workshop, we address the negotiation approach to HMI design and discuss considerations for applying human-human negotiation styles to human-automated vehicle interaction design. HMI design based on a negotiation approach can address decision conflicts between humans and automation and is expected to enhance trust and acceptance.

Authority vs. Responsibility: Workshop on Revisiting Socio-Technical System Approaches to Design for Convenient Forms of Smart Mobility
R. Bernhaupt, Bastian Pfleging, Alexander Meschtscherjakov, Debargha Dey, Melanie Berger
DOI: 10.1145/3544999.3551346
The focus of future mobility is slowly shifting from individual car-based mobility to (shared, automated, and electric) mobility as a service (MaaS). While such novel forms of transportation promise benefits related to environmental impact and economic viability, practical aspects could negatively affect users' convenience and comfort. With this workshop, we aim to start a discussion on these challenges of future mobility and initiate an exchange among participants from different disciplines, cultures, and backgrounds to find solutions that support the user experience of smart mobility.

Workshop on Automotive Mixed Reality Applications: Transitional Interfaces, Multi-User VR, and Helmet-Mounted AR for Cyclists
Andreas Riegler, A. Riener, Philipp Wintersberger, Tamara von Sawitzky, Ye Eun Song
DOI: 10.1145/3544999.3550158
With the increasing development of mixed reality (MR) technology and available devices, the range of its purposes and applications in vehicles and for road users is growing. Mixed reality may help increase road safety, allow drivers to perform non-driving related tasks (NDRTs), and enhance passenger experiences. Additionally, helmet-mounted displays (HMDs) for cyclists could leverage MR to augment cyclists' vision and contribute to road and pedestrian safety. MR can also be helpful in the transition toward automated driving. However, there are still a number of challenges with the use of MR in vehicles, and several human factors issues remain to be solved. Additionally, virtual reality (VR) has the potential to immerse passengers in virtual worlds and contribute to joyful passenger experiences. Current MR research usually focuses on one user at a time and on one point of the reality-virtuality continuum. In this workshop, we will discuss the potentials and constraints as well as the impact, role, and adequacy of MR in driving applications and simulations, including multi-user MR experiments, transitional interfaces, and HMDs for cyclists. The primary goal of this workshop is to set a research agenda for the use of MR in intelligent vehicles and for cyclists within the next 3 to 5 years and beyond.

Emotion GaRage Vol. III: A Workshop on Affective In-Vehicle Display Applications
Chihab Nadri, Jiayuan Dong, Jingyi Li, Ignacio J. Alvarez, M. Jeon
DOI: 10.1145/3544999.3550161
Empathic in-vehicle interfaces can address driver affect and mitigate the decreases in driving performance and behavior that are associated with emotional states. Empathic vehicles can detect user affect and employ a variety of intervention modalities to change it and improve user experience. Challenges remain in implementing such strategies, as a broader established view of practical intervention modalities and strategies is still absent. We therefore propose a workshop that brings together researchers and practitioners interested in affective interfaces and in-vehicle technologies as a forum for developing displays and alternatives suitable for various use case situations in current and future vehicle states. During the workshop, we will focus on a common set of use cases and generate approaches that can suit different user groups. By the end of the workshop, participants will create a design flowchart to guide in-vehicle affective display designers when creating displays for an empathic vehicle.

Increasing Driving Safety and In-Vehicle Gesture-Based Menu Navigation Accuracy with a Heads-up Display
Yu Cao, Lingyu Li, Jiehao Yuan, M. Jeon
DOI: 10.1145/3544999.3551502
More and more novel functions are being integrated into vehicle infotainment systems to allow individuals to perform secondary tasks with high accuracy and low accident risk. Mid-air gesture interaction is one of them. This paper presents novel designs to address a specific issue with this interaction method: visual distraction within the car. In this study, a head-up display (HUD) will be integrated with a gesture-based menu navigation system to allow drivers to see menu selections without looking away from the road. An experiment with 24 participants will be conducted to investigate the potential of this system to improve drivers' overall safety and gesture interaction accuracy. Participants will provide objective performance data as well as subjective feedback on directions for future research and improvements to the overall experience.

Simulator-Based Study of the Response Time and Defensive Behavior of Drivers in Unexpected Dangers at an Intersection
Myeongkyu Lee, Sehan Kim, Daehyun Jung, Hyo-Geun Lee, Jihun Choi, Hyunseo Han, J. H. Yang
DOI: 10.1145/3544999.3552322
Intersection traffic accidents account for a substantial percentage of all traffic accidents and may lead to a high rate of fatalities. In this study, we aim to determine how drivers respond when faced with unexpected dangers by collecting data on drivers' behavioral characteristics. We explored how drivers respond to intersection dangers in a simulated environment. Data from 155 participants, including reaction times, vehicle data, the occurrence of accidents, and accident-avoidance maneuvers, were analyzed. The results revealed significant differences in steering angle and speed between men and women, as well as in brake reaction time between drivers in their 20s and 40s. Significant differences were also observed in perception time and brake reaction time depending on whether an accident occurred. The findings of this study are expected to be used to predict reaction times and vehicle data and to determine the causes of traffic accidents.

A Gaze- vs. Joystick-Based Interaction Method for a Remote Reconnaissance Task
Joscha Wasser, M. Baltzer, F. Flemisch
DOI: 10.1145/3544999.3554784
Increasing automation and artificial intelligence in every domain requires resolving complex human-machine interactions. In the context of remote reconnaissance, an operator is likely to be supported by computer vision technology while still making the final assessment, with the speed and precision of the classification being key. Within a vast design space with multiple options, different interaction concepts were explored. One concept is based on gaze, tracked using eye tracking technology: the system indicates whether the operator's gaze rests on a software-identified object and allows interaction through a secondary input device. To evaluate the concept, a simulator experiment was designed that compares it to joystick input. Expert users will complete two equivalent scenarios using both interaction methods, while metrics such as task success are recorded and interviews are conducted, providing a balanced analysis of qualitative and quantitative, as well as subjective and objective, data.

Toward a High-Level Integrative Comfort Model in Autonomous Driving
Veronika Domova, Rebecca M. Currano, D. Sirkin
DOI: 10.1145/3544999.3555725
Although the literature on comfort in autonomous driving research is extensive, no integrative model has been proposed that includes the major comfort-influencing factors and their interrelations. In this work, we conduct a literature review and elicit automation-related factors that directly influence a driver's comfort. The six groups of factors comprise the driving environment, the vehicle and its automation, and the user's activity, personality, and understanding of the system. We then structure these into a model comprising environment-, vehicle-, and user-related categories. The resulting framework supports framing research questions about comfort in automation.

Understanding Driver's Situation Awareness in Highly Automated Driving
Young Woo Kim, Da Yeong Kim, S. Yoon
DOI: 10.1145/3544999.3552321
This study investigated the influence of road complexity and lead time on drivers' situation awareness across three consecutive experiments. Seven participants watched simulated and real-world driving videos and reported their situation awareness using SAGAT, SART, and subjective measures. The first experiment tested the effects of road complexity and lead time on situation awareness; we found that drivers' situation awareness is influenced by road complexity, while lead time had minimal effects. The second and third experiments investigated the effects of traffic density and road type using the simulated and real-world stimuli: traffic density affected drivers' situation awareness, while road type did not, and this effect was reproduced with the real-world stimuli. In addition, we found an interesting trend in which people seem to adjust their desired situation awareness level according to the driving context.
