Pub Date: 2024-06-12 | DOI: 10.1177/15553434241262865
Introduction to the Special Issue on Automation Failure
J. Schraagen, Emilie M. Roth
This brief introduction provides an overview and clustering of the invited commentaries on the Skraaning and Jamieson target article, along with implications for future research on automation failure.
Pub Date: 2024-06-01 | DOI: 10.1177/15553434241235770
Augmenting Human Cognition With a Digital Submarine Periscope
Stephanie Michailovs, J. Irons, Zach Howard, S. Pond, Megan Schmitt, Matt Stoker, Troy A W Visser, Jason Bell, Gavin Pinniger, Madison J. Fitzgerald, S. Huf, Owen B J Carter, S. Loft
Advances in opto-electronics enable the replacement of conventional submarine periscopes, which display only a portion of the horizon (low field of view), with digital periscopes that can potentially display a 360° panoramic representation of the horizon (high field of view). Digital periscopes can also provide digitized analysis tools for estimating vessel (contact) range and course. The current research compared a digital representation of a conventional periscope view with an alternative digital periscope prototype that displayed a larger (360°) field of view and provided digital analysis tools, assessing their impact on performance, perceived workload, and system usability. Two experiments were conducted in a simulated submarine control room environment. In Experiment 1, the high field of view periscope yielded faster contact detection times, with no cost to the perceived workload or usability associated with the contact detection task. In Experiment 2, the digitized analysis tools supported more accurate contact course and range estimates and lowered perceived workload, with no impact on perceived usability. These outcomes indicate that digitally augmenting the periscope is a technological innovation that can facilitate submariner tasks, and they highlight the benefits of applying knowledge from perceptual and cognitive science to the design of future digital periscope prototypes.
Pub Date: 2024-06-01 | DOI: 10.1177/15553434241234105
Get on the Round Dial: Fighter Pilot Strategies for Recovering Situation Awareness After Disorienting Physiological Events
L. Militello, Katie Ernst, A. R. Panganiban, Eli Wagner, Michael T. Tolston, Gregory J. Funke
Despite decades of research and training, adverse physiological events continue to present a significant risk to pilots. This paper describes a naturalistic study of fighter pilot experiences with disorienting physiological events such as gravity-induced loss of consciousness (G-LOC), hypoxia, and spatial disorientation. We conducted a cognitive task analysis with 10 experienced fighter pilots to elicit incidents in which the pilot or another pilot in the flight experienced a physiological event, exploring how they recognized something was wrong, assessed the level of impairment, and regained situation awareness. We describe cognitive requirements, specific cues and strategies, and the complexities associated with managing these disorienting physiological events. Implications for decision support design and training are discussed.
Pub Date: 2024-05-07 | DOI: 10.1177/15553434241251431
Distinguishing Urgent From Non-urgent Communications: A Mixed Methods Study of Communication Technology Use in Perinatal Care
Laura Wilson, Alison M. Stuebe, M. Pearsall, Megan Mansour, K. P. Tully, Jennifer H. Garvin, Kevin Jones, Emily S. Patterson
Distinguishing urgent from non-urgent communication is critical. We aimed to understand how hospital staff choose which communication technologies to use. A mixed methods study was conducted, comprising four focus groups with staff working on postpartum care units and an analysis of log data from a hands-free communication system. We found that urgent communications were appropriate for the hands-free device, with less urgent communications sent through secure chat in the electronic health record. Exceptions included when the intended recipient was in the operating room and during sensitive discussions. The most common call duration on the hands-free device was 16–60 seconds, and few calls lasted more than 5 minutes. The most frequent reason for incomplete calls was the user not being logged in (36%), which could be reduced through training or by eliminating the log-in step. We recommend an interruptive communication technology for urgent information and an electronic health record chat for less urgent information that does not require an immediate response. We also recommend forwarding calls from the hands-free communication system to a hospital-provided cellphone for provider roles whose workflow does not align with the intended hands-free use.
Pub Date: 2024-04-23 | DOI: 10.1177/15553434241240849
Wrong, Strong, and Silent: What Happens when Automated Systems With High Autonomy and High Authority Misbehave?
Sidney W. A. Dekker, David D. Woods
Warnings about the risks of literal-minded automation—a system that can’t tell if its model of the world is the world it is actually in—have been sounded for over 70 years. The risk is that a system will do the “right” thing—its actions are appropriate given its model of the world, but it is actually in a different world—producing unexpected or unintended behavior and potentially harmful effects. This risk—wrong, strong, and silent automation—looms larger today as our ability to deploy increasingly autonomous systems and delegate greater authority to such systems expands. It already produces incidents, outages of valued services, financial losses, and fatal accidents across different settings. This paper explores this general and out-of-control risk by examining a pair of fatal aviation accidents which revolved around wrong, strong, and silent automation.
Pub Date: 2024-04-22 | DOI: 10.1177/15553434241240203
Limits of Automata—Then and Now: Challenges of Architecture, Brittleness, and Scale
David D. Woods
Two trajectories underway transform human systems. Processes of growth/complexification have accelerated as stakeholders seek advantage from advances in connectivity/autonomy/sensing. Surprising empirical patterns also arise—puzzling collapses of critical valued services occur against a background of growth. In parallel, new scientific foundations have arisen from diverse directions that explain the observed anomalies and breakdowns, highlighting basic weaknesses of automata regardless of technology. Conceptual growth provides laws, theorems, and comprehensive theories that encompass the interplay of autonomy/people and complexity/adaptation across scales. One danger for synchronizing the trajectories is conceptual lag, as researchers remain stuck in stale frames, unable to keep pace with transformative change. Any approach that does not either build on the new conceptual advances—or provide alternative foundations—is no longer credible to match the scale and stakes of modern distributed layered systems and overcome the limits of automata. The paper examines longstanding challenges by contrasting progress then, as the trajectories gathered steam, with the situation now, as change has accelerated.
Pub Date: 2024-04-05 | DOI: 10.1177/15553434241231056
A Regulatory Perspective: Have We Done Enough on Grasping Automation Failure?
Jing Xing, Niav Hughes Green
This paper responds to Skraaning and Jamieson’s target paper “The Failure to Grasp Automation Failure.” We acknowledge that the target paper made important contributions to automation research in the human factors community. It analyzed automation failure events in complex operational systems, in contrast to the vast majority of laboratory research on human-automation interaction, and presented a taxonomy of automation failure. The analysis and taxonomy demonstrate the integration of approaches to grasping automation failures from system instrumentation and controls, human factors engineering, and human reliability analysis. We reviewed the regulatory framework related to the use of automation in nuclear power plants and, using the taxonomy in the target paper, examined whether the framework elements adequately address the failure to grasp automation failure. Overall, we believe that the target paper could enhance the consideration of potential automation failures in the design and regulatory review process of automation technologies.
Pub Date: 2024-04-05 | DOI: 10.1177/15553434241234108
Accelerating Understanding of Human Response to Automation Failure
S. Loft
Firstly, I comment on the lack of support for the predictions of the lumberjack model among professionally qualified operators in high-fidelity work simulations (Jamieson & Skraaning, 2020a). I highlight the advantages that Bayesian statistics provide for quantifying the degree of evidence for null hypotheses, issues concerning situation awareness measurement, and the alternative techniques available for studying experts. Secondly, I comment on the innovative taxonomy of automation failure presented by Skraaning and Jamieson (2024), pointing out some issues with overlapping definitions and a lack of cause-effect relationships. I then discuss the substantial opportunity this taxonomy presents to guide future research, such as the design of transparent automation. To conclude, I identify some other key problems with how we currently study human-automation teaming (e.g., presenting randomized automation failures unlinked to task context) and invite discussion from the research community on the relevance of computational modelling to this field of research.
Pub Date: 2024-04-01 | DOI: 10.1177/15553434241236536
Things Go Wrong and the Captain Has to Handle it
Amy R. Pritchett
How should automation failures be characterized? From the point of view of complex, multi-agent operations, I argue that their definition and modeling are most useful when they account for the important drivers of operational performance and safety. This calls us beyond a focus on one human and one system performing one task, toward a team simultaneously executing many activities in which things are always failing. Much of this activity is defined by the dynamics of the work environment, which can be modeled and predicted. Further, having a human legally responsible for the outcome of the automation’s actions significantly colors the dynamic.
Pub Date: 2024-03-30 | DOI: 10.1177/15553434241227534
Failures in Driving Automation Systems: Definitions, Taxonomy, and Prevention Mechanisms
Shannon C. Roberts
Skraaning and Jamieson’s (2023) article defines automation failures, provides examples, and offers a taxonomy for them. They also invite others to apply their concepts to other domains. Driving automation systems, along with their myriad failures, provide the perfect test case. This article defines and characterizes driving automation system failures and discusses mechanisms for preventing them. By combining Skraaning and Jamieson’s (2023) original taxonomy with characterizations and prevention mechanisms for driving automation system failures, their work is extended and substantiated.