"Are you comfortable doing that?: Acceptance studies of around-device gestures in and for public settings"
David Ahlström, Khalad Hasan, Pourang Irani. MobileHCI 2014, pp. 193–202. DOI: 10.1145/2628363.2628381

Several research groups have demonstrated the advantages of extending a mobile device's input vocabulary with in-air gestures. Such gestures show promise but have yet to be integrated into commercial devices. One reason may be uncertainty about users' perceptions of the social acceptance of such around-device gestures. In three studies conducted in public settings, we explore users' and spectators' attitudes toward using around-device gestures in public. The results show that people are concerned about others' reactions, and that they are sensitive and selective regarding where, and in front of whom, they would feel comfortable using around-device gestures. However, acceptance and comfort are strongly linked to gesture characteristics such as gesture size, duration, and in-air position. Based on our findings, we present recommendations for around-device input designers and suggest new approaches for evaluating the social acceptability of novel input methods.
"JuxtaPinch: Exploring multi-device interaction in collocated photo sharing"
H. S. Nielsen, M. Olsen, M. Skov, J. Kjeldskov. MobileHCI 2014, pp. 183–192. DOI: 10.1145/2628363.2628369

Mobile HCI research has started to investigate multi-device interaction, as we often have several devices at our immediate disposal. We present JuxtaPinch, an application that allows collocated users to share photos across several different devices (mobile phones and tablets) at the same time. JuxtaPinch uses pinching to connect devices, enables flexible physical positioning of the devices, and supports partial viewing of photos. Our evaluation showed that JuxtaPinch enabled participants to experience their own familiar photos in new ways, an effect known as defamiliarization. It further enabled participants to engage jointly in playful interaction with the photos and with each other. However, we also found that collocated photo sharing across multiple devices raises challenges of synchronization and coordination.
"Speech-based interaction: Myths, challenges, and opportunities"
Cosmin Munteanu, Gerald Penn. MobileHCI 2014, pp. 567–568. DOI: 10.1145/2628363.2645671

Human-Computer Interaction (HCI) research has long been dedicated to facilitating information transfer between humans and machines in better and more natural ways. Unfortunately, humans' most natural form of communication, speech, is also one of the most difficult modalities for machines to understand, largely because speech is the highest-bandwidth communication channel we possess. Significant research effort, from engineering to linguistics and the cognitive sciences, has therefore been spent over the past several decades on improving machines' ability to understand speech. Yet the MobileHCI community (and HCI in general) has been relatively timid in embracing this modality as a central focus of research. This can be attributed in part to the relatively discouraging levels of accuracy in understanding speech, in contrast with often-unfounded claims of success from industry, but also to the intrinsic difficulty of designing, and especially evaluating, speech and natural language interfaces.

The goal of this course is to inform the MobileHCI community of the current state of speech and natural language research, to dispel some of the myths surrounding speech-based interaction, and to provide an opportunity for researchers and practitioners to learn how speech recognition and speech synthesis work, what their limitations are, and how they can be used to enhance current interaction paradigms. Through this, we hope that MobileHCI researchers and practitioners will learn how to combine recent advances in speech processing with user-centred principles to design more usable and useful speech-based interactive systems.
"Exploring smartphone-based interaction with overview+detail interfaces on 3D public displays"
Louis-Pierre Bergé, M. Serrano, G. Perelman, E. Dubois. MobileHCI 2014, pp. 125–134. DOI: 10.1145/2628363.2628374

As public displays integrate 3D content, Overview+Detail (O+D) interfaces on mobile devices will allow personal 3D exploration of the public display. In this paper we study the properties of mobile-based interaction with O+D interfaces on 3D public displays. We evaluate three types of existing interaction techniques for 3D translation of the Detail view: touchscreen input, mid-air movement of the mobile device (Mid-Air Phone), and mid-air movement of the hand around the device (Mid-Air Hand). In a first experiment, we compare the performance and user preference of these three types of techniques after participants received prior training. In a second experiment, we study how well the two mid-air techniques perform without prior training or human help, imitating typical conditions of use in a public context. Results reveal that Mid-Air Phone and Mid-Air Hand perform best with training; however, without training or human help, Mid-Air Phone is more intuitive and performs better on the first trial. Interestingly, in both experiments users preferred Mid-Air Hand. We conclude with a discussion on the use of mobile devices to interact with public O+D interfaces.
"The realization of new virtual forest experience environment through PDA"
Kana Muramatsu, H. Kobayashi, Junya Okuno, Akio Fujiwara, Kazuhiko W. Nakamura, Kaoru Saito. MobileHCI 2014, pp. 421–424. DOI: 10.1145/2628363.2633570

Currently, human-computer interaction (HCI) research focuses primarily on human-centric interactions; however, people experience many nonhuman-centric interactions during the course of a day. Interactions with nature, such as walking through a forest or an unexpected encounter with wildlife, can imprint the beauty of nature in our memories. In this context, the present paper describes an experimental nonhuman-centric interface for PDA applications that supports an imaginable interaction with nature. A virtual forest experience environment on a PDA is made more realistic through two subsystems: a "Panorama-viewer of forest" and "Remote animal sensing". The former is an application with which users can look out over the forest landscape in all directions using the PDA's built-in gyro sensor. The latter is a virtual system that allows users living in remote urban areas to interact with wild deer in a forest in real time. This design means that users can have a forest experience through the PDA in their hands.
"An in-situ study of mobile phone notifications"
M. Pielot, K. Church, Rodrigo de Oliveira. MobileHCI 2014, pp. 233–242. DOI: 10.1145/2628363.2628364

Notifications on mobile phones alert users to new messages, emails, social network updates, and other events. However, little is understood about the nature and effect of such notifications on the daily lives of mobile users. We report on a one-week, in-situ study involving 15 mobile phone users, in which we collected real-world notifications through a smartphone logging application alongside subjective perceptions of those notifications through an online diary. We found that our participants had to deal with an average of 63.5 notifications per day, mostly from messengers and email. Whether or not the phone was in silent mode, notifications were typically viewed within minutes; social pressure in personal communication was among the main reasons given. While an increasing number of notifications was associated with an increase in negative emotions, receiving more messages and social network updates also made our participants feel more connected with others. Our findings imply that avoiding interruptions from notifications may be viable for professional communication, whereas in personal communication, approaches should focus on managing expectations.
"Interaction for reading comprehension on mobile devices"
Rafael Veras, Erik Paluka, Meng-Wei Chang, Vivian Tsang, F. Shein, C. Collins. MobileHCI 2014, pp. 157–161. DOI: 10.1145/2628363.2628387

This paper introduces a touch-based reading interface for tablets designed to support vocabulary acquisition and text comprehension and to reduce reading anxiety. Touch interaction is leveraged to allow direct replacement of words with synonyms, easy access to word definitions, and a seamless dialogue with a personalized model of the reader's vocabulary. We discuss how fluid interaction and direct manipulation, coupled with natural language processing, can help address the reading needs of audiences such as school-age children and English as a Second Language learners.
"Comparing evaluation methods for encumbrance and walking on interaction with touchscreen mobile devices"
Alexander Ng, John Williamson, S. Brewster. MobileHCI 2014, pp. 23–32. DOI: 10.1145/2628363.2628382

In this paper, we compare two walking evaluation methods for assessing the effects of encumbrance while the preferred walking speed (PWS) is controlled. Users frequently carry cumbersome objects (e.g., shopping bags) while using mobile devices at the same time, which can cause interaction difficulties and erroneous input. The two methods used to control PWS were walking on a treadmill and walking around a predefined route on the ground while following a pacesetter. The results from our target acquisition experiment showed that, for ground walking at 100% of PWS, accuracy dropped to 36% when carrying a bag in the dominant hand and to 34% when holding a box under the dominant arm. We also discuss the advantages and limitations of each evaluation method for examining encumbrance, and suggest that treadmill walking is not the most suitable approach when walking speed is an important factor in future mobile studies.
"Mobile experience lab: Body editing"
Stephen Surlin, Paula Gardner. MobileHCI 2014, pp. 439–442. DOI: 10.1145/2628363.2633575

Body Editing is an interactive installation that combines depth sensing (a Kinect 3D camera), biometric sensors, musical performance, and abstract drawing software to create a mobile, wireless interface that sonically and graphically represents the user's motion in space. The wireless nature of this gesture-controlled interface is an explicit attempt to create embodied experiences that encourage users to be more aware of their body through movement and audio/visual feedback, and less focused on technological augmentation.
"CoSMiC: Designing a mobile crowd-sourced collaborative application to find a missing child in situ"
Hyojeong Shin, Taiwoo Park, Seungwoo Kang, Bupjae Lee, Junehwa Song, Yohan Chon, H. Cha. MobileHCI 2014, pp. 389–398. DOI: 10.1145/2628363.2628400

Finding a missing child is an important problem that concerns not only parents but society as a whole. It is essential and natural to use serendipitous clues from people nearby when searching for a missing child. In this paper, we explore a new architecture of crowd collaboration to expedite this mission-critical process and propose CoSMiC, a crowd-sourced cooperative mobile application. It helps parents find their missing child quickly, on the spot, before he or she completely disappears. The key idea lies in constructing the child's location history via crowd participation, thereby leading parents to their child easily and quickly. We implemented a prototype application and conducted extensive user studies to assess the design of the application and investigate its potential for practical use.