Pub Date: 2017-04-01, DOI: 10.4018/IJMHCI.2017040102
D. Large, G. Burnett, A. Bolton
The use of landmarks during the provision of directions can greatly improve drivers' route-following performance. However, the successful integration of landmarks within in-vehicle navigation systems is predicated on the acquisition and deployment of good-quality landmarks, as defined by their visibility, uniqueness, permanence, location, etc., and on their accurate and succinct depiction on in-vehicle displays and in accompanying verbal messages. Notwithstanding the inherent variability in the quality and prevalence of landmarks within the driving environment, attending to in-vehicle displays and verbal messages while driving can distract drivers and heighten their visual and cognitive workload. Furthermore, vocal utterances are transient and can be littered with paralinguistic cues that can influence a driver's interpretation of what is said. In this paper, a driving simulator study is described that investigates the augmentation of landmarks during the head-up provision of route guidance advice, a potential solution to some of these problems. Twenty participants undertook four drives utilising a navigation system presented on a head-up display (HUD), in which navigational instructions were presented as either conventional distance-to-turn information, on-road arrows, or augmented landmarks (either an arrow pointing to the landmark or a box enclosing the landmark adjacent to the required turning). Participants demonstrated significant performance improvements while using the augmented-landmark 'box' compared with the conventional distance-to-turn information, with response times and success rates enhanced by 43.1% and 26.2%, respectively. Moreover, there were significant reductions in eyes-off-the-road time when using this approach, and it also attracted the lowest subjective ratings of workload. The authors conclude that there are significant benefits to augmenting landmarks during the head-up provision of in-car navigation advice.
Title: Augmenting Landmarks During the Head-Up Provision of In-Vehicle Navigation Advice. Journal: International Journal of Mobile Human Computer Interaction.
Pub Date: 2017-01-01, DOI: 10.4018/IJMHCI.2017010101
Xu Sun, A. May, Qingfeng Wang
This article describes a field study investigating the impact of personalised content, provided on a mobile device, on user experience. The target population was Chinese spectators, and the application domain was large sports events. A field-based experiment showed that the provision of personalised content significantly enhanced the spectator's user experience. Design implications are discussed, with general support for countermeasures designed to overcome recognised limitations of adaptive systems. The study also highlights the need for culturally sensitive methods for requirements capture, design, and data collection during experimentation.
Title: Investigation of the Role of Mobile Personalisation at Large Sports Events
Pub Date: 2017-01-01, DOI: 10.4018/IJMHCI.2017010103
S. Bhattacharya
Emotion, being an important human factor, should be considered when improving the user experience of interactive systems. To do so, the user's emotional state must first be recognised. In this work, the author proposes a model to predict the affective state of a touch-screen user. The prediction is based on the user's finger strokes, from which the author defines seven features. The proposed predictor is a linear combination of these features, obtained using a linear regression approach. The predictor assumes three affective states in which a user can be: positive, negative, and neutral. The existing works on affective touch interaction are few and rely on many features, some of which require special sensors that may not be present in many devices. The seven proposed features require no special sensor for their computation, so the predictor can be implemented on any device. The model is developed and validated with empirical data from 57 participants performing 7 touch-input tasks. The validation study demonstrates a high prediction accuracy of 90.47%.
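The scheme described in this abstract, a linear combination of stroke-derived features fitted by linear regression and mapped to three affective states, can be sketched as follows. The feature semantics, the synthetic data, the fitted coefficients, and the class thresholds below are illustrative assumptions only; the paper's actual features and model parameters are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data: 100 samples x 7 stroke features (e.g. stroke
# speed, length, duration, curvature, inter-stroke gap, pressure proxy,
# direction changes -- hypothetical names, not the paper's feature set).
X = rng.normal(size=(100, 7))
true_w = np.array([0.5, -0.3, 0.8, 0.1, -0.6, 0.2, 0.4])
y = X @ true_w + rng.normal(scale=0.1, size=100)  # continuous affect score

# Fit the linear predictor (seven weights plus an intercept) by
# ordinary least squares.
A = np.hstack([X, np.ones((100, 1))])
w, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict_state(features, lo=-0.5, hi=0.5):
    """Map the linear score onto three affective states.

    The thresholds lo/hi are arbitrary placeholders; a real system
    would calibrate them on labelled data.
    """
    score = features @ w[:-1] + w[-1]
    if score < lo:
        return "negative"
    if score > hi:
        return "positive"
    return "neutral"

# Example: classify a new stroke-feature vector.
state = predict_state(rng.normal(size=7))
```

Because the predictor is just a dot product over features computable from raw touch events, it needs no special sensors, which is the portability argument the abstract makes.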
Title: A Predictive Linear Regression Model for Affective State Detection of Mobile Touch Screen Users
Pub Date: 2017-01-01, DOI: 10.4018/IJMHCI.2017010102
Wing Ho Andy Li, Kening Zhu, Hongbo Fu
The bezel enables useful gestures that supplement primary surface gestures for mobile interaction. However, existing work has mainly focused on researcher-designed gestures, which utilise only a subset of the design space. To explore this design space, the authors present a modified elicitation study in which participants designed bezel-initiated gestures for four sets of tasks. Unlike traditional elicitation studies, theirs encourages participants to design new gestures. The authors do not focus on individual tasks or gestures but perform a detailed analysis of the collected gestures as a whole, providing findings that could benefit designers of bezel-initiated gestures.
Title: Exploring the Design Space of Bezel-Initiated Gestures for Mobile Interaction
Pub Date: 2017-01-01, DOI: 10.4018/IJMHCI.2017010104
F. Gao, P. Rau, Yubo Zhang
The rapid deployment of mobile devices and the development of mobile services and applications require mobile information security to be addressed from the human side. This study aimed to identify factors influencing people's perception of mobile information security, to investigate the impact of these factors, and to inform related service design. A survey was conducted and analyzed with exploratory factor analysis. Five factors were identified: perceived familiarity, perceived impact, perceived controllability, perceived awareness, and perceived possibility. Among these, the effects of controllability, impact, and familiarity on the adoption of mobile payment were investigated. Impact significantly affected the intention to use, but not the perceived security of payment systems. Control level significantly affected both the intention to use and the perceived security. Familiarity was found to affect neither the intention to use nor the perceived security. Related design implications for mobile payment systems are discussed.
Title: Perceived Mobile Information Security and Adoption of Mobile Payment Services in China
Pub Date: 2016-10-01, DOI: 10.4018/IJMHCI.2016100102
Katrin Wolf, Markus Funk, Pascal Knierim, Markus Löchtefeld
Projectors are shrinking in size and are already embedded in some mobile devices; with this miniaturization of projection technology, truly mobile projected displays have become possible. In this paper, the authors present a survey of the current state of the art on such displays. They give a holistic overview of the current literature and categorize mobile projected displays based on mobility and the different possible interaction techniques. The paper aims to help fellow researchers identify areas for future work.
Title: Survey of Interactive Displays through Mobile Projections
Pub Date: 2016-10-01, DOI: 10.4018/IJMHCI.2016100101
Christian Sailer, P. Kiefer, Joram Schito, M. Raubal
Location-based mobile learning (LBML) is a type of mobile learning in which the learning content is related to the location of the learner. The evaluation of LBML concepts and technologies is typically performed using methods known from classical usability engineering, such as questionnaires or interviews. In this paper, the authors argue for applying visual analytics to spatial and spatio-temporal visualizations of learners' trajectories when evaluating LBML. Visual analytics supports the detection and interpretation of spatio-temporal patterns and irregularities in both single learners' and multiple learners' trajectories, thus revealing learners' typical behavior patterns and potential problems with the LBML software, the hardware, the didactical concept, or the spatial and temporal embedding of the content.
Title: Map-based Visual Analytics of Moving Learners
Pub Date: 2016-10-01, DOI: 10.4018/IJMHCI.2016100104
Héctor A. Caltenco, Charlotte Magnusson, B. Rydeman, S. Finocchietti, G. Cappagli, E. Cocchi, L. B. Porquis, G. Baud-Bovy, M. Gori
This paper presents the process and results of a set of studies within the ABBI EU project, whose general aim is to co-design wearable technology (an audio bracelet) together with visually impaired children, starting at a young age. The authors discuss user preferences related to sounds and tactile materials and present the results of a focus group with very young visually impaired children (under the age of 5) together with their parents. They find that multisensory feedback (visual, tactile/haptic, auditory) is useful and that preferences vary; even drastic and potentially unpleasant sounds and materials may have a role. Further studies investigate the possibilities of using the ABBI wearable technology for social contexts and games. In a series of game workshops, children with and without visual impairments created games with wearable technology employing very simple interactivity. The authors report on the created games and note that even with this simple interactivity it is possible to create fun, inclusive, and rich socially co-located games.
Title: Co-Designing Wearable Technology Together with Visually Impaired Children
Pub Date: 2016-10-01, DOI: 10.4018/IJMHCI.2016100103
Lone Malmborg, E. Grönvall, Jörn Messeter, Thomas Raben, Katharina Werner
This paper disseminates work from the European GiveT project, including: (2) understanding how ad hoc or loosely coupled infrastructures can define a community, rather than a formal organisational structure; and (3) understanding the nature of mobilization and motivation for participation as processes that continue, and need to be supported, also after completion of the project. These strategies have emerged in the authors' work on mobilization and service sharing, but may apply to a broader context of infrastructuring and ongoing negotiations.
Title: Mobilizing Senior Citizens in Co-Design of Mobile Technology
Pub Date: 2016-10-01, DOI: 10.4018/IJMHCI.2016100106
Stefan Schneegass, Thomas Olsson, Sven Mayer, Kristof Van Laerhoven
Wearable computing has huge potential to shape the way we interact with mobile devices in the future. Interaction with mobile devices is still mainly limited to visual output and tactile finger-based input, and despite visions of next-generation mobile interaction, the hand-held form factor hinders new interaction techniques from becoming commonplace. In contrast, wearable devices and sensors are intended for more continuous and close-to-body use. This makes it possible to design novel wearable-augmented mobile interaction methods, both explicit and implicit. For example, the EEG signal from a wearable breast strap could be used to identify the user's state and change the device state accordingly (implicit), and optical tracking with a head-mounted camera could be used to recognize gestural input (explicit). In this paper, the authors outline the design space for how existing and envisioned wearable devices and sensors could augment mobile interaction techniques. Based on designs and discussions in a recently organized workshop on the topic, as well as other related work, the authors present an overview of this design space and highlight some use cases that underline its potential.
Title: Mobile Interactions Augmented by Wearable Computing: A Design Space and Vision