Roles Matter! Understanding Differences in the Privacy Mental Models of Smart Home Visitors and Residents
Karola Marky, Sarah Prange, M. Mühlhäuser, Florian Alt
Proceedings of the 20th International Conference on Mobile and Ubiquitous Multimedia, 2021. DOI: 10.1145/3490632.3490664

In this paper, we contribute an in-depth study of the mental models of the various roles in smart home ecosystems. In particular, we compared the mental models of residents (primary users) and visitors of a smart home regarding data collection in a qualitative study (N=30) to better understand how their specific privacy needs can be addressed. Our results suggest that visitors have a limited understanding of how smart devices collect and store sensitive data about them. Misconceptions in visitors' mental models result in a lack of awareness and ultimately limit their ability to protect their privacy. We discuss the limitations of existing solutions and the challenges of designing future smart home environments that reflect the privacy concerns of users and visitors alike, meant to inform the design of future privacy interfaces for IoT devices.
Augmented museum experience through Tangible Narrative
Luca Ciotoli, Mortaza Alinam, Ilaria Torre
DOI: 10.1145/3490632.3497837

This paper presents an interactive and tangible storytelling installation based on the principle of the ubiquitous museum. The installation is designed to focus children's attention on sailing practices in the medieval age by engaging them in an experience where they interact with several ancient navigation tools equipped with sensors and communication capabilities. This networked infrastructure is combined with a tangible-narrative approach to augment the museum experience. We conducted a pilot study with three groups of participants to investigate interaction with the augmented ancient navigation tools. We used quantitative and qualitative data to obtain feedback on participants' engagement while interacting with diegetic and non-diegetic objects, and on factors that can affect engagement. Although preliminary, the results can help designers of ubiquitous museum installations anticipate possible risks and factors to take into account at design time.
The TikTok Tradeoff: Compelling Algorithmic Content at the Expense of Personal Privacy
D. Klug, Maya De Los Santos
DOI: 10.1145/3490632.3497864

This paper presents the results of an interview study with twelve TikTok users exploring their awareness, perception, and experiences of the app's algorithm in the context of privacy. The social media entertainment app TikTok collects user data to curate individualized video feeds based on users' engagement with presented content, a practice regulated in a complex and overly long privacy policy. Our results demonstrate that participants generally have very little knowledge of the actual privacy regulations, which they justify by the benefit of receiving free entertaining content. However, participants experienced privacy-related downsides when algorithmically curated video content increasingly adapted to their biography, interests, or location, and they in turn realized the detail of personal data that TikTok had access to. This illustrates the tradeoff users have to make between allowing TikTok to access their personal data and having favorable video consumption experiences on the app.
Exploring 3D Landmark-based Map Interface in AR Navigation System for City Exploration
Yiyi Zhang, Tatsuoki Nakajima
DOI: 10.1145/3490632.3497858

Pedestrian navigation nowadays not only provides reliable route planning and efficient wayfinding instructions but also visualizes location-based information to support exploratory activities, for example by visualizing and recommending points of interest (POIs). The development of Augmented Reality (AR) technology opens further possibilities for navigation, especially for visualizing POIs on the map interface. However, the user experience during navigation can easily suffer from information overload and the difficulty of extracting spatial knowledge from POIs. In this work, we explore a 3D landmark-based map interface for AR navigation that supports location awareness and spatial knowledge acquisition. The results of a preliminary user study indicate that participants had positive attitudes toward the 3D landmark-based map. Interviews showed that it helped them recognize their location and learn spatial knowledge about the POIs around them, but it also increased mental load and effort during different phases of exploration, which motivates us to refine our design in future work.
TV-watching Companion Robot Supported by Open-domain Chatbot "KACTUS"
Jianming Wu, Donghuo Zeng, Bo Yang, Gen Hattori, Y. Takishima, Yuta Hagio, Marina Kamimura, Yuta Hoshi, Yutaka Kaneko, Yusei Nishimoto
DOI: 10.1145/3490632.3497865

Watching TV once encouraged generations of families and friends [11] to communicate and share empathy. However, the Internet is changing how we watch TV and reducing interaction, leading to problems such as a lack of self-control and inadequate communication skills [17]. To understand conversations while watching TV, we design a scheme based on human conversational behavior [2] and develop a prototype of a TV-watching companion robot supported by the chatbot "KACTUS" [20]. The robot generates a disclosure utterance (e.g., "I like elephants") from keywords extracted from the TV program in "TV-watching mode", and uses a cross-topic dialogue management method from "KACTUS", together with question utterances, to hold rich conversations in "Conversation mode". The robot switches between these two modes at a preset ratio (TV-watching: 3, Conversation: 1) and behaves like a human enjoying TV. The results of an initial experiment show that three groups of participants enjoyed talking with the robot; the question about their interest in the robot was rated 6.5 on average on a 7-point scale ascending from "extremely disagree" to "extremely agree".
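The preset mode alternation described in this entry (TV-watching: 3, Conversation: 1) could be sketched as a weighted mode selector. The following is a hypothetical illustration, not the authors' implementation; the mode names, keyword list, and utterance templates are assumptions, and the conversation branch merely stubs what the real system delegates to the KACTUS chatbot.

```python
import random

# Preset ratio from the paper: TV-watching is chosen 3x as often
# as open conversation.
MODE_WEIGHTS = {"tv_watching": 3, "conversation": 1}

def pick_mode(rng=random):
    """Pick the next utterance mode according to the 3:1 weighting."""
    modes = list(MODE_WEIGHTS)
    weights = list(MODE_WEIGHTS.values())
    return rng.choices(modes, weights=weights, k=1)[0]

def next_utterance(program_keywords, rng=random):
    """Return (mode, utterance) for the robot's next turn."""
    mode = pick_mode(rng)
    if mode == "tv_watching":
        # Self-disclosure utterance built from a keyword extracted
        # from the TV program (e.g., "I like elephants").
        keyword = rng.choice(program_keywords)
        return mode, f"I like {keyword}s"
    # In conversation mode the real system uses KACTUS's cross-topic
    # dialogue management; a canned question stands in for it here.
    return mode, "What did you think of that scene?"
```

Over many turns the selector approaches the intended 3:1 split, so the robot mostly comments on the program while occasionally opening a conversation.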
Face-the-Waste - Learning about Food Waste through a Serious Game
Dorian Sinclear, Linda Birch Flensborg, Ask Lindblad Fogsgaard, Markus Löchtefeld
DOI: 10.1145/3490632.3505171

Consumer food waste in industrialised countries is a growing concern, as its impact on greenhouse gas emissions is comparable to that of the aviation industry. In recent years, HCI has accordingly seen growing interest in supporting users in adopting more sustainable consumption practices. As part of this movement, we present in this paper a serious game called Face-the-Waste that is meant to increase users' food literacy and educate them about the impact and development of food waste. The game takes the form of a public installation that uses provocative design to engage users: they answer multiple-choice questions, and if they answer wrongly, real food is disposed into a bin before their eyes. The aim is to create a strong emotional response and increase reflection on the topic. In our evaluation, we found that users not only often voiced very strong emotional reactions but also engaged with and discussed the questions and their content. Furthermore, we demonstrated that such provocations can add a new layer to the design of serious games.
Understanding User Requirements for Self-Created IoT Health Assistant Systems
S. Faltaous, Maximilian Altmeyer, Felix Kosmalla, Pascal Lessel, Florian Daiber, Stefan Schneegass
DOI: 10.1145/3490632.3490645

Over recent decades, with the rapid growth of urbanization, changed lifestyles have led to new health-related issues. People have become more sedentary and suffer from fully packed calendars that leave too little time to care for their physical and psychological well-being. While this is a growing challenge, advances in Internet of Things (IoT) devices in the private context offer the chance to develop applications that tackle these issues. While many such applications have been envisioned, users' views and requirements for them remain unclear. To better understand them, we conducted a literature review in which we identified the four most common health-related scenarios in which such technologies are used. Next, we conducted an online survey (N=80) in which participants envisioned how smart home devices could realize these scenarios. Based on the results, we derive design implications showing that users value a high degree of customization and the ability to select a specific automation level through an all-in-one, easy-to-use platform.
Exploring Potential Gestures for Controlling an Eye-Tracker Based System
Ilana Arzis, Moayad Mokatren, Yasmin Felberbaum, T. Kuflik
DOI: 10.1145/3490632.3497836

Body gestures can serve as an intuitive method of interacting with computerized systems. Previous studies explored gesture-based interaction mostly with digital displays, so there is no standard set of gestures for a system that lacks a display. In this work, we conducted a pilot study to explore the potential of using gestures to control an eye-tracker-based mobile museum visitors' guide. Our objective was to identify a user-defined set of gestures for controlling the mobile guide. We present the preliminary results of the experiment and discuss the participants' suggestions and concerns about using this type of interaction.
Have the Same Perspective as Someone Else, so Am I the Person?: The Effect of Perspective on Empathic Orientation in Virtual Reality
Asha Kambe, Tatsuoki Nakajima
DOI: 10.1145/3490632.3497823

The purpose of this study is to clarify which player perspective (first-person vs. third-person) is more effective in promoting other-oriented empathy in persuasive virtual reality (VR) games, and to provide preliminary insights into the design of such games. To test the hypothesis, participants played a persuasive VR game from each perspective, and their empathic orientation was investigated using a questionnaire.
Enabling Multi-Material 3D Printing for Designing and Rapid Prototyping of Deformable and Interactive Wearables
Aluna Everitt, Alexander Keith Eady, A. Girouard
DOI: 10.1145/3490632.3490635

Deformable surfaces with interactive capabilities provide opportunities for new mobile interfaces such as wearables. Yet current fabrication and prototyping techniques for deformable surfaces that are both flexible and stretchable are still limited by complex structural design and mechanical surface rigidity. We propose a simplified rapid fabrication technique that utilizes multi-material 3D printing to develop customizable, stretchable surfaces for mobile wearables, with interactive capabilities embedded during the printing process. Our prototype, FlexiWear, is a dynamic surface with embedded electronic components that can adapt to body shape and movement, with applications in contexts such as healthcare and sports wearables. We describe our design and fabrication approach using a commercial desktop 3D printer, the interaction techniques supported, and possible application scenarios for wearables and deformable mobile interfaces. Our approach aims to support the rapid development and exploration of deformable surfaces that adapt to body shape and movement.