"Reviews of Social Embodiment for Design of Non-Player Characters in Virtual Reality-Based Social Skill Training for Autistic Children" by Jewoong Moon. Multimodal Technologies and Interaction, 2018-09-04. doi:10.3390/MTI2030053

The purpose of this paper is to review scholarly work on social embodiment as it relates to the design of non-player characters (NPCs) in virtual reality (VR)-based social skill training for autistic children. Such training offers a naturalistic environment in which autistic children can shape socially appropriate behaviors for the real world. Building this kind of training environment requires identifying how to simulate its social components; in particular, the design of NPCs largely determines the quality of the simulated social interactions during training. Through this literature review, the study proposes several design themes that underlie the social embodiment within which interactions with NPCs in VR-based social skill training take place.

"Design for an Art Therapy Robot: An Explorative Review of the Theoretical Foundations for Engaging in Emotional and Creative Painting with a Robot" by M. Cooney and M. Menezes. Multimodal Technologies and Interaction, 2018-09-03. doi:10.3390/MTI2030052

Social robots are being designed to help support people’s well-being in domestic and public environments. To address increasing incidences of psychological and emotional difficulties such as loneliness, and a shortage of human healthcare workers, we believe that robots will also play a useful role in engaging with people in therapy on an emotional and creative level, e.g., in music, drama, play, and art therapy. Here, we focus on the latter case: an autonomous robot capable of painting with a person. A challenge is that the theoretical foundations are highly complex; we are only beginning to understand emotions and creativity in human science, and both have been described as highly important challenges for artificial intelligence. To gain insight, we review some of the literature on robots used for therapy and art, potential strategies for interaction, and mechanisms for expressing emotions and creativity. In doing so, we suggest the responsive art approach as a useful starting point for art therapy robots, describe a perceived gap between our understanding of emotions in human science and what is currently addressed in engineering studies, and identify some potential ethical pitfalls along with ways to avoid them. Based on our arguments, we propose a design for an art therapy robot and discuss a simplified prototype implementation, toward informing future work in the area.

"Animals Make Music: A Look at Non-Human Musical Expression" by Reinhard Gupfinger and Martin Kaltenbrunner. Multimodal Technologies and Interaction, 2018-09-02. doi:10.3390/MTI2030051

The use of musical instruments and interfaces that involve animals in the interaction process is an emerging, yet not widespread, practice. The projects that have been implemented in this unusual field are raising questions concerning ethical principles, animal-centered design processes, and the possible benefits and risks for the animals involved. Animal–Computer Interaction is a novel field of research that offers a framework (the ACI manifesto) for implementing interactive technology for animals. Based on this framework, we have examined several projects focusing on the interplay between animals and music technology in order to arrive at a better understanding of animal-based musical projects. Building on this, we discuss how the implementation of new musical instruments and interfaces could provide new opportunities for improving the quality of life of grey parrots living in captivity.

"Deep Learning and Medical Diagnosis: A Review of Literature" by Mihalj Bakator and D. Radosav. Multimodal Technologies and Interaction, 2018-08-17. doi:10.3390/MTI2030047

This review addresses the application of deep learning to medical diagnosis. We conducted a thorough analysis of scientific articles on the application of deep neural networks in the medical field. More than 300 research articles were obtained, and after several selection steps, 46 were examined in detail. The results indicate that convolutional neural networks (CNNs) are the most widely represented architecture in deep learning for medical image analysis. Furthermore, the findings suggest that while the application of deep learning technology is widespread, the majority of applications focus on bioinformatics, medical diagnosis, and similar fields.

"Technology for Remote Health Monitoring in an Older Population: A Role for Mobile Devices" by Kate Dupuis and L. Tsotsos. Multimodal Technologies and Interaction, 2018-07-27. doi:10.3390/MTI2030043

The impact of an aging population on healthcare and the sustainability of our healthcare system are pressing issues in contemporary society. Technology has the potential to address these challenges, alleviating pressures on the healthcare system and empowering individuals to have greater control over monitoring their own health. Importantly, mobile devices such as smartphones and tablets can allow older adults to have “on the go” access to health-related information. This paper explores mobile health apps that enable older adults and those who care for them to track health-related factors such as body readings and medication adherence, and it serves as a review of the literature on the usability and acceptance of mobile health apps in an older population.

"Opportunities and Challenges of Bodily Interaction for Geometry Learning to Inform Technology Design" by S. Price and S. Duffy. Multimodal Technologies and Interaction, 2018-07-09. doi:10.3390/MTI2030041

An increasing body of work provides evidence of the importance of bodily experience for cognition and the learning of mathematics. Sensor-based technologies have potential for guiding sensorimotor engagement with challenging mathematical ideas in new ways. Yet designing environments that promote sensorimotor interaction that effectively supports the salient foundations of mathematical concepts is challenging, and it requires an understanding of the opportunities and challenges that bodily interaction offers. This study aimed to better understand how young children can, and do, use their bodies to explore the geometrical concepts of angle and shape, and what contribution different sensorimotor experiences make to the comprehension of mathematical ideas. Twenty-nine students aged 6–10 years participated in an exploratory study, with paired and group activities designed to elicit intuitive bodily enactment of angles and shape. Our analysis of moment-by-moment bodily interactions attended to gesture, action, facial expression, body posture, and talk; it illustrated the ‘realms of possibilities’ of bodily interaction and highlighted challenges around ‘felt’ experience and egocentric vs. allocentric perception of the body during collaborative bodily enactment. These findings inform digital designs for sensory interaction that foreground salient geometric features and effectively support relevant forms of enactment, enhancing the learning experience by supporting challenging aspects of interaction and exploiting the opportunities of the body.

"Animal-to-Animal Data Sharing Mechanism for Wildlife Monitoring in Fukushima Exclusion Zone" by H. Kobayashi, Keijiro Nakagawa, K. Makiyama, Yuta Sasaki, Hiromi Kudo, Baburam Niraula, and K. Sezaki. Multimodal Technologies and Interaction, 2018-07-03. doi:10.3390/MTI2030040

We propose an animal-to-animal data sharing mechanism that employs wildlife-borne sensing devices to expand the size of monitoring areas where electricity, information, and road infrastructure are limited or nonexistent. With the proposed approach, monitoring information can be collected from remote areas in a safe and cost-effective manner. To substantially prolong the life of a sensor node, the proposed mechanism activates its communication capabilities only when a plurality of animals is present; otherwise, the node remains in a sleep state. This study pursued three objectives: first, to obtain knowledge from actual field operations within the Fukushima exclusion zone; second, to objectively evaluate the power supply and operational base required to properly assess the proposed mechanism; and third, to acquire data that supports wildlife research, the aim of both our present and future work.

"Exploring Emergent Features of Student Interaction within an Embodied Science Learning Simulation" by Jina Kang, Robb Lindgren, and James Planey. Multimodal Technologies and Interaction, 2018-07-02. doi:10.3390/MTI2030039

Theories of embodied cognition argue that human processes of thinking and reasoning are deeply connected with the actions and perceptions of the body. Recent research suggests that these theories can be successfully applied to the design of learning environments, and new technologies enable multimodal platforms that respond to students’ natural physical activity, such as their gestures. This study examines how students engaged with an embodied mixed-reality science learning simulation that uses advanced gesture recognition techniques to support full-body interaction. The simulation environment acts as a communication platform for students to articulate their understanding of non-linear growth within different science contexts. In particular, this study investigates the multimodal interaction metrics generated as students attempted to make sense of cross-cutting science concepts using a personalized gesture scheme. Starting with video recordings of students’ full-body gestures, we examined the relationship between these embodied expressions and students’ subsequent success in reasoning about non-linear growth. We report the patterns we identified and explicate our findings by detailing a few insightful cases of student interactions. Implications are discussed for the design of multimodal interaction technologies and for the metrics used to investigate the different ways students interact while learning.

"A Predictive Fingerstroke-Level Model for Smartwatch Interaction" by Shiroq Al-Megren. Multimodal Technologies and Interaction, 2018-07-02. doi:10.3390/MTI2030038

The keystroke-level model (KLM) is commonly used to predict the time it will take an expert user to accomplish a task without errors when using an interactive system. The KLM was initially intended to predict interactions in conventional set-ups, i.e., mouse and keyboard interactions. However, it has since been adapted to predict interactions with smartphones, in-vehicle information systems, and natural user interfaces. The simplicity of the KLM and its extensions, along with their resource- and time-saving capabilities, has driven their adoption. In recent years, the popularity of smartwatches has grown, introducing new design challenges due to the small touch screens and bimanual interactions involved, which make current extensions to the KLM unsuitable for modelling smartwatches. It is therefore necessary to study these interfaces and interactions directly. This paper reports on three studies performed to adapt the original KLM and its extensions to smartwatch interaction. First, an observational study was conducted to characterise smartwatch interactions. Second, the unit times for the observed interactions were derived through another study, in which the times required to perform the relevant physical actions were measured. Finally, a third study was carried out to validate the model for interactions with the Apple Watch and Samsung Gear S3. The results show that the new model can accurately predict the performance of smartwatch users with a percentage error of 12.07%, which falls below the ~21% error considered acceptable for the original KLM.

{"title":"Documenting the Elusive and Ephemeral in Embodied Design Ideation Activities","authors":"Laia Turmo Vidal, Elena Márquez Segura","doi":"10.3390/mti2030035","DOIUrl":"https://doi.org/10.3390/mti2030035","url":null,"abstract":"","PeriodicalId":52297,"journal":{"name":"Multimodal Technologies and Interaction","volume":null,"pages":null},"PeriodicalIF":2.5,"publicationDate":"2018-06-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85032392","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}