Robot ergonomics: A case study of chair design for Roomba
Pub Date: 2015-11-23 | DOI: 10.1109/ROMAN.2015.7333566
Ning Tan, R. E. Mohan, Yoke Ying Wong, R. Sosa
Ergonomics is the study of designing products, systems, and processes that better fit the humans who use them. Extending this concept to the field of robotics, we propose robot ergonomics, a transdisciplinary approach that brings together roboticists, product designers, and architects to address unresolved research problems and technology bottlenecks in the robotics community by designing products for robots. This paper focuses on a case study of chair design for Roomba. Seven design criteria are proposed, and extensive experiments with 22 chairs are performed to validate them. Based on this empirical design strategy, three generic principles of chair design for Roomba are extracted: observability, accessibility, and safety.
{"title":"Robot ergonomics: A case study of chair design for Roomba","authors":"Ning Tan, R. E. Mohan, Yoke Ying Wong, R. Sosa","doi":"10.1109/ROMAN.2015.7333566","DOIUrl":"https://doi.org/10.1109/ROMAN.2015.7333566","url":null,"abstract":"Ergonomics is the study of designing more human-friendly products, systems or processes for human. By extending this concept to robotics field, we propose robot ergonomics which is a transdisciplinary approach that brings together roboticists, product designers, and architects to solve numerous unsettled research problems or technology bottlenecks in robotics community through designing products for robots. This paper focuses on a case study of chair design for Roomba. Seven design criteria are proposed and intensive experiments are performed to validate the criteria using 22 chairs. Based on such empirical design strategy, three generic principles (i.e., observability, accessibility, and safety) of chair design are extracted for Roomba.","PeriodicalId":119467,"journal":{"name":"2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134365433","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A novel vision-based human-machine interface for a robotic walker framework
Pub Date: 2015-11-23 | DOI: 10.1109/ROMAN.2015.7333590
J. Paulo, P. Peixoto, U. Nunes
This paper presents an innovative Human-Machine Interface (HMI) for a robotic walker. Robotic walkers offer their users an aid for sustaining mobility and the potential to rehabilitate their lower limbs. Mobility is a crucial human function, and these aids are paramount for improving the independence and quality of life of people with mobility impairments. However, factors such as cost and safety often make them inaccessible to the public or lead to their being discarded for lack of confidence in their operation. The approach adopted in this work combines an intuitive human-machine interface with innovative safety measures, made possible by the resourceful use of the low-cost Leap Motion sensor. Experimental evaluation was divided into two stages. First, the system was tested in a simulated environment to validate the HMI's principle of operation. In the second stage, the proposed HMI was tested on board a robotic platform to evaluate its performance in a real-world scenario. Experiments with healthy volunteers revealed intuitive user interaction and accurate determination of user intention.
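As a purely illustrative sketch of how a tracked hand pose could be turned into walker motion commands (this particular mapping, its gains, and its thresholds are assumptions and are not the interface described in the paper), one might write something like the following in Python:

    # Hypothetical mapping from a tracked hand position to walker velocity commands:
    # forward/backward hand displacement sets linear speed, lateral displacement sets
    # turning rate, and losing the hand triggers a safety stop. All values are invented.
    def hand_to_command(hand, neutral=(0.0, 0.0), k_v=1.2, k_w=2.0,
                        deadzone=0.03, v_max=0.6, w_max=0.8):
        if hand is None:                      # no hand detected: safety stop
            return 0.0, 0.0
        dx = hand[0] - neutral[0]             # forward/backward displacement (m)
        dy = hand[1] - neutral[1]             # lateral displacement (m)
        v = k_v * dx if abs(dx) > deadzone else 0.0
        w = k_w * dy if abs(dy) > deadzone else 0.0
        clamp = lambda val, lim: max(-lim, min(lim, val))
        return clamp(v, v_max), clamp(w, w_max)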
{"title":"A novel vision-based human-machine interface for a robotic walker framework","authors":"J. Paulo, P. Peixoto, U. Nunes","doi":"10.1109/ROMAN.2015.7333590","DOIUrl":"https://doi.org/10.1109/ROMAN.2015.7333590","url":null,"abstract":"This paper presents an innovative Human-Machine Interface (HMI) for a robotic walker. Robotic walkers offer their users an aid for sustaining mobility and the potential to rehabilitate their lower limbs. Mobility is a crucial function for a human being and these aids are paramount for improving the independence and quality of life of those who suffer from some form of mobility impairment. However, important factors like cost and safety make them either frequently inaccessible to the public or discarded due to a lack of confidence in their operation. The approach adopted in this work offers an intuitive human-machine interface combined with innovative safety measures. This result was possible due to the resourceful use of the low-cost Leap Motion sensor. Experimental evaluation was divided into two stages. First the system was tested using a simulated environment which served to validate the principle of operation of the HMI. In the second stage the proposed HMI was tested on board a robotic platform to evaluate its performance in a real-world scenario. Experiments performed with healthy volunteers revealed an intuitive user interaction and accurate user intention determination.","PeriodicalId":119467,"journal":{"name":"2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"57 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132844291","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Robots need humans in the loop to improve the hopefulness of disaster survivors
Pub Date: 2015-11-23 | DOI: 10.1109/ROMAN.2015.7333696
Lorin Dole, D. Sirkin, R. Murphy, C. Nass
In this exploratory study, participants taking cover from a simulated earthquake interacted with a search-and-rescue robot that assumed one of four different identities, distinguished by varying degrees of autonomy, and whose communications were either clear or noisy. Results showed that identities with low autonomy elicited greater hopefulness from participants than identities with high autonomy. Discussion focuses on design recommendations for search-and-rescue robots, and on the design of immersive HRI experiments.
{"title":"Robots need humans in the loop to improve the hopefulness of disaster survivors","authors":"Lorin Dole, D. Sirkin, R. Murphy, C. Nass","doi":"10.1109/ROMAN.2015.7333696","DOIUrl":"https://doi.org/10.1109/ROMAN.2015.7333696","url":null,"abstract":"In this exploratory study, participants taking cover from a simulated earthquake interacted with a search-and-rescue robot that assumed one of four different identities, distinguished by varying degrees of autonomy, and whose communications were either clear or noisy. Results showed that identities with low autonomy elicited greater hopefulness from participants than identities with high autonomy. Discussion focuses on design recommendations for search-and-rescue robots, and on the design of immersive HRI experiments.","PeriodicalId":119467,"journal":{"name":"2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132282990","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A survey report on information costs in introducing technology to care services for older adults
Pub Date: 2015-11-23 | DOI: 10.1109/ROMAN.2015.7333664
I. Kajitani, T. Sakaguchi, Y. Matsumoto, T. Tanikawa, T. Kotoku
This paper reports the results of a survey on introducing robots and other new technologies to care services, such as care for older adults. Demographic data show that the proportion of older adults in Japan already exceeds 23.3%. Japan is also facing a severe shortage of working-age people, which could reduce the quality of care services for older adults; robots and new forms of information technology are therefore expected to be utilized in such services. The purpose of this study is to help facilitate the introduction of novel technologies into care services, such as long-term care facilities. We focus on information-related costs incurred prior to purchasing a technology, such as the costs of attending exhibitions and conducting Internet searches. This report presents an investigation of these information-related costs for three different products and discusses the importance of managing information costs appropriately from the development phase through to the introduction phase of such products.
{"title":"A survey report on information costs in introducing technology to care services for older adults","authors":"I. Kajitani, T. Sakaguchi, Y. Matsumoto, T. Tanikawa, T. Kotoku","doi":"10.1109/ROMAN.2015.7333664","DOIUrl":"https://doi.org/10.1109/ROMAN.2015.7333664","url":null,"abstract":"This paper reports the results of a survey on introducing robots and other new technologies to care services, such as care for older adults. Demographic data show that the ratio of older adults in Japan has already exceeded 23.3%. Japan also is facing a severe shortage of the working-age population, which could lead to a reduction in the quality of care services for older adults; therefore, robots and the provision of new forms of information technologies are expected to be utilized in such care services. The purpose of this study is to help facilitate the introduction of novel technologies in care services, such as in long-term care facilities. We focus on information-related costs that are incurred prior to the purchase of technologies such as attending exhibitions and conducting Internet searches. This report shows the results of an investigation into such information-related costs for three different products, and discusses the importance of the appropriate management of information costs from the development phase through to the introduction phase of such products.","PeriodicalId":119467,"journal":{"name":"2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115055222","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Towards morally sensitive action selection for autonomous social robots
Pub Date: 2015-11-23 | DOI: 10.1109/ROMAN.2015.7333661
Matthias Scheutz, B. Malle, Gordon Briggs
Autonomous social robots embedded in human societies have to be sensitive to human social interactions and thus to the moral norms and principles guiding those interactions. Actions that violate norms can lead to the violator being blamed. Robots therefore need to anticipate possible norm violations and attempt to prevent them as they execute actions. If a norm violation cannot be prevented (e.g., in a moral dilemma in which every action violates some norm), the robot needs to be able to justify its action to address any potential blame. In this paper, we present a first attempt at an action execution system for social robots that can (a) detect (some) norm violations, (b) consult an ethical reasoner for guidance in moral dilemma situations, and (c) keep track of execution traces and any resulting states that may have violated norms in order to produce justifications.
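For a concrete picture of such a pipeline, the following minimal Python sketch mirrors steps (a)-(c) under invented data structures; the Norm, ethical_reasoner, and ExecutionTrace names are hypothetical and are not taken from the authors' system:

    from dataclasses import dataclass, field

    @dataclass
    class Norm:
        name: str
        violated_by: callable          # predicate over a state; True means violated

    @dataclass
    class ExecutionTrace:
        steps: list = field(default_factory=list)   # (action name, state, violations)

        def justify(self):
            # (c) produce a justification from the recorded trace
            return [f"{a}: violated {[n.name for n in v]}" for a, s, v in self.steps if v]

    def check_violations(state, norms):
        # (a) detect which norms the current state violates
        return [n for n in norms if n.violated_by(state)]

    def ethical_reasoner(options, norms, state):
        # (b) in a dilemma, pick the action whose predicted outcome violates the
        # fewest norms (a crude stand-in for real moral reasoning)
        return min(options, key=lambda act: len(check_violations(act(state), norms)))

    def execute(plan_options, norms, state):
        trace = ExecutionTrace()
        for options in plan_options:       # each step offers alternative actions
            action = ethical_reasoner(options, norms, state)
            state = action(state)
            trace.steps.append((action.__name__, state, check_violations(state, norms)))
        return state, trace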
{"title":"Towards morally sensitive action selection for autonomous social robots","authors":"Matthias Scheutz, B. Malle, Gordon Briggs","doi":"10.1109/ROMAN.2015.7333661","DOIUrl":"https://doi.org/10.1109/ROMAN.2015.7333661","url":null,"abstract":"Autonomous social robots embedded in human societies have to be sensitive to human social interactions and thus to moral norms and principles guiding these interactions. Actions that violate norms can lead to the violator being blamed. Robots thus need to be able to anticipate possible norm violations and attempt to prevent them while they execute actions. If norm violations cannot be prevented (e.g., in a moral dilemma situation in which every action leads to a norm violation), then the robot needs to be able to justify the action to address any potential blame. In this paper, we present a first attempt at an action execution system for social robots that can (a) detect (some) norm violations, (b) consult an ethical reasoner for guidance on what to do in moral dilemma situations, and (c) it can keep track of execution traces and any resulting states that might have violated norms in order to produce justifications.","PeriodicalId":119467,"journal":{"name":"2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"54 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116012610","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Reinforcement learning of shared control for dexterous telemanipulation: Application to a page turning skill
Pub Date: 2015-11-23 | DOI: 10.1109/ROMAN.2015.7333587
Takamitsu Matsubara, Takahiro Hasegawa, Kenji Sugimoto
The ultimate goal of this study is to develop a method that enables dexterous manipulation of various non-rigid objects by a robotic hand. In this paper, we propose a novel model-free approach that uses reinforcement learning to learn a shared control policy for dexterous telemanipulation by a human operator. A shared control policy is a probabilistic mapping from the human operator's (master) action and complementary sensor data to the robot (slave) control input for the robot actuators. Through the learning process, our method optimizes the shared control policy so that it cooperates with the operator's policy and compensates for the operator's lack of sensory information using complementary sensor data, thereby enhancing dexterity. To validate our method, we adopted a page-turning task performed by telemanipulation and developed an experimental platform with a paper page model and a robot fingertip in simulation. Because the human operator cannot perceive the robot's tactile information, the task may not be as easy as when humans perform it directly. Experimental results suggest that our method can learn task-relevant shared control for flexible and enhanced dexterous manipulation by a teleoperated robotic fingertip without tactile feedback to the operator.
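As a rough illustration of what a probabilistic shared control policy of this kind could look like, the sketch below uses a linear-Gaussian policy with a REINFORCE-style update; this is only one possible instantiation, not necessarily the authors' formulation, and the dimensions and reward are assumptions:

    import numpy as np

    class SharedControlPolicy:
        # Slave command drawn from N(W @ x, sigma^2), where x stacks the master's
        # action and complementary sensor readings (e.g., simulated tactile data).
        def __init__(self, master_dim, sensor_dim, slave_dim, sigma=0.1, lr=1e-3):
            self.W = np.zeros((slave_dim, master_dim + sensor_dim))
            self.sigma, self.lr = sigma, lr

        def act(self, master_action, sensor_data):
            x = np.concatenate([master_action, sensor_data])
            mean = self.W @ x
            u = mean + self.sigma * np.random.randn(*mean.shape)  # stochastic slave command
            return u, x, mean

        def update(self, episode, total_reward):
            # REINFORCE: grad of log N(u; Wx, sigma^2) w.r.t. W is ((u - Wx)/sigma^2) x^T
            for u, x, mean in episode:
                self.W += self.lr * total_reward * np.outer((u - mean) / self.sigma**2, x)

Because the complementary sensor data enters the policy input, a policy of this shape can in principle exploit information the operator cannot perceive, which is the role the abstract assigns to the learned shared control.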
{"title":"Reinforcement learning of shared control for dexterous telemanipulation: Application to a page turning skill","authors":"Takamitsu Matsubara, Takahiro Hasegawa, Kenji Sugimoto","doi":"10.1109/ROMAN.2015.7333587","DOIUrl":"https://doi.org/10.1109/ROMAN.2015.7333587","url":null,"abstract":"The ultimate goal of this study is to develop a method that can accomplish dexterous manipulation of various non-rigid objects by a robotic hand. In this paper, we propose a novel model-free approach using reinforcement learning to learn a shared control policy for dexterous telemanipulation by a human operator. A shared control policy is a probabilistic mapping from the human operator's (master) action and complementary sensor data to the robot (slave) control input for robot actuators. Through the learning process, our method can optimize the shared control policy so that it cooperates to the operator's policy and compensates the lack of sensory information of the operator using complementary sensor data to enhance the dexterity. To validate our method, we adopted a page turning task by telemanipulation and developed an experimental platform with a paper page model and a robot fingertip in simulation. Since the human operator cannot perceive the tactile information of the robot, it may not be as easy as humans do directly. Experimental results suggest that our method is able to learn task-relevant shared control for flexible and enhanced dexterous manipulation by a teleoperated robotic fingertip without tactile feedback to the operator.","PeriodicalId":119467,"journal":{"name":"2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115715193","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
KiroPi: A life-log robot by installing embodied hardware on a tablet
Pub Date: 2015-11-23 | DOI: 10.1109/ROMAN.2015.7333616
Michiya Yamamoto, Saizo Aoyagi, Satoshi Fukumori, Tomio Watanabe
The issue of the coexistence of people and robots has been gaining increasing attention with the advancement of robotic technology in recent years. In our previous study, we proposed a personal robot, kiroPi, that can wear equipment to fit various conditions and support group communication. In this paper, we detail the development of a prototype life-log function for kiroPi, realized by installing embodied hardware on a tablet. Through recording and playback experiments, we confirmed that kiroPi can provide lively communication and a sense of unity among users. The participants in our experiments found kiroPi to be effective for life-log recording as well as for characterized playback.
{"title":"KiroPi: A life-log robot by installing embodied hardware on a tablet","authors":"Michiya Yamamoto, Saizo Aoyagi, Satoshi Fukumori, Tomio Watanabe","doi":"10.1109/ROMAN.2015.7333616","DOIUrl":"https://doi.org/10.1109/ROMAN.2015.7333616","url":null,"abstract":"The issue of the coexistence of people and robots has been gaining increasing attention with the advancements in robotic technology in recent years. In our previous study, we proposed a personal robot - kiroPi - that can wear equipment to fit various conditions and support group communication. In this paper, we detail the development of a prototype of a life-log function for kiroPi by installing embodied hardware on a tablet. Through recording and playback experiments, we confirmed that kiroPi can provide lively communication and a sense of unity among users. The participants of our experiments found kiroPi to be effective in recording life-log as well as characterized playback.","PeriodicalId":119467,"journal":{"name":"2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"77 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117233711","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Social acceptance of a childcare support robot system
Pub Date: 2015-11-23 | DOI: 10.1109/ROMAN.2015.7333658
M. Shiomi, N. Hagita
This paper investigates people's social acceptance of a childcare support robot system and compares their attitudes toward it with their attitudes toward two existing childcare technologies: anesthesia during labor and baby food (formula milk and processed foods, including powdered milk and instant food for babies and toddlers). To investigate social acceptance, we developed scales covering three points of view: safety and trustworthiness, diligence, and workload reduction. Our participants comprised 412 people recruited through a web-based survey and 14 people who experienced the prototype of our childcare support robot system. They answered questionnaires based on the three developed scales and an intention-to-use scale to assess their acceptance of childcare support technologies. The web-based survey results indicate that our system's concept was evaluated lower than current childcare support technologies, but people who experienced the system prototype evaluated it more highly than those who only completed the web-based survey.
{"title":"Social acceptance of a childcare support robot system","authors":"M. Shiomi, N. Hagita","doi":"10.1109/ROMAN.2015.7333658","DOIUrl":"https://doi.org/10.1109/ROMAN.2015.7333658","url":null,"abstract":"This paper investigates people's social acceptance of a childcare support robot system and compares their attitudes to two childcare technologies: anesthesia during labor and baby food (processed food and formula milk), which includes powdered milk and instant food for babies and toddlers. To investigate their social acceptance, we developed scales from three points of view: safety and trustworthy, diligence, and decreasing workload. For this paper, our participants were comprised of 412 people located through a web-based survey and 14 people who experienced the prototype of our childcare support robot system. They answered questionnaires about our three developed scales and an intention to use scale to investigate their social acceptance toward childcare support technologies. The web-based survey results indicate that our system's concept was evaluated lower than current childcare support technologies, but people who experienced our system prototype evaluated it higher than those who filled out web-based surveys.","PeriodicalId":119467,"journal":{"name":"2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122074908","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Development of a mobile robot moving on a handrail — Control for preceding a person keeping a distance
Pub Date: 2015-11-23 | DOI: 10.1109/ROMAN.2015.7333579
Yuma Fujiwara, Y. Hiroi, Yuki Tanaka, A. Ito
In this paper, we propose a unique robot that moves on a handrail installed along the corridors of buildings such as hospitals. The robot guides visitors through the building. Because it moves on the handrail along the corridor, it naturally follows the corridor without interfering with other people, and it requires less effort to estimate its position than human-sized mobile robots. We propose a method to control this robot so that it stays a few steps ahead of the person being guided. We examine two methods: one keeps a fixed distance from the person, and the other keeps the robot moving slowly even when it is far from the person. Experiments showed that the latter method was more efficient than the former.
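A minimal sketch of the two speed-control strategies is given below; the gains, nominal speed, and target gap are invented values, not the controller parameters used in the paper:

    def clip(v, lo, hi):
        return max(lo, min(hi, v))

    def fixed_distance_speed(d, target=1.5, v_walk=0.6, k=0.8, v_max=1.0):
        # Method 1: regulate the robot-person gap d around a fixed target.
        # When the person falls far behind (d >> target) the command drops to zero.
        return clip(v_walk + k * (target - d), 0.0, v_max)

    def keep_moving_speed(d, target=1.5, v_walk=0.6, k=0.8, v_max=1.0, v_min=0.15):
        # Method 2: same regulation, but never slower than a small creep speed,
        # so the robot keeps advancing along the handrail even when the gap grows.
        return max(v_min, fixed_distance_speed(d, target, v_walk, k, v_max))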
{"title":"Development of a mobile robot moving on a handrail —Control for preceding a person keeping a distance","authors":"Yuma Fujiwara, Y. Hiroi, Yuki Tanaka, A. Ito","doi":"10.1109/ROMAN.2015.7333579","DOIUrl":"https://doi.org/10.1109/ROMAN.2015.7333579","url":null,"abstract":"In this paper, we propose a unique robot that moves on a handrail installed at corridors of buildings like hospitals. The robot guides the guests of the building. Because it moves on the handrail along the corridor, it naturally moves along the corridor without conflicting other people and needs less cost for estimating its position compared with human-sized mobile robots. We propose a method to control this robot so that it precedes a few steps before the person to be guided. We examine two methods: one is a method to keep a fixed distance from the person, and the other one is a method where the robot keeps moving slowly even when the robot and the person is distant. From the experiment, the latter method was more efficient than the former method.","PeriodicalId":119467,"journal":{"name":"2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127533873","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Tap model that considers key arrangement to improve input accuracy of touch panels
Pub Date: 2015-11-23 | DOI: 10.1109/ROMAN.2015.7333629
Takahisa Tani, S. Yamada
The use of mobile devices with touch-panel interfaces, such as smartphones and tablet PCs, has spread in recent years, and these devices have many advantages. For example, they can be operated more intuitively than devices with conventional physical buttons, and they are much more flexible than devices with traditional fixed UIs. However, input mistakes occur frequently on touch panels because the buttons have no physical boundaries and users get no tactile feedback through their fingers, so the input accuracy of touch-panel devices is lower than that of devices with physical buttons. There have been studies on improving input accuracy; most of them use language models for typing natural language or probabilistic models that describe the errors made when users tap the panel with their fingers. However, these models are not practical because they deal only with kinematic errors, not cognitive errors. We therefore propose a more practical tap model for improving input accuracy, one that includes cognitive errors in order to avoid taps on objects neighboring the target object. We consider that our model can capture properties that are important for designing various UIs for practical applications. We also conducted experiments to build the model in a calibrated way, and we discuss its evaluation and revision.
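One plausible way to express a tap model of this kind, shown here only as an assumption-laden sketch rather than the authors' actual formulation, is a per-key 2D Gaussian over tap positions (kinematic error) with a small probability mass shifted to neighboring keys (cognitive error):

    import math

    def gauss2d(x, y, cx, cy, sx, sy):
        # density of an axis-aligned 2D Gaussian centered on a key at (cx, cy)
        return math.exp(-((x - cx) ** 2 / (2 * sx ** 2) +
                          (y - cy) ** 2 / (2 * sy ** 2))) / (2 * math.pi * sx * sy)

    def tap_likelihood(tap, key, keys, neighbours, sx=4.0, sy=5.0, p_cog=0.05):
        # P(tap position | intended key) = (1 - p_cog) * Gaussian around the key centre
        #                                + p_cog spread over its neighbouring keys.
        x, y = tap
        cx, cy = keys[key]
        like = (1 - p_cog) * gauss2d(x, y, cx, cy, sx, sy)
        for nb in neighbours.get(key, []):
            nx, ny = keys[nb]
            like += p_cog / max(1, len(neighbours[key])) * gauss2d(x, y, nx, ny, sx, sy)
        return like

    def decode(tap, keys, neighbours):
        # pick the intended key with maximum likelihood (uniform prior over keys)
        return max(keys, key=lambda k: tap_likelihood(tap, k, keys, neighbours))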
{"title":"Tap model that considers key arrangement to improve input accuracy of touch panels","authors":"Takahisa Tani, S. Yamada","doi":"10.1109/ROMAN.2015.7333629","DOIUrl":"https://doi.org/10.1109/ROMAN.2015.7333629","url":null,"abstract":"The use of mobile devices that utilize touch panels as interfaces, such as smartphones and tablet PCs, has spread in recent years, and these have many advantages. For example, panels can be operated more intuitively than those with conventional physical buttons, and the devices are much more flexible than those that use traditional fixed UIs. However, mistakes frequently occur when inputting with a touch panel because the buttons have no physical boundaries and users cannot get tactile feedback from their fingers. Thus, the input accuracy of touch-panel devices is lower than that of devices with physical buttons. There have been studies on improving input accuracy. Most of them have used language models for typing natural language or probabilistic models to describe the errors made when users tap panels with their fingers. However, these models are not practical because they deal with kinematic errors, not cognitive errors. Thus, we propose a more practical model for improving input accuracy in this paper, in which the tap model includes cognitive errors to avoid tapping neighboring objects to a target object. We consider that our model can describe important properties for designing various UIs depending on practical applications. We also conducted experiments to build our model in a calibrated way and discussed our evaluation of the model and revision of the model.","PeriodicalId":119467,"journal":{"name":"2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127251840","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}