Title: Vision Extension for a Ball Camera by Using Image Completion
Authors: Tsubasa Kitayama, Shio Miyafuji, H. Koike
DOI: https://doi.org/10.1145/3384657.3384802
Abstract: A ball camera is a high-tech ball equipped with cameras that obtains images from the ball's point of view. To capture 360° images with a minimum number of cameras, we embedded two ultra-wide-angle cameras in the ball. Because the two cameras are mounted diametrically opposite each other, a blind spot remains that neither camera can capture, so the obtained 360° image has a discontinuous part. In this paper, we describe the design and implementation of the ball camera and propose a method for filling the blind spot using image completion. We applied our method to videos captured by the ball camera and show that it successfully completes the blind spot and provides more natural images.

Title: Design of Altered Cognition with Reshaped Bodies
Authors: K. Shirota, Makoto Uju, Yurike Chandra, Elaine Czech, R. Peiris, K. Minamizawa
DOI: https://doi.org/10.1145/3384657.3384773
Abstract: In this research, we aimed to expand human physical ability by reshaping body parts that naturally allow transformation. By modifying existing body parts, we propose a system that changes humans' cognition of reality and of their environment, and we present its use case. We conducted a study exploring two actions: (1) opening and closing the pinnas (outer ears) and (2) pulling the nose, to alter the body schema and the cognition of the environment. The results showed that people could perceive changes in the position of the pinnas across three conditions (100% open, 50% open, 0% open) with high accuracy. We also observed that opening and closing the pinnas could alter the perceived sound field compared with unmodified ears. Furthermore, the feeling that the nose was extending was stronger when using the system. This finding implies that the modified body schema improved recognition of an odor source and its position.

Title: PDMSkin
Authors: Tobias Röddiger, Michael Beigl, Daniel Wolffram, M. Budde, Hongye Sun
DOI: https://doi.org/10.1145/3384657.3384789
Abstract: Innovative enabling technologies are key drivers of human augmentation. In this paper, we explore a new, conductive, and configurable material made from polydimethylsiloxane (PDMS) that is capillary-doped with silver (Ag) particles using an immiscible secondary fluid to build ultra-stretchable, soft electronics. Bonding silver particles directly with PDMS enables inherently stretchable Ag-PDMS circuits. Compared to previous work, the reduced silver consumption brings significant advantages, e.g., better stretchability and lower cost. The secondary fluid ensures self-assembling conductive networks. Sensors are 3D-printed ultra-thin (<100 μm) onto a pure PDMS substrate in one step and require only a PDMS cover layer. They exhibit nearly stable electrical properties even under intense stretching of >200%, so printed circuits can attach tightly to the body. Owing to its biocompatibility, devices could also be implanted (e.g., for treating open wounds). We present a proof-of-concept on-skin interface that uses the new material to provide six distinct input gestures. Our quantitative evaluation with ten participants shows that we can successfully classify the gestures with a low-spatial-resolution circuit: with little training data and a gradient boosting classifier, we achieve 83% overall accuracy. Our qualitative material study with twelve participants shows that usability and comfort are perceived well; however, the smooth but conformable surface does not feel tissue-equivalent. In future work, the new material will likely serve to build robust, skin-like electronics.

Title: Archery shots visualization by clustering and comparing from angular velocities of bows
Authors: Midori Kawaguchi, Hironori Mitake, S. Hasegawa
DOI: https://doi.org/10.1145/3384657.3384782
Abstract: In individual sports built on repetitive movements, athletes must improve the reproducibility of their motion by recognizing and correcting moment-to-moment changes. Because subjective impressions alone rarely provide sufficient awareness, a mechanism for objectively reviewing the movement is required. In this paper, we propose a system that makes it easy to search for differences among multiple archery shots by the same person. The system uses Dynamic Time Warping (DTW) to determine the similarity of a competitor's shots from time-series data of an angular-velocity sensor attached to the bow, and then performs k-means clustering based on the similarity distances. In addition, the video segments corresponding to the times at which differences occur are cut out from footage recorded simultaneously with the sensor data, and two clips are superimposed to visualize the difference. When the system was tested with five intermediate- and advanced-level archers, it detected differences lasting approximately 0.5 seconds, such as minor shaking and changes in posture or motion speed. Advanced archers can find such differences by repeatedly and carefully comparing videos, but they are difficult for intermediate archers to identify. Feedback from interviews with an instructor suggested that the detected differences are meaningful for identifying points to improve archery skill.

Title: SpotlessMind
Authors: Passant Elagroudy, Xiyue Wang, Evgeny Stemasov, Teresa Hirzle, Svetlana Shishkovets, Siddharth Mehrotra, A. Schmidt
DOI: https://doi.org/10.1145/3384657.3384800
Abstract: Mutual understanding via sharing and interpreting inner states is socially rewarding. Prior research shows that people find brain-computer interfaces (BCIs) a suitable tool for implicitly communicating their cognitive states. In this paper, we conduct an online survey (N=43) to identify design parameters for systems that implicitly share cognitive states. To elicit user responses, we designed a research probe called "SpotlessMind" that artistically shares one person's brain occupancy with another while considering the bystanders' experience. Our results show that 98% of respondents would like to see the installation. People would use it as a gesture of openness and as a communication mediator. Abstracting the visual, auditory, and somatosensory depictions is a good trade-off between understandability and protecting users' privacy. Our work supports designing engaging prototypes that promote empathy, cognitive awareness, and convergence between individuals.

Title: HapticPointer
Authors: Akira Matsuda, K. Nozawa, Kazuki Takata, Atsushi Izumihara, J. Rekimoto
DOI: https://doi.org/10.1145/3384657.3384777
Abstract: We designed a necklace-style device named HapticPointer that presents directions as pointing cues in remote collaboration tasks. The device has 16 vibration motors placed along a flexible string. Our vibration algorithm represents horizontal and vertical directions by changing the position and intensity of each vibration. In our experiment, participants attempted to find a specific target; the success rate reached 90.65%, and participants found the targets in 6 seconds on average. Furthermore, our user study suggests that the device can simulate the sensation of walking together, which we assume improves engagement between the local and remote users.

Title: Augmented Workplace: Human-Sensor Interaction for Improving the Work Environment
Authors: Y. Arakawa
DOI: https://doi.org/10.1145/3384657.3385334
Abstract: In this paper, we propose and implement an augmented workplace in which humans and deployed sensors interact. In our experiment, a CO2 sensor asks humans to open the window when the CO2 level exceeds a threshold; we found that people obeyed the sensor's message almost every time.

Title: DehazeGlasses
Authors: Yuichi Hiroi, Takumi Kaminokado, Atsushi Mori, Yuta Itoh
DOI: https://doi.org/10.1145/3384657.3384781
Abstract: We present DehazeGlasses, a see-through visual haze removal system that optically dehazes the user's field of vision. Human vision suffers from a degraded view due to aspects of the scene environment, such as haze. Such degradation may interfere with our behavior or judgement in daily tasks. We focus on hazy scenes as one common degradation source, which whiten the view under certain atmospheric conditions. Unlike typical computer vision systems that process recorded images, we aim to realize a see-through glasses system that can optically manipulate our field of view to dehaze the perceived scene. Our system selectively modulates the intensity of the light entering the eyes via an occlusion-capable optical see-through head-mounted display (OST-HMD). We built a proof-of-concept system that combines a digital micromirror device (DMD) and an OST-HMD to evaluate the feasibility of our haze removal method, and tested it with a user-perspective viewpoint camera. A quantitative evaluation with 80 scenes from a haze removal dataset shows that our system realizes a dehazed view that is significantly closer to the ground-truth scene than the native view under a perceptual image similarity metric. This evaluation shows that our system achieves perceptually natural haze removal while maintaining the see-through view of actual scenes.

Title: Remote Treatment System of Phantom Limb Pain by Displaying Body Movement in Shared VR Space
Authors: Kenta Saito, Atsushi Okada, Yu Matsumura, J. Rekimoto
DOI: https://doi.org/10.1145/3384657.3384795
Abstract: The phenomenon in which amputees feel pain at the position of a lost limb is called phantom limb pain. Although its cause has not yet been medically elucidated, several hypotheses have been established in past studies, and some treatment systems for phantom limb pain have been developed based on them. In these treatments, instruction from a therapist familiar with phantom limb pain is essential. However, such therapists are in short supply, and existing treatment systems require the therapist to be next to the patient. Remote treatment is expected to solve this problem. In this research, we therefore propose a remote treatment system for phantom limb pain in which a therapist can give instructions to a remote patient in a shared VR space by presenting not only voice but also body movements. We conducted a user study comparing the case where the therapist instructs the patient in the same place with the case where the therapist instructs the patient remotely using our system. The results suggest that our system improves the efficiency of conveying movements. This work may help alleviate the shortage of therapists for phantom limb pain.

Title: Conformal Wearable Devices for Expressive On-Skin Interaction
Authors: A. Nittala, Arshad Khan, Jürgen Steimle
DOI: https://doi.org/10.1145/3384657.3384776
Abstract: In this demonstration, we showcase our recent work on conformal wearable devices that enable expressive interaction on the skin. Our interactive exhibits demonstrate a variety of functionalities, including ultrathin wearable devices for high-resolution touch input, expressive interaction on body landmarks, visual output, and physiological sensing. Our work furthermore demonstrates fabrication techniques for customizing and personalizing epidermal devices, ultimately providing an intimate coupling with the human body.