Daiki Matsuda, K. Uemura, Nobuchika Sakata, S. Nishida
Due to the prevalence of cell phones, many people view information on small handheld LCD screens. However, these mobile devices occupy one of the user's hands, require close attention to a small display, and must be retrieved from a pocket or a bag. To overcome these problems, we focus on wearable projection systems that enable hands-free viewing via large projected screens, eliminating the need to retrieve and hold a device. In this paper, we present a toe input system that realizes haptic interaction, direct manipulation, and floor projection using a wearable projection system with a large projection surface. The system is composed of a mobile projector, a Kinect depth camera, and a gyro sensor, is attached to the user's chest, and can detect when the user's foot touches or rises from the floor. To evaluate the system, we conducted experiments investigating object selection by foot motion.
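As a rough illustration of the touch-detection step described above, the following Python sketch thresholds the toe's height above the floor (with hysteresis) to emit touch and lift events. The threshold values and the assumption that a per-frame toe height has already been extracted from the depth image are ours, not the paper's.

```python
# Hysteresis threshold on the toe's height above the floor plane.
# TOUCH_MM, LIFT_MM, and the per-frame height extraction are assumptions.

TOUCH_MM = 15.0   # toe counts as touching below this height (hypothetical)
LIFT_MM = 30.0    # toe counts as lifted again above this height

def detect_touch_events(toe_heights_mm):
    """Yield ('touch' | 'lift', frame_index) events from per-frame toe heights."""
    touching = False
    for i, h in enumerate(toe_heights_mm):
        if not touching and h < TOUCH_MM:
            touching = True
            yield ("touch", i)
        elif touching and h > LIFT_MM:
            touching = False
            yield ("lift", i)

# A toe dipping to the floor and lifting again:
heights = [120, 80, 40, 12, 10, 11, 35, 90]
print(list(detect_touch_events(heights)))   # [('touch', 3), ('lift', 6)]
```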
{"title":"Toe Input Using a Mobile Projector and Kinect Sensor","authors":"Daiki Matsuda, K. Uemura, Nobuchika Sakata, S. Nishida","doi":"10.1109/ISWC.2012.11","DOIUrl":"https://doi.org/10.1109/ISWC.2012.11","url":null,"abstract":"Due to the prevalence of cell phones many people view information on small handheld LCD screens. However, these mobile devices require the use of one hand, the user needs to keep a close watch on a small display, and they have to be retrieved from a pocket or a bag. To overcome these problems, we focus on wearable projection systems that enable hands-free viewing via large projected screens, eliminating the need to retrieve and hold devices. In this paper, we present a toe input system that can realize haptic interaction, direct manipulation, and floor projection using a wearable projection system with a large projection surface. It is composed of a mobile projector, Kinect depth camera, and a gyro sensor. It is attached to the user's chest and can detect when the users foot touches or rises from the floor. To evaluate the system we conducted experiments investigating object selection by foot motion.","PeriodicalId":190627,"journal":{"name":"2012 16th International Symposium on Wearable Computers","volume":"101 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122040546","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Wearable sensing platforms like modern smart phones have proven to be effective instruments for the complexity and computational social sciences. This paper draws on explicit (phone calls, SMS messaging) and implicit (proximity sensing based on Bluetooth radio signals) interaction patterns, collected via smart phones and reality-mining techniques, to explain the dynamics of personal interactions and relationships. We consider three real human-to-human interaction networks, namely physical proximity, phone communication, and instant messaging. We analyze a real undergraduate community's social circles and consider various topologies, such as the interaction patterns of users with the entire community and the interaction patterns of users within their own community. We fit distributions to various interactions, showing, for example, that the distribution of users who have been in physical proximity but have never communicated by phone fits a Gaussian. Finally, we consider five types of relationships, such as friendship, to see whether significant differences exist in their interaction patterns. We find statistically significant differences in the physical proximity patterns of people who are mutual friends and people who are non-mutual (or asymmetric) friends, though this difference exists neither between mutual friends and never-friends nor in their phone communication patterns. Our findings impact a wide range of data-driven applications in socio-technical systems by providing an overview of community interaction patterns that can be used in applications such as epidemiology or in understanding the diffusion of opinions and relationships.
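To make two of the analysis steps concrete, here is a minimal Python sketch: fitting a Gaussian to an interaction distribution and testing two relationship groups for a difference in proximity counts. The data values and the choice of a Mann-Whitney U test are illustrative assumptions, not the paper's exact procedure.

```python
# Hedged sketch (not the authors' pipeline): fit a Gaussian to an interaction
# distribution and test two relationship groups for a difference. Data invented.
import numpy as np
from scipy import stats

# e.g., per-user counts of contacts met in proximity but never phoned
proximity_only = np.array([12, 15, 9, 14, 11, 13, 10, 16, 12, 14])
mu, sigma = stats.norm.fit(proximity_only)          # Gaussian fit
print(f"fit: mean={mu:.1f}, std={sigma:.1f}")

# proximity counts for mutual vs. asymmetric friend pairs (hypothetical)
mutual = np.array([40, 55, 38, 61, 47, 52])
asymmetric = np.array([22, 30, 18, 27, 25, 21])
u, p = stats.mannwhitneyu(mutual, asymmetric, alternative="two-sided")
print(f"Mann-Whitney U={u:.1f}, p={p:.3f}")         # small p -> patterns differ
```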
{"title":"Socio-Technical Network Analysis from Wearable Interactions","authors":"K. Farrahi, R. Emonet, A. Ferscha","doi":"10.1109/ISWC.2012.19","DOIUrl":"https://doi.org/10.1109/ISWC.2012.19","url":null,"abstract":"Wearable sensing platforms like modern smart phones have proven to be effective means in the complexity and computational social sciences. This paper draws from explicit (phone calls, SMS messaging) and implicit (proximity sensing based on Bluetooth radio signals) interaction patterns collected via smart phones and reality mining techniques to explain the dynamics of personal interactions and relationships. We consider three real human to human interaction networks, namely physical proximity, phone communication and instant messaging. We analyze a real undergraduate community's social circles and consider various topologies, such as the interaction patterns of users with the entire community, and the interaction patterns of users within their own community. We fit distributions of various interactions, for example, showing that the distribution of users that have been in physical proximity but have never communicated by phone fits a gaussian. Finally, we consider five types of relationships, for example friendships, to see whether significant differences exist in their interaction patterns. We find statistically significant differences in the physical proximity patterns of people who are mutual friends and people who are non-mutual (or asymmetric) friends, though this difference does not exist between mutual friends and never friends, nor does it exist in their phone communication patterns. Our findings impact a wide range of data-driven applications in socio-technical systems by providing an overview of community interaction patterns which can be used for applications such as epidemiology, or in understanding the diffusion of opinions and relationships.","PeriodicalId":190627,"journal":{"name":"2012 16th International Symposium on Wearable Computers","volume":"80 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129626105","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
O. Baños, Alberto Calatroni, M. Damas, H. Pomares, I. Rojas, Hesam Sagha, J. Millán, G. Tröster, Ricardo Chavarriaga, D. Roggen
We propose a method to automatically translate a preexisting activity recognition system, devised for a source sensor domain S, so that it can operate on a newly discovered target sensor domain T, possibly of a different modality. First, we use MIMO system identification techniques to obtain a function that maps the signals of S to T. This mapping is then used to translate the recognition system across the sensor domains. We demonstrate the approach on a 5-class gesture recognition problem, translating between a vision-based skeleton tracking system (Kinect) and inertial measurement units (IMUs). An adequate mapping can be learned from as little as a single gesture (3 seconds) in this scenario. The accuracy after Kinect → IMU or IMU → Kinect translation is 4% below the baseline for the same limb. Translating across modalities and also to an adjacent limb yields an accuracy 8% below baseline. We discuss the sources of error and means for improvement. The approach is independent of the sensor modalities; it supports multimodal activity recognition and more flexible real-world activity recognition system deployments.
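The simplest instance of such a mapping is a linear MIMO model fitted by least squares; the sketch below illustrates the idea on synthetic data. The paper uses system identification techniques, so treat this only as a conceptual stand-in; the dimensions and sampling-rate assumptions are ours.

```python
# Minimal sketch: learn a linear MIMO mapping W that translates source-domain
# signals (e.g., Kinect joint coordinates) into target-domain signals (e.g.,
# IMU readings) by least squares, then translate new source data.
import numpy as np

rng = np.random.default_rng(0)
n, d_src, d_tgt = 90, 6, 3               # ~3 s of 30 Hz data (assumption)
S = rng.normal(size=(n, d_src))          # source-domain signals
W_true = rng.normal(size=(d_src, d_tgt))
T = S @ W_true + 0.05 * rng.normal(size=(n, d_tgt))   # target-domain signals

W, *_ = np.linalg.lstsq(S, T, rcond=None)   # fit mapping on a short recording

S_new = rng.normal(size=(10, d_src))
T_translated = S_new @ W                 # feed this to the existing recognizer
print(np.abs(T_translated - S_new @ W_true).mean())   # small residual
```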
{"title":"Kinect=IMU? Learning MIMO Signal Mappings to Automatically Translate Activity Recognition Systems across Sensor Modalities","authors":"O. Baños, Alberto Calatroni, M. Damas, H. Pomares, I. Rojas, Hesam Sagha, J. Millán, G. Tröster, Ricardo Chavarriaga, D. Roggen","doi":"10.1109/ISWC.2012.17","DOIUrl":"https://doi.org/10.1109/ISWC.2012.17","url":null,"abstract":"We propose a method to automatically translate a preexisting activity recognition system, devised for a source sensor domain S, so that it can operate on a newly discovered target sensor domain T, possibly of different modality. First, we use MIMO system identification techniques to obtain a function that maps the signals of S to T. This mapping is then used to translate the recognition system across the sensor domains. We demonstrate the approach in a 5-class gesture recognition problem translating between a vision-based skeleton tracking system (Kinect), and inertial measurement units (IMUs). An adequate mapping can be learned in as few as a single gesture (3 seconds) in this scenario. The accuracy after Kinect → IMU or IMU → Kinect translation is 4% below the baseline for the same limb. Translating across modalities and also to an adjacent limb yields an accuracy 8% below baseline. We discuss the sources of errors and means for improvement. The approach is independent of the sensor modalities. It supports multimodal activity recognition and more flexible real-world activity recognition system deployments.","PeriodicalId":190627,"journal":{"name":"2012 16th International Symposium on Wearable Computers","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130689859","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper provides a provocative view of wearable computer research over the years, starting with the first IEEE International Symposium on Wearable Computers in 1997. The goal of this paper is to reflect on the original research challenges from the first few years. With this goal in mind, two questions can be examined: 1) have we achieved the goals we originally set out to achieve? and 2) how has the direction of research changed in the past fifteen years? This is not a survey paper, but a platform to stimulate discussion.
{"title":"Have We Achieved the Ultimate Wearable Computer?","authors":"B. Thomas","doi":"10.1109/ISWC.2012.26","DOIUrl":"https://doi.org/10.1109/ISWC.2012.26","url":null,"abstract":"This paper provides a provocative view of wearable computer research over the years, starting with the first IEEE International Symposium on Wearable Computers in 1997. The goal of this paper is to reflect on the original research challenges from the first few years. With this goal in mind, two questions can be examined: 1) have we achieved the goals we set out? and 2) how has the direction of research changed in the past fifteen years? This is not a survey paper, but a platform to stimulate discussion.","PeriodicalId":190627,"journal":{"name":"2012 16th International Symposium on Wearable Computers","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130945862","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper addresses the lack of a commonly used, standard dataset and established benchmarking problems for physical activity monitoring. A new dataset - recorded from 9 subjects performing 18 activities while wearing 3 IMUs and a heart-rate monitor - is created and made publicly available. Moreover, 4 classification problems are benchmarked on the dataset, using a standard data processing chain and 5 different classifiers. The benchmark shows the difficulty of the classification tasks and exposes new challenges for physical activity monitoring.
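A typical data processing chain of the kind benchmarked here looks roughly like the sketch below: sliding-window segmentation, simple statistical features, and an off-the-shelf classifier. The window length, features, classifier choice, and placeholder data are assumptions; the sketch does not load the actual dataset.

```python
# Sketch of a generic activity-monitoring processing chain on stand-in data:
# windowing, mean/std features per window, and cross-validated classification.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def window_features(signal, labels, win=100, step=50):
    """Mean/std features per window, labeled by the window's majority label."""
    X, y = [], []
    for start in range(0, len(signal) - win, step):
        seg = signal[start:start + win]
        X.append(np.hstack([seg.mean(axis=0), seg.std(axis=0)]))
        y.append(np.bincount(labels[start:start + win]).argmax())
    return np.array(X), np.array(y)

rng = np.random.default_rng(1)
imu = rng.normal(size=(5000, 9))           # stand-in for 3 IMUs x 3 axes each
acts = rng.integers(0, 4, size=5000)       # stand-in activity labels
X, y = window_features(imu, acts)
print(cross_val_score(RandomForestClassifier(), X, y, cv=5).mean())
```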
{"title":"Introducing a New Benchmarked Dataset for Activity Monitoring","authors":"Attila Reiss, D. Stricker","doi":"10.1109/ISWC.2012.13","DOIUrl":"https://doi.org/10.1109/ISWC.2012.13","url":null,"abstract":"This paper addresses the lack of a commonly used, standard dataset and established benchmarking problems for physical activity monitoring. A new dataset - recorded from 18 activities performed by 9 subjects, wearing 3 IMUs and a HR-monitor - is created and made publicly available. Moreover, 4 classification problems are benchmarked on the dataset, using a standard data processing chain and 5 different classifiers. The benchmark shows the difficulty of the classification tasks and exposes new challenges for physical activity monitoring.","PeriodicalId":190627,"journal":{"name":"2012 16th International Symposium on Wearable Computers","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125500049","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
C. Zeagler, Scott M. Gilliland, Halley P. Profita, Thad Starner
Electronic textiles (or e-textiles) attempt to integrate electronics and computing into fabric. In our efforts to create new e-textile interfaces and construction techniques for our Electronic Textile Interface Swatch Book (an e-textile toolkit), we have created a multi-use jog wheel using multilayer embroidery, sound sequins made from PVDF film, and a tilt sensor using a hanging bead, embroidery, and capacitive sensing. To make capacitive sensing over long leads possible on the body, we have constructed a twisted-pair ribbon and demonstrated its effectiveness over more typical sensing techniques. We detail construction techniques and lessons learned from this technology exploration.
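As a purely hypothetical illustration of how a circular jog wheel built from capacitive pads might be read out, the sketch below estimates a touch angle as the circular weighted mean of per-pad readings. The pad count, angles, and readings are assumptions and are not taken from the Swatch Book hardware.

```python
# Hypothetical jog-wheel readout: estimate the finger's angular position from
# per-pad capacitance deltas arranged around a circle (8 pads assumed).
import math

PAD_ANGLES = [i * 2 * math.pi / 8 for i in range(8)]   # 8 pads around the wheel

def touch_angle(readings):
    """Circular weighted mean of per-pad capacitance deltas (None if no touch)."""
    if sum(readings) < 1e-6:
        return None
    x = sum(r * math.cos(a) for r, a in zip(readings, PAD_ANGLES))
    y = sum(r * math.sin(a) for r, a in zip(readings, PAD_ANGLES))
    return math.atan2(y, x) % (2 * math.pi)

# Finger mostly over pad 2 (90 deg), slightly over pad 3 (135 deg):
print(math.degrees(touch_angle([0, 0.1, 1.0, 0.4, 0, 0, 0, 0])))  # ~99 degrees
```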
{"title":"Textile Interfaces: Embroidered Jog-Wheel, Beaded Tilt Sensor, Twisted Pair Ribbon, and Sound Sequins","authors":"C. Zeagler, Scott M. Gilliland, Halley P. Profita, Thad Starner","doi":"10.1109/ISWC.2012.29","DOIUrl":"https://doi.org/10.1109/ISWC.2012.29","url":null,"abstract":"Electronic textiles (or e-textiles) attempt to integrate electronics and computing into fabric. In our efforts to create new e-textile interfaces and construction techniques for our Electronic Textile Interface Swatch Book (an e-textile toolkit), we have created a multi-use jog wheel using multilayer embroidery, sound sequins from PVDF film and a tilt sensor using a hanging bead, embroidery and capacitive sensing. In order to make capacitive sensing over long leads possible on the body, we have constructed twisted pair ribbon and demonstrated its effectiveness over more typical sensing techniques. We detail construction techniques and lessons learned from this technology exploration.","PeriodicalId":190627,"journal":{"name":"2012 16th International Symposium on Wearable Computers","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131207761","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Nimesha Ranasinghe, R. Nakatsu, Hideaki Nii, G. Ponnampalam
Most systems for generating taste sensations are based on blending different chemicals appropriately, and there are few proven approaches to stimulating the sense of taste digitally. In this paper, a method to digitally stimulate the sense of taste is introduced and demonstrated, based on electrical and thermal stimulation of the human tongue. Two digital control systems are presented to control taste sensations and their intensities effectively on the tongue. The effects of the most influential factors, such as current, frequency, and temperature, are taken into account to stimulate the tongue noninvasively. Initial experimental results indicate that sour (strong), bitter (mild), and salty (mild) are the main sensations that can be evoked, and there is evidence of a sweet sensation as well. Based on the results of the Tongue Mounted Digital Taste Interface, we developed another system, named the Digital Sour Lollipop, to control the sour taste digitally. Initial experimental results with this system show that the sour taste can be controlled at up to three levels of intensity using electrical stimulation of the human tongue.
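The sketch below is a purely illustrative control table for the kind of level-based control described for the Digital Sour Lollipop: each sour intensity level selects a set of stimulation parameters. Every numeric value is a hypothetical placeholder; the paper's actual currents and frequencies are not reproduced here.

```python
# Illustrative only: map three sour intensity levels to (hypothetical)
# electrode drive settings. None of these values come from the paper.
SOUR_LEVELS = {
    1: {"current_uA": 40,  "frequency_Hz": 50},
    2: {"current_uA": 80,  "frequency_Hz": 100},
    3: {"current_uA": 120, "frequency_Hz": 200},
}

def stimulation_params(level):
    """Return the placeholder drive settings for a sour level clamped to 1-3."""
    return SOUR_LEVELS[max(1, min(3, level))]

print(stimulation_params(2))
```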
{"title":"Tongue Mounted Interface for Digitally Actuating the Sense of Taste","authors":"Nimesha Ranasinghe, R. Nakatsu, Hideaki Nii, G. Ponnampalam","doi":"10.1109/ISWC.2012.16","DOIUrl":"https://doi.org/10.1109/ISWC.2012.16","url":null,"abstract":"Most of the systems for generating taste sensations are based on blending different chemicals appropriately, and there are less proven approaches to stimulate the sense of taste digitally. In this paper, a method to digitally stimulate the sense of taste is introduced and demonstrated based on electrical and thermal stimulation on human tongue. Thus, two digital control systems are presented to control taste sensations and their intensities effectively on the tongue. The effects of most persuading factors such as current, frequency, and temperature have been accounted to noninvasively stimulate the tongue. The initial experimental results indicate that sour (strong), bitter (mild), and salty(mild) are the main sensations, which can be evoked while there are evidences of sweet sensation too. Based on the results of the Tongue Mounted Digital Taste Interface, we have then developed another system which named as the Digital Sour Lollipop to effectively control the sour taste digitally. Initial experimental results of this system show the controllability of sour taste up to three levels of intensities using the electrical stimulation on human tongue.","PeriodicalId":190627,"journal":{"name":"2012 16th International Symposium on Wearable Computers","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114538231","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We present an input method which enables complex hands-free interaction through 3d handwriting recognition. Users can write text in the air as if they were using an imaginary blackboard. Motion sensing is done wirelessly by accelerometers and gyroscopes attached to the back of the hand. We propose a two-stage approach for spotting and recognition of handwriting gestures. The spotting stage uses a Support Vector Machine to identify data segments which contain handwriting. The recognition stage uses Hidden Markov Models (HMMs) to generate the text representation from the motion sensor data. Individual characters are modeled by HMMs and concatenated to word models. Our system can continuously recognize arbitrary sentences, based on a freely definable vocabulary with over 8000 words. A statistical language model is used to enhance recognition performance and restrict the search space. We report the results of a nine-user experiment on sentence recognition for person-dependent and person-independent setups on 3d-space handwriting data. A word error rate of 11% is achieved for the person-independent setup and 3% for the person-dependent setup. We evaluate the spotting algorithm in a second experiment on a realistic dataset including everyday activities and achieve a sample-based recall of 99% and a precision of 25%. We show that additional filtering in the recognition stage can detect up to 99% of the false-positive segments.
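The following sketch illustrates only the spotting stage: an SVM classifies fixed-length windows of hand-worn motion features as handwriting or not. The features and training data are synthetic stand-ins, and the HMM recognition stage that decodes spotted segments into text is not shown.

```python
# Spotting-stage sketch: binary SVM over windowed motion features
# (handwriting vs. other activity). Data and features are synthetic.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)
# windowed features, e.g., per-window mean and variance of 6 motion channels
X_writing = rng.normal(loc=1.0, size=(200, 12))
X_other = rng.normal(loc=0.0, size=(200, 12))
X = np.vstack([X_writing, X_other])
y = np.array([1] * 200 + [0] * 200)

spotter = SVC(kernel="rbf").fit(X, y)
window = rng.normal(loc=1.0, size=(1, 12))
print("handwriting segment" if spotter.predict(window)[0] else "other activity")
```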
{"title":"Airwriting: Hands-Free Mobile Text Input by Spotting and Continuous Recognition of 3d-Space Handwriting with Inertial Sensors","authors":"C. Amma, Marcus Georgi, Tanja Schultz","doi":"10.1109/ISWC.2012.21","DOIUrl":"https://doi.org/10.1109/ISWC.2012.21","url":null,"abstract":"We present an input method which enables complex hands-free interaction through 3d handwriting recognition. Users can write text in the air as if they were using an imaginary blackboard. Motion sensing is done wirelessly by accelerometers and gyroscopes which are attached to the back of the hand. We propose a two-stage approach for spotting and recognition of handwriting gestures. The spotting stage uses a Support Vector Machine to identify data segments which contain handwriting. The recognition stage uses Hidden Markov Models (HMM) to generate the text representation from the motion sensor data. Individual characters are modeled by HMMs and concatenated to word models. Our system can continuously recognize arbitrary sentences, based on a freely definable vocabulary with over 8000 words. A statistical language model is used to enhance recognition performance and restrict the search space. We report the results from a nine-user experiment on sentence recognition for person dependent and person independent setups on 3d-space handwriting data. For the person independent setup, a word error rate of 11% is achieved, for the person dependent setup 3% are achieved. We evaluate the spotting algorithm in a second experiment on a realistic dataset including everyday activities and achieve a sample based recall of 99% and a precision of 25%. We show that additional filtering in the recognition stage can detect up to 99% of the false positive segments.","PeriodicalId":190627,"journal":{"name":"2012 16th International Symposium on Wearable Computers","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122742890","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We propose a method for information extraction and presentation using recorded eye gaze data from life-log video. We call our method GazeCloud; it uses gaze information to generate thumbnail images. Personal life-logs, one application of wearable computing, are becoming increasingly practical. However, an aspect that needs to be addressed is information retrieval through different browsing methods. It is also well known that human memory recall is aided by effective presentation of information. Our proposed method, GazeCloud, calculates the importance of information from gaze data, which is then used to generate thumbnail images. The calculation is based on eye gaze duration and hot-spot information. Additionally, we construct a prototype wearable eye tracker system for daily use.
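A minimal sketch of the importance-weighting idea, assuming importance is proportional to gaze dwell time per scene: longer-fixated scenes get larger thumbnails. The dwell times and pixel range are illustrative, not values from the paper.

```python
# Scale each scene's thumbnail by its share of gaze dwell time.
def thumbnail_sizes(dwell_seconds, min_px=64, max_px=256):
    """Map per-scene gaze dwell times to thumbnail edge lengths in pixels."""
    longest = max(dwell_seconds)
    return [round(min_px + (d / longest) * (max_px - min_px)) for d in dwell_seconds]

# Four scenes from a life-log video with different total gaze durations:
print(thumbnail_sizes([2.0, 8.5, 0.5, 4.0]))   # [109, 256, 75, 154]
```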
{"title":"GazeCloud: A Thumbnail Extraction Method Using Gaze Log Data for Video Life-Log","authors":"Yoshio Ishiguro, J. Rekimoto","doi":"10.1109/ISWC.2012.32","DOIUrl":"https://doi.org/10.1109/ISWC.2012.32","url":null,"abstract":"We propose a method for information extraction and presentation using recorded eye gaze data, i.e., life-log video data. We call our method Gaze Cloud, which essentially uses gaze information for the generation of thumbnail images. One of the usages of wearable computing, personal life-logs are becoming increasingly possible. However, an aspect that needs to be addressed is information retrieval through different browsing methods. It is also well known that human memory recall is aided by effective presentation of information. Our propose method Gaze Cloud calculates the importance of information from gaze data that is consequently used for the generation of thumbnail images. This method performs the calculation using the eye gaze duration and hot spot information. Additionally, we construct a prototype daily-use wearable eye tracker system.","PeriodicalId":190627,"journal":{"name":"2012 16th International Symposium on Wearable Computers","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129987417","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper we describe a field study conducted with a wearable vibration belt, in which we determine vibration intensity sensitivity ranges for a large, diverse group of participants with evenly distributed ages and genders, ranging from 7 to 79 years. We test for alterations in sensitivity in the field by introducing escalating levels of distraction in increasingly busy environments. The findings on sensitivity detection range differ from previous lab studies in that we found a decreased detection rate in busy environments. Here we test with a much larger sample and age range, and contribute the first vibration sensitivity testing conducted outside the lab in an urban public environment.
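For readers who want to run this kind of comparison on their own data, one simple analysis is a chi-square test of detection counts across environments, as sketched below with invented numbers; this is not the study's actual analysis.

```python
# Compare vibrotactile detection rates across environments (invented counts).
import numpy as np
from scipy.stats import chi2_contingency

#                 detected  missed
quiet_street   = [   92,      8 ]
busy_square    = [   78,     22 ]
crowded_market = [   61,     39 ]
table = np.array([quiet_street, busy_square, crowded_market])

chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2={chi2:.1f}, dof={dof}, p={p:.4f}")  # small p -> detection depends on environment
```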
{"title":"Urban Vibrations: Sensitivities in the Field with a Broad Demographic","authors":"A. Morrison, Lars Knudsen, H. J. Andersen","doi":"10.1109/ISWC.2012.10","DOIUrl":"https://doi.org/10.1109/ISWC.2012.10","url":null,"abstract":"In this paper we describe a field study conducted with a wearable vibration belt where we test to determine the vibration intensity sensitivity ranges on a large diverse group of participants with evenly distributed ages and gender, ranging from seven to 79 years. We test for alterations in sensitivity in the field by introducing an escalating level of distraction in increasingly busy environments. The findings on sensitivity detection range differ from previous lab studies in that we found a decreased detection rate in busy environments. Here we test with a much larger sample and age range, and contribute with the first vibration sensitivity testing outside the lab in an urban public environment.","PeriodicalId":190627,"journal":{"name":"2012 16th International Symposium on Wearable Computers","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128741378","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}