Pub Date: 2008-11-21 | DOI: 10.1109/HAVE.2008.4685291
Title: Using a haptic belt to convey non-verbal communication cues during social interactions to individuals who are blind
Authors: T. McDaniel, S. Krishna, V. Balasubramanian, D. Colbry, S. Panchanathan
Abstract: Good social skills are important for a healthy, successful life; however, individuals with visual impairments are at a disadvantage when interacting with sighted peers because non-verbal cues are inaccessible to them. This paper presents a haptic (vibrotactile) belt that assists individuals who are blind or visually impaired by communicating non-verbal cues during social interactions. We focus on non-verbal communication pertaining to the relative location of communicators with respect to the user, in terms of direction and distance. Results from two experiments show that the haptic belt effectively uses vibration location and duration to communicate, respectively, the relative direction and distance of an individual in the user's visual field.
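The encoding described in this abstract (vibration location for direction, vibration duration for distance) can be illustrated with a small Python sketch. The tactor count and the distance-to-duration bands below are hypothetical choices for illustration, not the parameters reported in the paper.

```python
def belt_cue(bearing_deg, distance_m, n_tactors=8):
    """Map a communicator's relative bearing to a tactor index and their
    distance to a vibration duration. All parameters are illustrative."""
    # Quantize the bearing (0..360 deg, 0 = straight ahead) to the nearest
    # of n_tactors evenly spaced vibration motors around the waist.
    sector = 360.0 / n_tactors
    tactor = int((bearing_deg % 360.0 + sector / 2) // sector) % n_tactors
    # Encode distance as pulse duration: longer pulses for nearer people
    # (hypothetical bands; the paper's actual mapping may differ).
    if distance_m < 1.0:
        duration_ms = 1000
    elif distance_m < 2.5:
        duration_ms = 500
    else:
        duration_ms = 250
    return tactor, duration_ms
```

A caller would then drive tactor `tactor` for `duration_ms` milliseconds whenever the vision subsystem reports a person at that bearing and range.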
Pub Date: 2008-11-21 | DOI: 10.1109/HAVE.2008.4685293
Title: Making eBooks accessible to blind Braille readers
Authors: R. Velázquez, E. Preza, H. Hernández
Abstract: This paper presents the design and a preliminary prototype of the TactoBook, a novel assistive reading device for the blind that reproduces electronic books (eBooks) on portable electronic tactile displays. The TactoBook consists of a computer-based system that quickly and automatically translates any eBook into Braille. The Braille version of the eBook is stored on a USB memory drive, which is later inserted into and reproduced on a compact, lightweight, highly portable tactile display. Braille readers can thus access published information immediately and store multiple eBooks on the same device without carrying burdensome tactile print versions.
Pub Date: 2008-11-21 | DOI: 10.1109/HAVE.2008.4685303
Title: Tele-immersive systems
Authors: C. Leong, Y. Xing, N. Georganas
Abstract: This paper describes how we established an experimental framework for a tele-immersive system based on the polyhedral visual hull algorithm. The system is divided into two sites: local and remote. At the local site, images of a participant are captured by a set of synchronized cameras, and a polyhedral visual hull algorithm reconstructs a 3D mesh. The texture information, together with the mesh model, is then sent to the remote site for rendering on immersive walls. We describe each of the system components: segmentation, post-process filtering, and visual hull reconstruction at the local site; view-dependent texture mapping at the remote site. We present preliminary results and observations from our initial investigations, and propose future work that would add further functionality to the system.
Pub Date: 2008-11-21 | DOI: 10.1109/HAVE.2008.4685314
Title: A pilot study on simulating continuous sensation with two vibrating motors
Authors: Jongeun Cha, L. Rahal, Abdulmotaleb El Saddik
Abstract: This paper presents a pilot study on producing a continuous touch sensation on human skin with a low-resolution array of vibrotactile actuators, using the funneling illusion. The funneling illusion occurs when two stimuli are presented simultaneously at adjacent locations on the skin: they are funneled into a single sensation between the two stimulators rather than felt separately. This sensation is affected by the separation of the stimuli, their relative amplitudes, and their temporal order. Here, the continuous touch sensation is simulated by continuously changing the perceived intensities of two adjacent vibrating motors on the skin. First, we obtain the relationship between the control intensity, which determines the pulse-width modulation (PWM) duration used to actuate the vibrating motors, and the perceived intensity of the sensation. Then, as a feasibility check, the continuous sensation is examined under two control conditions: the distance between the two stimuli and the velocity of the simulated moving sensation. The results show that a continuously moving sensation can be presented by changing the perceived intensities of the two vibrating motors in opposite directions, at a distance of around 60 mm and a velocity of around 60 mm/s.
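The core idea in this abstract, trading perceived intensity between two adjacent motors so a phantom sensation sweeps between them, can be sketched as a linear pan between two PWM levels. This sketch assumes an identity mapping from control level to perceived intensity; the paper measures that relationship empirically, so a real implementation would apply the fitted curve.

```python
def funneling_pwm(x, d=60.0, i_max=255):
    """Split drive intensity between two motors spaced d mm apart so that
    a phantom sensation appears at position x mm from motor A.
    Linear pan with an assumed identity control-to-perception mapping."""
    x = max(0.0, min(d, x))   # clamp to the segment between the motors
    w = x / d                 # 0.0 at motor A, 1.0 at motor B
    return round((1 - w) * i_max), round(w * i_max)
```

Sweeping `x` from 0 to `d` over one second would then approximate the paper's 60 mm/s moving-sensation condition.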
Pub Date: 2008-11-21 | DOI: 10.1109/HAVE.2008.4685316
Title: Evaluating geometrical properties of virtual shapes using interactive sonification
Authors: Miguel Bruns Alonso, Simon Shelley, D. Hermes, A. Kohlrausch
Abstract: This paper presents some of our research on the use of sound in a multimodal interface. The aim of the interface is to support product design by allowing the designer to physically interact with a virtual object. The system's requirements include interactive sonification of geometrical data relating to the virtual object. In this paper we present three alternative sonification approaches designed to satisfy this requirement. We also outline a user evaluation strategy aimed at measuring the performance and added value of the different sonification approaches.
Pub Date: 2008-11-21 | DOI: 10.1109/HAVE.2008.4685306
Title: HugMe: An interpersonal haptic communication system
Authors: Jongeun Cha, Mohamad Eid, L. Rahal, A. E. Saddik
Abstract: Traditional teleconferencing multimedia systems have been limited to audio and video information. However, human touch, in the form of a handshake, an encouraging pat, a comforting hug, and other physical contact, is fundamental to physical and emotional development between persons. This paper presents the motivation and design of a synchronous haptic teleconferencing system with touch interaction to convey affection and nurture. We present a preliminary prototype of an interpersonal haptic communication system called HugMe. Potential applications for HugMe include physical and/or emotional therapy, understaffed hospitals, remote child care, and communication between distant lovers. This paper is submitted for demonstration.
Pub Date: 2008-11-21 | DOI: 10.1109/HAVE.2008.4685317
Title: Kernel regression in HRBF networks for surface reconstruction
Authors: F. Bellocchio, N. A. Borghese, S. Ferrari, Vincenzo Piuri
Abstract: The Hierarchical Radial Basis Function (HRBF) network is a neural model that has proved its suitability for the surface reconstruction problem. Its non-iterative configuration algorithm requires an estimate of the surface at the centers of the network's units. In this paper, we analyze the effect of different estimators in training HRBF networks, in terms of accuracy, required units, and computational time.
Pub Date: 2008-11-21 | DOI: 10.1109/HAVE.2008.4685315
Title: Sound and tangible interface for shape evaluation and modification
Authors: M. Bordegoni, F. Ferrise, S. Shelley, M. Alonso, D. Hermes
Abstract: One recent research topic in design and virtual prototyping is giving designers tools for creating and modifying shapes in a natural and interactive way. Multimodal interaction is part of this research: it conveys information to users through different sensory channels. Using modalities beyond touch and vision augments the sense of presence in the virtual environment and can be useful for presenting the same information in various ways. In addition, multimodal interaction can sometimes augment the user's perception by conveying information that is not generally perceived in the real world but that can be emulated by the virtual environment. The paper presents a prototype system that allows designers to evaluate the quality of a shape with the aid of touch, vision, and sound. Sound is used to communicate geometrical data relating to the virtual object that are practically undetectable through touch and vision. The paper also presents the preliminary work carried out on this prototype and the results of the first tests demonstrating its feasibility. The problems related to developing this kind of application and to realizing the prototype itself are highlighted, with particular focus on the potential and the problems of multimodal interaction through the auditory channel.
Pub Date: 2008-11-21 | DOI: 10.1109/HAVE.2008.4685290
Title: Virtual reality based training to resolve visio-motor conflicts in surgical environments
Authors: M. Vankipuram, K. Kahol, A. Ashby, J. Hamilton, J. Ferrara, M. Smith
Abstract: An issue that complicates movement training, specifically in minimally invasive surgery, is that there is often no one-to-one correspondence between the visual feedback provided on a screen and the movement required to perform a given task. This paper presents a simulator that specifically addresses this intermodal conflict between motor actuation and visual feedback. We developed a virtual-reality visio-haptic simulator to help surgical residents train to resolve visio-motor conflict. The simulator lets individuals train in various scenarios with different levels of visio-motor conflict; the levels were simulated by creating a linear functional relation between movement in the real environment and movement in the virtual environment, with haptic rendering kept consistent with the visual feedback. Experiments were conducted with expert pediatric surgeons and general surgery residents. Baseline performance data under conditions of visio-motor conflict were collected from the expert surgeons. Residents were divided into an experimental group that was exposed to visio-motor conflict training and a control group that was not. When performance was compared on a standard surgical suturing task, the residents with intermodal conflict training performed better than the control group, suggesting the construct validity of the training and that visio-motor training can accelerate learning.
Pub Date: 2008-11-21 | DOI: 10.1109/HAVE.2008.4685313
Title: Linear velocity and acceleration estimation of 3 DOF haptic interfaces
Authors: Jilin Zhou, Xiaojun Shen, E. Petriu, N. Georganas
Abstract: The velocity and acceleration of a haptic interface's end effector are required for many aspects of haptic rendering, such as software damping, friction force rendering, and position control. However, due to limited sensor resolution, the non-linearity of forward kinematics, the high maneuverability of the human arm/hand, and the high sampling-rate requirement, obtaining precise and robust velocity and acceleration estimates is very challenging. In this paper, an adaptive 4-state Kalman filter for estimating the velocity and acceleration of the end effector is proposed, based on the observations that the human arm/hand trajectory has at least five non-zero derivatives and that skilled movements follow constrained minimum-jerk trajectory planning. Preliminary simulation results show the effectiveness of the proposed method.
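A plain (non-adaptive) version of the 4-state idea in this abstract can be sketched as a constant-jerk Kalman filter whose state holds position, velocity, acceleration, and jerk per axis, updated from position measurements alone. The noise covariances below are illustrative placeholders, and the adaptation and minimum-jerk constraint of the paper's filter are omitted.

```python
import numpy as np

def make_cj_filter(dt):
    """Build matrices for a constant-jerk Kalman filter.
    State x = [position, velocity, acceleration, jerk] for one axis."""
    F = np.array([[1.0, dt, dt**2 / 2, dt**3 / 6],   # state transition
                  [0.0, 1.0, dt,       dt**2 / 2],
                  [0.0, 0.0, 1.0,      dt],
                  [0.0, 0.0, 0.0,      1.0]])
    H = np.array([[1.0, 0.0, 0.0, 0.0]])             # only position measured
    Q = 1e-3 * np.eye(4)                             # process noise (assumed)
    R = np.array([[1e-4]])                           # sensor noise (assumed)
    return F, H, Q, R

def kf_step(x, P, z, F, H, Q, R):
    """One predict/update cycle given position measurement z."""
    # Predict.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (np.atleast_1d(z) - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P
```

On a noiseless constant-velocity trajectory, the velocity component of the state converges toward the true speed without the explicit differentiation that raw encoder data would require.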