Building haptic interfaces for human-in-the-loop applications is a profound scientific and technological challenge. It requires developing methods to intuitively channel sensorimotor information between the afferent and efferent neural pathways of a human user and the inputs and outputs of an external system. In such applications, artificial touch may serve as a virtual extension of the human body to a remote location (e.g., teleoperation), or it can create the perception that an external system is part of the body (e.g., prosthetics).
Lucia Seminara;Strahinja Dosen;Giovanni Berselli;Gerald E. Loeb;Salvatore Pirozzi;Roberta Klatzky;Silvano Zipoli Caiani;Mengjia Zhu, "Editorial: Special Issue: Towards a Transdisciplinary Approach to the Development and Control of Haptic Devices for Human-in-the-Loop Applications," IEEE Transactions on Haptics, vol. 18, no. 1, pp. 3–5. Pub Date: 2025-03-21 | DOI: 10.1109/TOH.2025.3546751 | Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10937296
Pub Date: 2025-03-19 | DOI: 10.1109/TOH.2025.3552992
Thomas Pietrzak;Rahul Kumar Ray
Tactile animation illusions are used to display dynamic information with haptic cues. In this study, we investigate two forms of tactile animation illusion that leverage the Funneling effect and Apparent Haptic Motion (AHM) on a one-dimensional circular tactile display. We define new parameters that describe both the temporal and spatial aspects of these animations: Angle per Actuator (APA) and Revolution Duration (RD). We present three user studies on the perception of angular animations produced with these effects. Our results show that people can interpret AHM animations regardless of the APA value, and that they can interpret tactile animation illusions at speeds slower than one degree per second. We also show that participants' ability to discriminate angular animations improves proportionally with the angle presented.
"Comparing Apparent Haptic Motion and Funneling for the Perception of Tactile Animation Illusions On a Circular Tactile Display," IEEE Transactions on Haptics, vol. 18, no. 2, pp. 398–407.
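For readers prototyping with these parameters: the APA and RD values jointly fix the pulse timing of an apparent-motion sweep. A minimal sketch (function and variable names are hypothetical, not from the paper), assuming evenly spaced actuators on the ring and an APA that divides 360:

```python
def ahm_schedule(n_actuators, apa_deg, rd_s):
    """Onset time, angle, and actuator index for each pulse in one
    revolution of an apparent-haptic-motion animation.

    apa_deg: Angle per Actuator -- degrees swept between successive pulses.
    rd_s:    Revolution Duration -- seconds for a full 360-degree sweep.
    """
    n_steps = int(360 / apa_deg)       # pulses per revolution (APA divides 360)
    soa = rd_s / n_steps               # stimulus onset asynchrony (s)
    spacing = 360 / n_actuators        # angular gap between physical actuators
    return [(step * soa,
             (step * apa_deg) % 360,
             int(((step * apa_deg) % 360) // spacing) % n_actuators)
            for step in range(n_steps)]

# Example: a 2-second revolution swept in 45-degree increments on 8 actuators
schedule = ahm_schedule(n_actuators=8, apa_deg=45, rd_s=2.0)
```

Note that the angular speed of the sweep is simply 360/RD, so the two parameters cleanly separate spatial granularity (APA) from speed (RD).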
Pub Date: 2025-03-16 | DOI: 10.1109/TOH.2025.3570810
Øystein Bjelland;Bismi Rasheed;Intissar Cherif;Andreas Fagerhaug Dalen;Amine Chellali;Martin Steinert;Robin T. Bye
This paper presents a novel method for simplifying kinesthetic haptic rendering of complex contact interactions in arthroscopic surgery training simulators using reality-based force profiles. We demonstrate continuous kinesthetic feedback for arthroscopic knee portal creation and diagnostic meniscus examination. This involves measuring characteristic force profiles in ex vivo experiments, implementing the simulator in SOFA, and performing user validation experiments. When comparing the method with linear-elastic haptic feedback for meniscus stiffness discrimination, novices had difference thresholds of 1.80 MPa (linear-elastic) and 1.47 MPa (reality-based), while experts showed thresholds of 0.99 MPa and 1.39 MPa, respectively, indicating finer sensitivity among experts. Experts also used significantly less force (p < 0.05) and had shorter decision times (p < 0.05) than novices across both methods, indicating construct validity. Although kinesthetic feedback was verified with ex vivo experiments for portal creation, user validation here was inconclusive due to minor inconsistencies in the integration of visual and haptic feedback. Limitations include triggering material removal via instrument penetration instead of haptic force limits, as well as omitting contact vibrations. The method incurs only a minor reduction in computation speed. Examples are available on GitHub.
"Haptic Rendering Using Reality-Based Force Profiles in Surgical Simulation," IEEE Transactions on Haptics, vol. 18, no. 3, pp. 569–581.
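At its core, reality-based rendering replaces an analytic spring law (F = k·x) with a lookup into measured data: at each haptic tick, the rendered force comes from interpolating a recorded force-displacement profile. A minimal sketch (the profile values below are invented for illustration; the paper's SOFA implementation covers full contact interactions):

```python
import numpy as np

def render_force(depth, profile_depths, profile_forces):
    """Reality-based rendering: look up the force for the current
    penetration depth by linear interpolation between measured samples,
    instead of computing a linear-elastic F = k * depth."""
    return float(np.interp(depth, profile_depths, profile_forces))

# Hypothetical ex vivo profile: a compliant "toe" region followed by
# the steep stiffening typical of soft tissue.
depths = np.array([0.0, 1.0, 2.0, 3.0, 4.0])   # penetration depth (mm)
forces = np.array([0.0, 0.2, 0.6, 1.5, 3.5])   # measured force (N)

f = render_force(2.5, depths, forces)  # interpolates between 0.6 N and 1.5 N
```

The nonlinearity lives entirely in the data table, which is what keeps the per-tick rendering cost essentially flat compared with a linear-elastic model.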
Pub Date: 2025-03-16 | DOI: 10.1109/TOH.2025.3570795
Christophe van der Walt;Sara Falcone;Jan van Erp;Stefano Stramigioli;Douwe Dresscher
Model-Mediated Teleoperation (MMT) is a method of teleoperation in which a model of the environment is displayed to the operator to provide delay-free feedback. The choice of model is important to the performance of the system. A more descriptive model gives the operator more accurate feedback, but it can cause problems for the estimator and the renderer required to make MMT function. However, if certain environmental dynamics are not used by the operator to effectively manipulate the environment, they could be excluded from the feedback, thus mitigating the problems caused for the estimator and renderer. This work investigates whether mass and friction modelling influence an operator's effectiveness at accomplishing teleoperated tasks, as measured by a subjective Sense of Embodiment, environmental interaction force, and task completion time. It was found that mass had a significant influence (p < 0.001) on task completion time and interaction force, whereas static and dynamic friction only influenced completion time when mass feedback was absent (p < 0.001). In all cases, the presence of a dynamic effect increased interaction force and task completion time.
"The Influence of Mass and Friction in Teleoperated Tasks," IEEE Transactions on Haptics, vol. 18, no. 3, pp. 783–788.
Pub Date: 2025-03-13 | DOI: 10.1109/TOH.2025.3569724
Xiaohan Zhao;Mengwei Pang;Ping Ji;Aimin Hao;Dangxiao Wang
Tooth extraction simulation with force feedback can provide a valuable training tool for dental students, familiarizing them with the detailed motion and force patterns involved in this procedure. This simulation faces two major challenges: replicating the forceps' 7-DoF motion and accurately simulating the distinct phases of tooth extraction. This paper presents a comprehensive haptic simulation framework for simulating tooth extraction with force feedback, combining both hardware and software solutions. A pivotal feature of this system is the 7-DoF haptic rendering algorithm capable of simulating the 7-DoF motion of forceps. Additionally, a haptic handle resembling the extraction forceps and offering robust connectivity is developed. Furthermore, a multi-phase tooth extraction framework is proposed to simulate the entire tooth extraction process. This framework incorporates physical models to emulate the haptic characteristics of different extraction phases and includes predefined entry criteria for each phase to achieve accurate identification and seamless transitions. The system's effectiveness is validated through objective and subjective experiments, confirming its ability to faithfully replicate the unique haptic features of each extraction phase. Feedback from dental novices and experts indicates that this system could make a significant contribution to tooth extraction training, providing distinct advantages over traditional oral model practices.
"Haptic Rendering for Multi-Phase Tooth Extraction Process," IEEE Transactions on Haptics, vol. 18, no. 3, pp. 556–568.
Pub Date: 2025-03-10 | DOI: 10.1109/TOH.2025.3549677
Lynette A. Jones;Hsin-Ni Ho
This review focuses on the interactions between the cutaneous senses, in particular touch and temperature, as these are the most relevant for developing skin-based display technologies for use in virtual reality (VR) and for designing multimodal haptic devices. A broad spectrum of research is reviewed, ranging from studies that have examined the mechanisms involved in thermal intensification and tactile masking, to more applied work that has focused on implementing thermal-tactile illusions such as thermal referral and illusory wetness in VR environments. Research on these tactile-thermal illusions has identified the differences between the senses of cold and warmth in terms of their effects on the perception of object properties and the prevalence of the perceptual experiences elicited. These studies have also underscored the fundamental spatial and temporal differences between the tactile and thermal senses. The wide-ranging body of research on compound sensations such as wetness and stickiness has highlighted the mechanisms involved in sensing moisture and provided a framework for measuring these sensations in a variety of contexts. Although the interactions between the two senses are complex, it is clear that the addition of thermal inputs to a tactile display both enhances the user experience and enables novel sensory experiences.
"Tactile–Thermal Interactions: Cooperation and Competition," IEEE Transactions on Haptics, vol. 18, no. 3, pp. 456–469.
High-density (HD) haptic interfaces have become increasingly common in entertainment thanks to advancements in virtual reality technology; however, their flexibility may also make them a useful sensory substitution interface for motor rehabilitation. Yet little research has explored how users interpret different haptic feedback encoding methods. This study's objective was therefore to evaluate the effectiveness of various encoding methods for conveying information based on existing sensory substitution strategies, using two tasks: line motion tracking and direction tracking. The first encoding method was Perceived Position Encoding (PPE), where information was encoded into the perceived position of stimulation. The second was Perceived Intensity Encoding (PIE), where information was encoded into the perceived amplitude of the stimuli. Twenty-one participants performed tracking tasks using both the PIE and PPE methods. The results showed similar performance in line motion tracking between the PIE and PPE methods, although the extra motors used in the PPE method appear to introduce uncertainty in users. Nevertheless, users were significantly more accurate in direction tracking when using PPE. These findings highlight the need for task-specific encoding methods and showcase the versatility of the HD haptic vest as a tool for augmented feedback in motor rehabilitation.
Brendan Driscoll;Nita Prabhu;I-Chieh Lee;Ming Liu;He Huang, "Evaluation on Human Perception of Various Vibrotactile Encoding Methods Through a High Density Haptic Feedback Interface," IEEE Transactions on Haptics, vol. 18, no. 3, pp. 531–541. Pub Date: 2025-03-09 | DOI: 10.1109/TOH.2025.3568705
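As background, PPE-style encodings typically exploit the funneling illusion: when two adjacent vibrators fire with complementary amplitudes, the stimulus is felt at an intermediate point between them. A minimal sketch of such a mapping for a 1D motor array (a linear amplitude ratio is one simple model; the study's actual encoding may differ):

```python
def ppe_amplitudes(target, motors, max_amp=1.0):
    """Drive the two motors flanking `target` with complementary
    amplitudes so the vibration is perceived at `target` (funneling).

    target: desired perceived position
    motors: sorted 1D positions of the physical motors
    """
    amps = [0.0] * len(motors)
    for i in range(len(motors) - 1):
        left, right = motors[i], motors[i + 1]
        if left <= target <= right:
            t = (target - left) / (right - left)   # 0 at left motor, 1 at right
            amps[i] = (1.0 - t) * max_amp
            amps[i + 1] = t * max_amp
            break
    return amps
```

With this scheme, positions between physical motors are reachable, which is what gives PPE finer spatial resolution than the motor pitch alone.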
Pub Date: 2025-03-09 | DOI: 10.1109/TOH.2025.3568804
Chungman Lim;Gyeongdeok Kim;Su-Yeon Kang;Hasti Seifi;Gunhyuk Park
Vibrotactile signals offer new possibilities for conveying sensations and emotions in various applications. Yet designing vibrotactile icons (i.e., Tactons) to evoke specific feelings often requires a trial-and-error process and user studies. To support haptic design, we propose a framework for predicting roughness and emotional ratings from vibration signals. We created 154 Tactons and conducted a study to collect acceleration data from smartphones along with roughness, valence, and arousal user ratings (n = 36). We converted the Tacton signals into two-channel spectrograms reflecting the spectral sensitivities of mechanoreceptors, then input them into VibNet, our dual-stream neural network. The first stream captures sequential features using recurrent networks, while the second captures temporal-spectral features using 2D convolutional networks. VibNet outperformed baseline models, with 82% of its predictions falling within the standard deviations of ground-truth user ratings for two new Tacton sets. We discuss the efficacy of our mechanoreceptive processing and dual-stream neural network and present future research directions.
"Can a Machine Feel Vibrations?: Predicting Roughness and Emotional Responses to Vibration Tactons via a Neural Network," IEEE Transactions on Haptics, vol. 18, no. 3, pp. 542–555.
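The preprocessing step described above, one spectrogram channel per mechanoreceptor class, can be sketched as follows. The Gaussian sensitivity curves (RA peaking near 40 Hz, PC near 250 Hz) are illustrative stand-ins for the general shape of mechanoreceptor tuning, not the paper's measured weightings:

```python
import numpy as np

def mechanoreceptive_spectrogram(signal, fs, n_fft=256, hop=128):
    """Turn an acceleration signal into a two-channel spectrogram,
    weighting each frequency bin by an illustrative RA (~40 Hz) and
    PC (~250 Hz) sensitivity curve. Returns (2, frames, bins)."""
    n_frames = 1 + (len(signal) - n_fft) // hop
    window = np.hanning(n_fft)
    frames = np.stack([signal[i * hop : i * hop + n_fft] * window
                       for i in range(n_frames)])
    spec = np.abs(np.fft.rfft(frames, axis=1))       # magnitude, (frames, bins)
    freqs = np.fft.rfftfreq(n_fft, 1.0 / fs)
    ra = np.exp(-((freqs - 40.0) / 30.0) ** 2)       # illustrative RA tuning
    pc = np.exp(-((freqs - 250.0) / 100.0) ** 2)     # illustrative PC tuning
    return np.stack([spec * ra, spec * pc])
```

A 250 Hz vibration then lands almost entirely in the PC channel, which is the kind of physiologically motivated input separation the two network streams can exploit.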
Pub Date: 2025-03-07 | DOI: 10.1109/TOH.2025.3549036
Bryan A. MacGavin;Jennifer L. Tennison;Terra Edwards;Jenna L. Gorlewicz
Despite the richness of the human tactile capacity, remote communication practices often lack touch-based interactions. This leads to overtaxing our visual and auditory channels, a lack of connection and engagement, and inaccessibility for diverse sensory groups. In this paper, we learn from the haptic intuitions of the blind and low vision (BLV) and Protactile DeafBlind (PT-DB) communities to investigate how core functions of communication can be routed through tactile channels. We investigate this re-routing by designing the Conversational Haptic Technology (CHAT) system, a wearable haptic system to explore the feasibility of language recreation through core functions of communication and emotional expression via touch. We contribute the design evolution of an input (sensing) pad and an output (actuation) pad, which enable a bidirectional, wireless system to support remote, touch-based communication. These systems were iteratively evaluated through a series of user studies with sighted-hearing (N=20), BLV (N=4), and PT-DB (N=7) participants to uncover touch profiles for relaying specific communication functions and emotional responses. Results indicate trends and similarities in the touch-based cues organically employed across the diverse groups and provide an initial framework for demonstrating the feasibility of communicating core functions through touch in a wearable form factor.
"The CHAT System: A Wearable Haptic System for Facilitating Tactile Communication," IEEE Transactions on Haptics, vol. 18, no. 2, pp. 374–386.
Pub Date: 2025-03-07 | DOI: 10.1109/TOH.2025.3548880
Romain Le Magueresse;Fabrice Casset;Frédéric Giraud;Munique Kazar Mendes;Daniel Mermin;Rémi Franiatte;Anis Kaci;Mikael Colin
Current flexible haptic technologies struggle to render textures as effectively as rigid friction-reduction surfaces because elastic waves propagate poorly in flexible substrates. Alternative solutions using different actuators have been explored, but their low actuator density hampers fine rendering, and thus texture rendering. To overcome these limits, this paper presents the development, characterization, and evaluation of an innovative haptic solution enabling localized or continuous texture rendering on a flexible surface. Building on previous work, the developed surface is composed of several haptic resonators vibrating at an ultrasonic frequency, driven by piezoelectric actuators, and associated with a polymer matrix. The solution combines the advantages of a rigid haptic surface, which implements friction modulation to obtain texture stimulation, with the conformability of a 75 µm thick polymer sheet. By selectively powering the actuators, it is possible to display simple tactile shapes. Tribological measurements confirm that the friction reduction matches the desired shape. Two studies demonstrated the device's effectiveness: participants identified simple geometric shapes with a 96% success rate and a 14 s detection time, and two users simultaneously recognized independent tactile patterns with 89% accuracy. This flexible device supports simple geometric shape display with texture rendering, multi-touch, and multi-user interaction, offering potential for various applications.
"Reconfigurable Flexible Haptic Interface Using Localized Friction Modulation," IEEE Transactions on Haptics, vol. 18, no. 2, pp. 387–397.
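Displaying a simple tactile shape on such a surface amounts to selecting which resonators to power: friction is reduced only where an active resonator sits inside the target shape. A toy sketch of that selection step (grid coordinates and the shape predicate are hypothetical, not from the paper):

```python
def active_resonators(resonator_xy, inside_shape):
    """Indices of ultrasonic resonators to power so that friction is
    reduced only where the finger crosses the target shape."""
    return [i for i, (x, y) in enumerate(resonator_xy) if inside_shape(x, y)]

# Hypothetical 2x2 resonator grid; "display" the left half-plane as a shape
grid = [(-0.5, -0.5), (0.5, -0.5), (-0.5, 0.5), (0.5, 0.5)]
powered = active_resonators(grid, lambda x, y: x < 0)
```

Reconfiguring the displayed shape is then just swapping the predicate, which is what makes the interface reconfigurable rather than fixed-pattern.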