Pub Date: 2025-03-16. DOI: 10.1109/TOH.2025.3570810
Øystein Bjelland;Bismi Rasheed;Intissar Cherif;Andreas Fagerhaug Dalen;Amine Chellali;Martin Steinert;Robin T. Bye
This paper presents a novel method for simplifying kinesthetic haptic rendering of complex contact interactions in arthroscopic surgery training simulators using reality-based force profiles. We demonstrate continuous kinesthetic feedback for arthroscopic knee portal creation and diagnostic meniscus examination. This involves measuring characteristic force profiles in ex vivo experiments, implementing the simulator in SOFA, and performing user validation experiments. When comparing the method with linear-elastic-based haptic feedback for meniscus stiffness discrimination, novices had difference thresholds of 1.80 MPa (linear-elastic) and 1.47 MPa (reality-based), while experts showed thresholds of 0.99 MPa and 1.39 MPa, respectively, indicating finer sensitivity among experts. Experts also used significantly less force (p < 0.05) and had shorter decision times (p < 0.05) than novices across both methods, indicating construct validity. Although kinesthetic feedback was verified with ex vivo experiments for portal creation, user validation was inconclusive here due to minor inconsistencies in the integration of visual and haptic feedback. Limitations include triggering material removal via instrument penetration instead of haptic force limits, as well as omitting contact vibrations. The method incurs only a minor reduction in computation speed. Examples are available on GitHub.
"Haptic Rendering Using Reality-Based Force Profiles in Surgical Simulation," IEEE Transactions on Haptics, vol. 18, no. 3, pp. 569-581.
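The core idea of reality-based rendering is to replace a linear-elastic law F = kx with a lookup into a measured force-displacement profile. A minimal sketch of that substitution is below; the profile values are invented placeholders, not data from the paper's ex vivo experiments.

```python
import bisect

# Hypothetical measured profile: instrument displacement (mm) vs. force (N).
# A real profile would come from ex vivo measurements as described above.
PROFILE = [(0.0, 0.0), (0.5, 0.3), (1.0, 0.9), (1.5, 2.1), (2.0, 4.8), (2.5, 6.0)]

def profile_force(x_mm):
    """Piecewise-linear interpolation of a measured force profile.
    Clamps outside the measured displacement range."""
    xs = [p[0] for p in PROFILE]
    if x_mm <= xs[0]:
        return PROFILE[0][1]
    if x_mm >= xs[-1]:
        return PROFILE[-1][1]
    i = bisect.bisect_right(xs, x_mm)
    (x0, f0), (x1, f1) = PROFILE[i - 1], PROFILE[i]
    t = (x_mm - x0) / (x1 - x0)
    return f0 + t * (f1 - f0)

def linear_elastic_force(x_mm, k=2.0):
    """Baseline linear-elastic rendering, F = k * x (k is a placeholder)."""
    return k * x_mm
```

The table lookup runs in O(log n) per haptic frame, which is consistent with the abstract's claim of only a minor computational cost relative to the linear-elastic baseline.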
Pub Date: 2025-03-16. DOI: 10.1109/TOH.2025.3570795
Christophe van der Walt;Sara Falcone;Jan van Erp;Stefano Stramigioli;Douwe Dresscher
Model-Mediated Teleoperation (MMT) is a teleoperation method in which a model of the environment is displayed to the operator to provide delay-free feedback. The choice of model is important to the performance of the system. A more descriptive model gives the operator more accurate feedback, but it can cause problems for the estimator and renderer required to make MMT function. However, if certain environmental dynamics are not used by the operator to effectively manipulate the environment, they could be excluded from the feedback, thus mitigating the problems caused for the estimator and renderer. This work investigates whether mass and friction modelling influence an operator's effectiveness at accomplishing teleoperated tasks, as measured by a subjective Sense of Embodiment, environmental interaction force, and task completion time. It was found that mass had a significant influence (p < 0.001) on task completion time and interaction force, whereas static and dynamic friction only influenced completion time when mass feedback was absent (p < 0.001). In all cases, the presence of a dynamic effect increased interaction force and task completion time.
"The Influence of Mass and Friction in Teleoperated Tasks," IEEE Transactions on Haptics, vol. 18, no. 3, pp. 783-788.
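The dynamics under study, mass and Coulomb friction, can be sketched as a local environment model that computes the force an operator must apply. This is a generic textbook model, not the estimator or renderer used in the paper; all parameter values are illustrative.

```python
def model_force(v, a, mass=2.0, mu_s=0.6, mu_d=0.4, normal=9.81 * 2.0, eps=1e-3):
    """Force (N) needed to move a modelled object at velocity v (m/s)
    with acceleration a (m/s^2).

    Sums an inertial term (mass * a) and a Coulomb friction term:
    static friction resists motion onset, dynamic friction opposes sliding.
    Setting mass=0 or mu_s=mu_d=0 corresponds to excluding that dynamic
    from the feedback, as the study's conditions do."""
    inertial = mass * a
    if abs(v) < eps:            # at rest: static friction threshold
        friction = mu_s * normal
    else:                       # sliding: dynamic friction opposes velocity
        friction = mu_d * normal * (1.0 if v > 0 else -1.0)
    return inertial + friction
```

Dropping a term from this model simplifies estimation and rendering, which is exactly the trade-off the abstract describes: excluded dynamics cannot cause estimator or renderer problems, but the study shows their absence changes operator performance.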
Pub Date: 2025-03-13. DOI: 10.1109/TOH.2025.3569724
Xiaohan Zhao;Mengwei Pang;Ping Ji;Aimin Hao;Dangxiao Wang
Tooth extraction simulation with force feedback can provide a valuable training tool for dental students, familiarizing them with the detailed motion and force patterns involved in this procedure. This simulation encounters two major challenges: replicating the forceps' 7-DoF motion and accurately simulating the distinct phases of tooth extraction. This paper presents a comprehensive haptic simulation framework for simulating tooth extraction with force feedback, combining both hardware and software solutions. A pivotal feature of this system is the 7-DoF haptic rendering algorithm capable of simulating the 7-DoF motion of forceps. Additionally, a haptic handle resembling the extraction forceps and offering robust connectivity is developed. Furthermore, a multi-phase tooth extraction framework is proposed to simulate the entire tooth extraction process. This framework incorporates physical models to emulate the haptic characteristics of different extraction phases and includes predefined entry criteria for each phase to achieve accurate identification and seamless transitions. The system's effectiveness is validated through objective and subjective experiments, confirming its ability to faithfully replicate the unique haptic features of each extraction phase. Feedback from dental novices and experts indicates that this system could make a significant contribution to tooth extraction training, providing distinct advantages over traditional oral model practices.
"Haptic Rendering for Multi-Phase Tooth Extraction Process," IEEE Transactions on Haptics, vol. 18, no. 3, pp. 556-568.
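A multi-phase framework with predefined entry criteria is naturally expressed as a state machine that only advances when the next phase's criterion holds. The sketch below is a toy version: the phase names and threshold quantities (luxation angle, traction force) are assumptions, since the abstract does not specify them.

```python
# Ordered extraction phases (hypothetical names for illustration).
PHASES = ["grip", "luxation", "traction", "removal"]

def next_phase(phase, luxation_deg, traction_n):
    """Advance to the next extraction phase only when its entry
    criterion is satisfied; otherwise stay in the current phase.
    Thresholds are placeholders, not values from the paper."""
    criteria = {
        "luxation": lambda: True,                # entered once the forceps grip
        "traction": lambda: luxation_deg >= 15,  # tooth sufficiently mobilised
        "removal":  lambda: traction_n >= 20,    # enough axial pull applied
    }
    i = PHASES.index(phase)
    if i + 1 < len(PHASES) and criteria[PHASES[i + 1]]():
        return PHASES[i + 1]
    return phase
```

Gating transitions this way gives the "accurate identification and seamless transitions" property: each phase's physical model stays active until the measured interaction satisfies the next phase's entry criterion.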
Pub Date: 2025-03-10. DOI: 10.1109/TOH.2025.3549677
Lynette A. Jones;Hsin-Ni Ho
This review focuses on the interactions between the cutaneous senses, and in particular touch and temperature, as these are the most relevant for developing skin-based display technologies for use in virtual reality (VR) and for designing multimodal haptic devices. A broad spectrum of research is reviewed, ranging from studies that have examined the mechanisms involved in thermal intensification and tactile masking, to more applied work that has focused on implementing thermal-tactile illusions such as thermal referral and illusory wetness in VR environments. Research on these tactile-thermal illusions has identified the differences between the senses of cold and warmth in terms of their effects on the perception of object properties and the prevalence of the perceptual experiences elicited. It has also underscored the fundamental spatial and temporal differences between the tactile and thermal senses. The wide-ranging body of research on compound sensations such as wetness and stickiness has highlighted the mechanisms involved in sensing moisture and provided a framework for measuring these sensations in a variety of contexts. Although the interactions between the two senses are complex, it is clear that the addition of thermal inputs to a tactile display both enhances user experience and enables novel sensory experiences.
"Tactile–Thermal Interactions: Cooperation and Competition," IEEE Transactions on Haptics, vol. 18, no. 3, pp. 456-469.
Pub Date: 2025-03-09. DOI: 10.1109/TOH.2025.3568705
Brendan Driscoll;Nita Prabhu;I-Chieh Lee;Ming Liu;He Huang
High density (HD) haptic interfaces have become increasingly common for entertainment thanks to advancements in virtual reality technology; however, their flexibility may make them a useful sensory substitution interface for motor rehabilitation. Yet little research has explored how users interpret different haptic feedback encoding methods. Therefore, this study's objective was to evaluate the effectiveness of various encoding methods, based on existing sensory substitution strategies, for conveying information in two tasks: line motion tracking and direction tracking. The first encoding method was Perceived Position Encoding (PPE), where information was encoded into the perceived position of stimulation. The second was Perceived Intensity Encoding (PIE), which encoded information into the perceived amplitude of the stimuli. Twenty-one participants performed tracking tasks using both the PIE and PPE methods. The results showed similar performance in line motion tracking between the PIE and PPE methods, although the extra motors used in the PPE method appear to introduce uncertainty in users. Nevertheless, users were significantly more accurate with direction tracking when using PPE. These findings highlight the need for task-specific encoding methods, and showcase the versatility of the HD haptic vest as a tool for augmented feedback in motor rehabilitation.
"Evaluation on Human Perception of Various Vibrotactile Encoding Methods Through a High Density Haptic Feedback Interface," IEEE Transactions on Haptics, vol. 18, no. 3, pp. 531-541.
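Perceived Position Encoding is commonly implemented as phantom-sensation panning: the two motors adjacent to a continuous target position are driven with complementary amplitudes so the sensation appears between them. The sketch below assumes linear amplitude panning over a 1-D motor array; the study does not specify its interpolation law or motor count.

```python
def ppe_amplitudes(target, n_motors=8):
    """Map a continuous target position in [0, 1] onto per-motor
    amplitudes for Perceived Position Encoding.

    Only the two motors bracketing the target are active; their
    amplitudes sum to 1 (linear panning is an assumption)."""
    pos = target * (n_motors - 1)       # position in motor-index units
    lo = min(int(pos), n_motors - 2)    # lower neighbour, clamped at the end
    frac = pos - lo
    amps = [0.0] * n_motors
    amps[lo] = 1.0 - frac
    amps[lo + 1] = frac
    return amps
```

With this scheme a target halfway along the array drives the two middle motors equally, illustrating why PPE needs "extra motors" active at once, the property the study identifies as a source of user uncertainty in line motion tracking.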
Pub Date: 2025-03-09. DOI: 10.1109/TOH.2025.3568804
Chungman Lim;Gyeongdeok Kim;Su-Yeon Kang;Hasti Seifi;Gunhyuk Park
Vibrotactile signals offer new possibilities for conveying sensations and emotions in various applications. Yet, designing vibrotactile icons (i.e., Tactons) to evoke specific feelings often requires a trial-and-error process and user studies. To support haptic design, we propose a framework for predicting roughness and emotional ratings from vibration signals. We created 154 Tactons and conducted a study to collect acceleration data from smartphones and roughness, valence, and arousal user ratings (n = 36). We converted the Tacton signals into two-channel spectrograms reflecting the spectral sensitivities of mechanoreceptors, then input them into VibNet, our dual-stream neural network. The first stream captures sequential features using recurrent networks, while the second captures temporal-spectral features using 2D convolutional networks. VibNet outperformed baseline models, with 82% of its predictions falling within the standard deviations of ground truth user ratings for two new Tacton sets. We discuss the efficacy of our mechanoreceptive processing and dual-stream neural network and present future research directions.
"Can a Machine Feel Vibrations?: Predicting Roughness and Emotional Responses to Vibration Tactons via a Neural Network," IEEE Transactions on Haptics, vol. 18, no. 3, pp. 542-555.
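The "two-channel spectrogram reflecting the spectral sensitivities of mechanoreceptors" step amounts to weighting each frequency bin by the sensitivity of two channels, roughly the RA population (peaking near 40 Hz) and the PC population (peaking near 250 Hz). The weighting functions below are placeholder log-Gaussian bumps; the abstract does not give the actual filter shapes used for VibNet.

```python
import math

def channel_weights(freq_hz, width=0.5):
    """Toy per-bin gains for two mechanoreceptive channels.

    Each channel is modelled as a Gaussian bump in log-frequency,
    centred at an assumed peak sensitivity (RA ~40 Hz, PC ~250 Hz).
    Multiplying a spectrogram column by these gains yields the two
    channels of a mechanoreceptor-weighted spectrogram."""
    def bump(f, peak):
        d = math.log10(f) - math.log10(peak)
        return math.exp(-(d * d) / (2 * width * width))
    return bump(freq_hz, 40.0), bump(freq_hz, 250.0)
```

Splitting the signal this way gives the downstream network inputs that are already scaled by how strongly each receptor population would respond, rather than raw physical amplitude.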
Pub Date: 2025-03-07. DOI: 10.1109/TOH.2025.3549036
Bryan A. MacGavin;Jennifer L. Tennison;Terra Edwards;Jenna L. Gorlewicz
Despite the richness of the human tactile capacity, remote communication practices often lack touch-based interactions. This leads to overtaxing our visual and auditory channels, a lack of connection and engagement, and inaccessibility for diverse sensory groups. In this paper, we learn from the haptic intuitions of the blind and low vision (BLV) and Protactile DeafBlind (PT-DB) communities to investigate how core functions of communication can be routed through tactile channels. We investigate this re-routing by designing the Conversational Haptic Technology (CHAT) system, a wearable haptic system to explore the feasibility of language recreation through core functions of communication and emotional expression via touch. We contribute the design evolution of an input (sensing) pad and an output (actuation) pad, which enable a bidirectional, wireless system to support remote, touch-based communication. These systems were iteratively evaluated through a series of user studies with sighted-hearing (N=20), BLV (N=4), and PT-DB (N=7) participants to uncover touch profiles for relaying specific communication functions and emotional responses. Results indicate trends and similarities in the touch-based cues organically employed across the diverse groups and provide an initial framework for demonstrating the feasibility of communicating core functions through touch in a wearable form factor.
"The CHAT System: A Wearable Haptic System for Facilitating Tactile Communication," IEEE Transactions on Haptics, vol. 18, no. 2, pp. 374-386.
Pub Date: 2025-03-07. DOI: 10.1109/TOH.2025.3548880
Romain Le Magueresse;Fabrice Casset;Frédéric Giraud;Munique Kazar Mendes;Daniel Mermin;Rémi Franiatte;Anis Kaci;Mikael Colin
Current flexible haptic technologies struggle to render textures as effectively as rigid surfaces with friction reduction, owing to the poor propagation of elastic waves in flexible substrates. Alternative solutions using different actuators have been explored, but their low actuator density hampers fine rendering, and hence texture rendering. To overcome these limits, this paper presents the development, characterization, and evaluation of an innovative haptic solution enabling localized or continuous texture rendering on a flexible surface. Building on previous work, the developed surface is composed of several haptic resonators vibrating at an ultrasonic frequency, driven by piezoelectric actuators, and embedded in a polymer matrix. The solution combines the advantages of a rigid haptic surface, implementing friction modulation to obtain texture stimulation, with the conformability of a 75 µm thick polymer sheet. By selectively powering the actuators, it is possible to display simple tactile shapes. Tribological measurements confirm that the friction reduction matches the desired shape. Two studies demonstrated the device's effectiveness: participants identified simple geometric shapes with a 96% success rate and a 14 s detection time, and two users simultaneously recognized independent tactile patterns with 89% accuracy. This flexible device supports simple geometric shape display with texture rendering, multi-touch, and multi-user interaction, offering potential for various applications.
"Reconfigurable Flexible Haptic Interface Using Localized Friction Modulation," IEEE Transactions on Haptics, vol. 18, no. 2, pp. 387-397.
Pub Date: 2025-03-05. DOI: 10.1109/TOH.2025.3548478
Justine Saint-Aubert
The Snail is a wearable haptic interface that enables users to experience force feedback when grasping objects in Virtual Reality. It consists of a 3D-printed prop attached to the tip of the thumb that can rotate thanks to a small actuator. The prop is shaped like a snail to display different grasping sizes, ranging from 1.5 cm to 7 cm, according to its orientation. Because the prop itself transmits the force feedback, forces over 100 N can be displayed between the fingers using small, low-power actuation. Very rigid objects can be rendered when the prop remains static, while rotating the prop as users grasp it also allows for the simulation of soft objects. The Snail is portable, low-cost, and easy to reproduce because it is made of 3D-printed parts. The design and performance of the device were evaluated through technical evaluations and three user experiments. They show that participants can discriminate different grasping sizes and levels of softness with the interface. The Snail also enhances user experience and performance in Virtual Reality compared to standard vibration feedback.
{"title":"The Snail: A Wearable Actuated Prop to Simulate Grasp of Rigid and Soft Objects in Virtual Reality","authors":"Justine Saint-Aubert","doi":"10.1109/TOH.2025.3548478","DOIUrl":"10.1109/TOH.2025.3548478","url":null,"abstract":"The Snail is a wearable haptic interface that enables users to experience force feedback when grasping objects in Virtual Reality. It consists of a 3D-printed prop attached to the tip of the thumb that can rotate thanks to a small actuator. The prop is shaped like a snail to display different grasping sizes, ranging from <inline-formula><tex-math>$ 1.5,text{cm}$</tex-math></inline-formula> to <inline-formula><tex-math>$ 7,text{cm}$</tex-math></inline-formula>, according to its orientation. The prop displays the force feedback, so forces over <inline-formula><tex-math>$ 100,text{N}$</tex-math></inline-formula> can be displayed between fingers using small and low-power actuation. Very rigid objects can be rendered when the prop remains static, but rotations when the users grasp the prop also allow for the simulation of soft objects. The Snail is portable, low-cost, and easy to reproduce because it is made of 3D-printed parts. The design and performance of the device were evaluated through technical evaluations and 3 user experiments. They show that participants can discriminate different grasping sizes and levels of softness with the interface. 
The Snail also enhances user experience and performance in Virtual Reality compared to standard vibration feedback.","PeriodicalId":13215,"journal":{"name":"IEEE Transactions on Haptics","volume":"18 2","pages":"362-373"},"PeriodicalIF":2.4,"publicationDate":"2025-03-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143566935","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
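The abstract above describes a prop whose orientation presents grasp apertures from 1.5 cm to 7 cm. As an illustrative sketch only (the paper's actual profile and actuator range are not given here), one could map a target aperture to a rotation angle by assuming the snail shape follows a logarithmic spiral; all constants below are hypothetical:

```python
import math

# Hypothetical mapping from target grasp aperture to prop rotation angle,
# assuming the snail profile is a logarithmic spiral r(theta) = a * e^(b*theta).
D_MIN, D_MAX = 1.5, 7.0        # aperture range in cm (from the abstract)
THETA_MAX = math.radians(300)  # assumed usable rotation range of the actuator

# Growth rate chosen so the spiral spans the full aperture range over THETA_MAX.
B = math.log(D_MAX / D_MIN) / THETA_MAX

def aperture_to_angle(d_cm: float) -> float:
    """Return the prop rotation angle (radians) that presents aperture d_cm."""
    if not D_MIN <= d_cm <= D_MAX:
        raise ValueError("aperture outside the displayable range")
    return math.log(d_cm / D_MIN) / B
```

Under these assumptions the smallest aperture corresponds to zero rotation and the largest to the full assumed actuator travel, with a smooth monotonic mapping in between.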
Pub Date : 2025-03-03DOI: 10.1109/TOH.2025.3546979
Chandler Stubbs;Kathleen Steadman;David M. Bevly;Chad G. Rose
While much work is being done to advance autonomous capabilities of mobile robotics, specifically unmanned ground vehicles (UGVs), some applications might currently be too complex or undesirable for full autonomy. Maintaining a human in the loop has proven to be a reliable strategy in these applications, yet there are currently limitations to the efficacy of human operators. Haptic feedback has been proposed as a method of addressing these limitations and aiding UGV operators in safe and effective operation. This manuscript presents the experimental validation of LARIAT (Lowering Attention Requirements in semi-Autonomous Teleoperation), a portable haptic device for teleoperated semi-autonomous UGVs. This device utilizes an adapted predictive form of the Zero-Moment Point (ZMP) rollover index to inform haptic squeeze cues provided to the UGV operator for human-on-the-loop notifications. First, a brief design overview of LARIAT, the implemented haptic control, and the ZMP index are presented. In addition to experimental device characterization of the just noticeable difference, we present a case study that demonstrates LARIAT's ability to improve teleoperation performance. In an experiment involving a simulation of walking behind a semi-autonomous UGV, LARIAT reduced the number of UGV rollovers by up to 50%, with comparable or increased performance in concurrent secondary tasks.
{"title":"LARIAT: Predictive Haptic Feedback to Improve Semi-Autonomous UGV Safety in a Case Study","authors":"Chandler Stubbs;Kathleen Steadman;David M. Bevly;Chad G. Rose","doi":"10.1109/TOH.2025.3546979","DOIUrl":"10.1109/TOH.2025.3546979","url":null,"abstract":"While much work is being done to advance autonomous capabilities of mobile robotics, specifically unmanned ground vehicles (UGVs), some applications might currently be too complex or undesirable for full autonomy. Maintaining a human in the loop has proven to be a reliable strategy in these applications, yet there are currently limitations to the efficacy of human operators. Haptic feedback has been proposed as a method of addressing these limitations, and aiding UGV operators in safe and effective operation. This manuscript presents the experimental validation of LARIAT (Lowering Attention Requirements in semi-Autonomous Teleoperation), a portable haptic device for teleoperated semi-autonomous UGVs. This device utilizes an adapted predictive form of the Zero-Moment Point (ZMP) rollover index to inform haptic squeeze cues provided to the UGV operator for human-on-the-loop notifications. First, a brief design overview of LARIAT, implemented haptic control, and the ZMP index are presented. In addition to experimental device characterization of the just noticeable difference, we present a case study that demonstrates LARIAT's abilities to improve teleoperation performance. 
In an experiment involving a simulation of walking behind a semi-autonomous UGV, LARIAT reduced the number of UGV rollovers by up to 50%, with comparable or increased performance in concurrent secondary tasks.","PeriodicalId":13215,"journal":{"name":"IEEE Transactions on Haptics","volume":"18 3","pages":"763-769"},"PeriodicalIF":2.8,"publicationDate":"2025-03-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143541865","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
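The LARIAT abstract above describes mapping a predictive ZMP rollover index to haptic squeeze cues. The paper's adapted predictive formulation is not reproduced here; the following is only a minimal sketch of the general idea for a rigid-body vehicle model, where the lateral ZMP offset grows with lateral acceleration and the cue ramps up as the ZMP nears the edge of the wheel track. All function names and the `warn` threshold are hypothetical:

```python
# Sketch of a ZMP-style rollover index for a UGV (rigid-body assumption):
# the zero-moment point shifts laterally with lateral acceleration a_y,
# and rollover risk grows as it approaches the edge of the wheel track.
G = 9.81  # gravitational acceleration, m/s^2

def zmp_rollover_index(a_y: float, cg_height: float, track_width: float) -> float:
    """Normalized index: ~0 is stable, ~1 means the ZMP is at the track edge
    (incipient rollover). a_y in m/s^2, cg_height and track_width in meters."""
    y_zmp = a_y * cg_height / G            # lateral ZMP offset for a rigid body
    return abs(y_zmp) / (track_width / 2)  # normalize by half the track width

def squeeze_intensity(index: float, warn: float = 0.6) -> float:
    """Map the index to a haptic squeeze command in [0, 1], ramping linearly
    once the index exceeds the (assumed) warning threshold `warn`."""
    return min(max((index - warn) / (1.0 - warn), 0.0), 1.0)
```

A predictive variant, as the abstract suggests, would evaluate this index on a short-horizon forecast of `a_y` rather than the instantaneous value, so the squeeze cue arrives before the vehicle reaches the rollover boundary.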