
Virtual Reality Intelligent Hardware: Latest Publications

ARGA-Unet: Advanced U-net segmentation model using residual grouped convolution and attention mechanism for brain tumor MRI image segmentation
Q1 Computer Science | Pub Date: 2024-06-01 | DOI: 10.1016/j.vrih.2023.05.001
Siyi XUN, Yan ZHANG, Sixu DUAN, Mingwei WANG, Jiangang CHEN, Tong TONG, Qinquan GAO, Chantong LAM, Menghan HU, Tao TAN

Background

Magnetic resonance imaging (MRI) has played an important role in the rapid growth of medical imaging diagnostic technology, especially in the diagnosis and treatment of brain tumors, owing to its non-invasive nature and superior soft-tissue contrast. However, because brain tumors are invasive and highly heterogeneous, they appear highly non-uniform in MRI images and have indistinct boundaries. In addition, manual labeling of tumor regions is time-consuming and laborious.

Methods

To address these issues, this study improves the classical U-net segmentation network with a residual grouped convolution module, a convolutional block attention module (CBAM), and bilinear-interpolation upsampling. The influence of network normalization, loss function, and network depth on segmentation performance is also examined.
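For readers unfamiliar with these building blocks, the sketch below shows one plausible PyTorch composition of a residual grouped-convolution block with CBAM-style channel and spatial attention. The layer sizes, group count, and wiring are illustrative assumptions, not the authors' exact architecture; decoder upsampling would use, e.g., `nn.Upsample(mode="bilinear")`.

```python
import torch
import torch.nn as nn

class CBAM(nn.Module):
    """Convolutional block attention: channel attention, then spatial attention."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels))
        self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))         # channel attention from average pool
        mx = self.mlp(x.amax(dim=(2, 3)))          # ... and from max pool
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
        s = torch.cat([x.mean(1, keepdim=True), x.amax(1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))  # spatial attention map

class ResidualGroupedBlock(nn.Module):
    """Two 3x3 grouped convolutions with a residual shortcut, followed by CBAM."""
    def __init__(self, channels, groups=4):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, groups=groups),
            nn.BatchNorm2d(channels), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1, groups=groups),
            nn.BatchNorm2d(channels))
        self.attn = CBAM(channels)

    def forward(self, x):
        return torch.relu(x + self.attn(self.body(x)))

x = torch.randn(1, 32, 64, 64)
print(ResidualGroupedBlock(32)(x).shape)  # torch.Size([1, 32, 64, 64])
```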

Results

In the experiments, the Dice score of the proposed segmentation model reached 97.581%, 12.438% higher than that of the traditional U-net, demonstrating effective segmentation of brain tumor MRI images.
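The reported Dice score is defined as twice the overlap of prediction and ground truth divided by their total size; a minimal reference implementation for binary masks (array names illustrative):

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary segmentation masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum() + eps)

a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 0, 0], [0, 1, 1]])
print(round(dice_score(a, b), 3))  # 0.667: 2*2 overlapping pixels / (3 + 3)
```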

Conclusions

In conclusion, the improved U-net achieves good segmentation of brain tumor MRI images.

Face animation based on multiple sources and perspective alignment
Q1 Computer Science | Pub Date: 2024-06-01 | DOI: 10.1016/j.vrih.2024.04.002
Yuanzong Mei, Wenyi Wang, Xi Liu, Wei Yong, Weijie Wu, Yifan Zhu, Shuai Wang, Jianwen Chen

Background

Face image animation generates a synthetic human face video that harmoniously integrates the identity derived from a source image with facial motion obtained from a driving video. This technology could be beneficial in multiple medical fields, such as diagnosis and privacy protection. Previous studies on face animation often relied on a single source image to generate the output video. When there is a significant pose difference between the source image and the driving frame, the quality of the generated video is likely to be suboptimal, because the source image may not provide sufficient features for the warped feature map.

Methods

In this study, we propose a novel face-animation scheme based on multiple sources and perspective alignment to address these issues. We first introduce a multiple-source sampling and selection module to screen the optimal source image set from the provided driving video. We then propose an inter-frame interpolation and alignment module to further eliminate the misalignment between the selected source image and the driving frame.
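The abstract does not state the selection criterion; as a toy illustration of a sampling-and-selection step, the hypothetical function below ranks candidate source frames by head-pose proximity to the driving frame (the pose representation and the top-k choice are assumptions):

```python
import numpy as np

def select_sources(source_poses: np.ndarray, driving_pose: np.ndarray, k: int = 3):
    """Pick the k source frames whose (yaw, pitch, roll) angles lie closest
    to the driving frame's pose; a stand-in for a sampling/selection module."""
    dist = np.linalg.norm(source_poses - driving_pose, axis=1)
    return np.argsort(dist)[:k]

poses = np.array([[30.0, 5.0, 0.0], [5.0, 0.0, 2.0],
                  [-10.0, 3.0, 1.0], [8.0, -2.0, 0.0]])
print(select_sources(poses, np.array([6.0, -1.0, 1.0]), k=2))  # [1 3]
```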

Conclusions

Compared with other state-of-the-art face animation methods, the proposed method exhibits superior performance in terms of objective metrics and visual quality in large-angle animation scenes, indicating its effectiveness in addressing distortion in large-angle animation.

A review of medical ocular image segmentation
Q1 Computer Science | Pub Date: 2024-06-01 | DOI: 10.1016/j.vrih.2024.04.001
Lai WEI, Menghan HU

Deep learning has been extensively applied to medical image segmentation, resulting in significant advancements in the field of deep neural networks for medical image segmentation since the notable success of U-Net in 2015. However, the application of deep learning models to ocular medical image segmentation poses unique challenges, especially compared to other body parts, due to the complexity, small size, and blurriness of such images, coupled with the scarcity of data. This article aims to provide a comprehensive review of medical image segmentation from two perspectives: the development of deep network structures and the application of segmentation in ocular imaging. Initially, the article introduces an overview of medical imaging, data processing, and performance evaluation metrics. Subsequently, it analyzes recent developments in U-Net-based network structures. Finally, for the segmentation of ocular medical images, the application of deep learning is reviewed and categorized by the type of ocular tissue.
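Since the review centers on U-Net-derived structures, a compact sketch of the defining pattern, an encoder and decoder joined by skip connections, may help orient readers; the depth and channel counts below are illustrative, not any specific model from the survey:

```python
import torch
import torch.nn as nn

def block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU())

class TinyUNet(nn.Module):
    """One-level U-Net: the encoder-to-decoder skip connection is the defining feature."""
    def __init__(self, in_ch=1, out_ch=1, base=16):
        super().__init__()
        self.enc = block(in_ch, base)
        self.down = nn.MaxPool2d(2)
        self.mid = block(base, base * 2)
        self.up = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec = block(base * 2, base)   # channels doubled by the concat below
        self.head = nn.Conv2d(base, out_ch, 1)

    def forward(self, x):
        e = self.enc(x)
        m = self.mid(self.down(e))
        d = self.dec(torch.cat([self.up(m), e], dim=1))  # skip connection
        return self.head(d)

print(TinyUNet()(torch.randn(1, 1, 64, 64)).shape)  # torch.Size([1, 1, 64, 64])
```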

Automatic detection of breast lesions in automated 3D breast ultrasound with cross-organ transfer learning
Q1 Computer Science | Pub Date: 2024-06-01 | DOI: 10.1016/j.vrih.2024.02.001
Lingyun BAO, Zhengrui HUANG, Zehui LIN, Yue SUN, Hui CHEN, You LI, Zhang LI, Xiaochen YUAN, Lin XU, Tao TAN

Background

Deep convolutional neural networks have garnered considerable attention in numerous machine learning applications, particularly in visual recognition tasks such as image and video analysis. There is growing interest in applying this technology to diverse applications in medical image analysis. Automated three-dimensional breast ultrasound is a vital tool for detecting breast cancer, and computer-assisted diagnosis software developed on the basis of deep learning can effectively assist radiologists in diagnosis. However, the network model is prone to overfitting during training owing to challenges such as insufficient training data. This study attempts to solve the problems caused by small datasets and to improve model detection performance.

Methods

We propose a breast cancer detection framework based on deep learning (a transfer learning method based on cross-organ cancer detection) and a contrastive learning method based on breast imaging reporting and data systems (BI-RADS).
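The abstract does not give the loss; one plausible reading of "contrastive learning based on BI-RADS" is a supervised contrastive objective in which images sharing a BI-RADS category form positive pairs. A minimal sketch under that assumption:

```python
import torch
import torch.nn.functional as F

def supcon_loss(features, labels, temperature=0.1):
    """Supervised contrastive loss: embeddings sharing a BI-RADS category are
    pulled together, all others pushed apart (an assumed reading of the paper)."""
    z = F.normalize(features, dim=1)
    sim = z @ z.T / temperature
    n = z.size(0)
    eye = torch.eye(n, dtype=torch.bool)
    pos = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~eye  # positives, minus self
    log_prob = sim - torch.logsumexp(
        sim.masked_fill(eye, float("-inf")), dim=1, keepdim=True)
    return -(log_prob * pos).sum(1).div(pos.sum(1).clamp(min=1)).mean()

feats = torch.randn(8, 128)                      # embeddings from a backbone
birads = torch.tensor([2, 2, 3, 3, 4, 4, 5, 5])  # hypothetical BI-RADS categories
print(supcon_loss(feats, birads).item())
```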

Results

When using cross-organ transfer learning and BI-RADS-based contrastive learning, the average sensitivity of the model increased by up to 16.05%.

Conclusion

Our experiments demonstrate that the parameters and experience of cross-organ cancer detection can be mutually referenced and that BI-RADS-based contrastive learning can improve the detection performance of the model.

Combining machine and deep transfer learning for mediastinal lymph node evaluation in patients with lung cancer
Q1 Computer Science | Pub Date: 2024-06-01 | DOI: 10.1016/j.vrih.2023.08.002
Hui XIE, Jianfang ZHANG, Lijuan DING, Tao TAN, Qing LI

Background

The prognosis and survival of patients with lung cancer are likely to deteriorate with metastasis. Using deep learning to detect lymph node metastasis facilitates the noninvasive estimation of the likelihood of such metastasis, thereby providing clinicians with crucial information to enhance diagnostic precision and, ultimately, improve patient survival and prognosis.

Methods

In total, 623 eligible patients were recruited from two medical institutions. Seven deep learning models, namely AlexNet, GoogLeNet, Resnet18, Resnet101, Vgg16, Vgg19, and MobileNetv3 (small), were used to extract deep image histological features. The dimensionality of the extracted features was then reduced using the Spearman correlation coefficient (r ≥ 0.9) and the Least Absolute Shrinkage and Selection Operator (LASSO). Eleven machine learning methods, namely Support Vector Machine, K-nearest neighbor, Random Forest, Extra Trees, XGBoost, LightGBM, Naive Bayes, AdaBoost, Gradient Boosting Decision Tree, Linear Regression, and Multilayer Perceptron, were employed to construct classification prediction models for the filtered final features. The diagnostic performance of the models was assessed using various metrics, including accuracy, area under the receiver operating characteristic curve, sensitivity, specificity, positive predictive value, and negative predictive value. Calibration and decision-curve analyses were also performed.
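The filtering-and-fitting pipeline described here maps directly onto scipy and scikit-learn; the condensed sketch below uses synthetic arrays in place of the extracted deep features, and substitutes logistic regression for the paper's linear-regression classification step:

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.linear_model import LassoCV, LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))       # stand-in for CNN-extracted deep features
y = rng.integers(0, 2, size=200)     # stand-in lymph node status labels

# 1) Drop one feature of every pair with Spearman |r| >= 0.9.
rho = np.abs(spearmanr(X)[0])
keep = [i for i in range(X.shape[1])
        if not any(rho[i, j] >= 0.9 for j in range(i))]
X = X[:, keep]

# 2) LASSO keeps features with nonzero coefficients.
lasso = LassoCV(cv=5).fit(StandardScaler().fit_transform(X), y)
mask = lasso.coef_ != 0
if mask.any():                       # guard: pure-noise demo data may zero out everything
    X = X[:, mask]

# 3) Fit a final classifier on the filtered features.
clf = make_pipeline(StandardScaler(), LogisticRegression()).fit(X, y)
print(X.shape, clf.score(X, y))
```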

Results

The present study demonstrated that deep radiomic features extracted from Vgg16, in conjunction with a prediction model constructed via a linear regression algorithm, effectively distinguished the status of mediastinal lymph nodes in patients with lung cancer. The accuracy, area under the receiver operating characteristic curve, sensitivity, specificity, positive predictive value, and negative predictive value of the model were 0.808, 0.834, 0.851, 0.745, 0.829, and 0.776, respectively. The validation set of the model was assessed using clinical decision curves, calibration curves, and confusion matrices, which collectively demonstrated the model's stability and accuracy.

Conclusion

In this study, information on the deep radiomics of Vgg16 was obtained from computed tomography images, and the linear regression method was able to accurately diagnose mediastinal lymph node metastases in patients with lung cancer.

Intelligent diagnosis of atrial septal defect in children using echocardiography with deep learning
Q1 Computer Science | Pub Date: 2024-06-01 | DOI: 10.1016/j.vrih.2023.05.002
Yiman LIU, Size HOU, Xiaoxiang HAN, Tongtong LIANG, Menghan HU, Xin WANG, Wei GU, Yuqi ZHANG, Qingli LI, Jiangang CHEN

Background

Atrial septal defect (ASD) is one of the most common congenital heart diseases. The diagnosis of ASD via transthoracic echocardiography is subjective and time-consuming.

Methods

The objective of this study was to evaluate the feasibility and accuracy of automatic detection of ASD in children from static color Doppler echocardiographic images using end-to-end convolutional neural networks. The proposed depthwise separable convolution model identifies ASDs from static color Doppler images in a standard view. Among the standard views, we selected two echocardiographic views, i.e., the subcostal sagittal view of the atrial septum and the low parasternal four-chamber view. The developed ASD detection system was validated using a training set of 396 echocardiographic images corresponding to 198 cases. Additionally, an independent test dataset of 112 images corresponding to 56 cases was used, including 101 cases with ASDs and 153 cases with normal hearts.
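The model's core operation, depthwise separable convolution, factors a standard convolution into a per-channel spatial filter followed by a 1×1 pointwise mix; a minimal PyTorch sketch (sizes illustrative):

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise 3x3 (one filter per channel) followed by a pointwise 1x1 mix;
    costs roughly 1/c_out + 1/9 of a standard 3x3 convolution's multiply-adds."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.depthwise = nn.Conv2d(c_in, c_in, 3, padding=1, groups=c_in)
        self.pointwise = nn.Conv2d(c_in, c_out, 1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

x = torch.randn(1, 3, 224, 224)                # a color Doppler frame, illustratively
print(DepthwiseSeparableConv(3, 32)(x).shape)  # torch.Size([1, 32, 224, 224])
```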

Results

The average area under the receiver operating characteristic curve, recall, precision, specificity, F1-score, and accuracy of the proposed ASD detection model were 91.99, 80.00, 82.22, 87.50, 79.57, and 83.04, respectively.
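All six reported metrics derive from prediction/label pairs; for reference, a compact scikit-learn sketch on toy data (specificity has no built-in scorer, so it is computed from the confusion counts):

```python
import numpy as np
from sklearn.metrics import (accuracy_score, f1_score, precision_score,
                             recall_score, roc_auc_score)

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_score = np.array([0.9, 0.2, 0.7, 0.4, 0.3, 0.6, 0.8, 0.1])  # model probabilities
y_pred = (y_score >= 0.5).astype(int)

tn = np.sum((y_true == 0) & (y_pred == 0))
fp = np.sum((y_true == 0) & (y_pred == 1))
print("AUC        ", roc_auc_score(y_true, y_score))
print("recall     ", recall_score(y_true, y_pred))
print("precision  ", precision_score(y_true, y_pred))
print("specificity", tn / (tn + fp))
print("F1         ", f1_score(y_true, y_pred))
print("accuracy   ", accuracy_score(y_true, y_pred))
```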

Conclusions

The proposed model can accurately and automatically identify ASD, providing a strong foundation for the intelligent diagnosis of congenital heart diseases.

Towards engineering a portable platform for laparoscopic pre-training in virtual reality with haptic feedback
Q1 Computer Science | Pub Date: 2024-04-01 | DOI: 10.1016/j.vrih.2023.10.007
Hans-Georg Enkler, Wolfgang Kunert, Stefan Pfeffer, Kai-Jonas Bock, Steffen Axt, Jonas Johannink, Christoph Reich

Background

Laparoscopic surgery is a surgical technique in which special instruments are inserted through small incisions in the body. For some time, efforts have been made to improve surgical pre-training through practical exercises on abstracted and reduced models.

Methods

The authors strive for a portable, easy-to-use, and cost-effective virtual reality (VR)-based laparoscopic pre-training platform and therefore address the question of how such a system must be designed to achieve the quality of today's gold standard, which uses real tissue specimens. Current VR controllers are limited with regard to haptic feedback. Since haptic feedback is necessary, or at least beneficial, for laparoscopic surgery training, the platform to be developed consists of a newly designed prototype laparoscopic VR controller with haptic feedback, a commercially available head-mounted display, a VR environment for simulating laparoscopic surgery, and a training concept.

Results

To take full advantage of benefits such as the repeatability and cost-effectiveness of VR-based training, the system shall not require a tissue sample for haptic feedback; the interaction force is instead calculated and visually displayed to the user in the VR environment. On the prototype controller, a first axis was provided with perceptible feedback for test purposes. Two of the prototype VR controllers can be combined to simulate a typical two-handed use case, e.g., laparoscopic suturing. A Unity-based VR prototype allows the execution of simple standard pre-trainings.

Conclusions

The first prototype enables full operation of a virtual laparoscopic instrument in VR. In addition, the simulation can compute simple interaction forces. Major challenges lie in a realistic real-time tissue simulation and calculation of forces for the haptic feedback. Mechanical weaknesses were identified in the first hardware prototype, which will be improved in subsequent versions. All degrees of freedom of the controller are to be provided with haptic feedback. To make forces tangible in the simulation, characteristic values need to be determined using real tissue samples. The system has yet to be validated by cross-comparing real and VR haptics with surgeons.
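As a concrete reference for the "simple interaction forces" mentioned above, a common baseline is a penalty-based spring-damper contact model; the sketch below uses illustrative constants, not the prototype's actual calibration:

```python
def contact_force(penetration_m: float, velocity_m_s: float,
                  k: float = 800.0, d: float = 2.5) -> float:
    """Penalty-based haptic contact: F = k*x + d*v while the virtual instrument
    penetrates tissue, zero otherwise (stiffness k and damping d illustrative)."""
    if penetration_m <= 0.0:
        return 0.0
    return k * penetration_m + d * velocity_m_s

# 2 mm penetration while pushing in at 1 cm/s -> ~1.6 N command to the actuator
print(contact_force(0.002, 0.01))  # 1.625
```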

Effects of virtual agents on interaction efficiency and environmental immersion in MR environments
Q1 Computer Science | Pub Date: 2024-04-01 | DOI: 10.1016/j.vrih.2023.11.001
Yihua Bao, Jie Guo, Dongdong Weng, Yue Liu, Zeyu Tian

Background

Physical entity interactions in mixed reality (MR) environments aim to harness human capabilities in manipulating physical objects, thereby enhancing virtual environment (VE) functionality. In MR, a common strategy is to use virtual agents as substitutes for physical entities, balancing interaction efficiency with environmental immersion. However, the impact of virtual agent size and form on interaction performance remains unclear.

Methods

Two experiments were conducted to explore how virtual agent size and form affect interaction performance, immersion, and preference in MR environments. The first experiment assessed five virtual agent sizes (25%, 50%, 75%, 100%, and 125% of physical size). The second experiment tested four types of frames (no frame, consistent frame, half frame, and surrounding frame) across all agent sizes. Participants, utilizing a head-mounted display, performed tasks involving moving cups, typing words, and using a mouse. They completed questionnaires assessing aspects such as the virtual environment effects, interaction effects, collision concerns, and preferences.

Results

Results from the first experiment revealed that agents matching physical object size produced the best overall performance. The second experiment demonstrated that consistent framing notably enhances interaction accuracy and speed but reduces immersion. To balance efficiency and immersion, frameless agents matching physical object sizes were deemed optimal.

Conclusions

Virtual agents matching physical entity sizes enhance user experience and interaction performance. Conversely, familiar frames from 2D interfaces detrimentally affect interaction and immersion in virtual spaces. This study provides valuable insights for the future development of MR systems.

VR-based digital twin for remote monitoring of mining equipment: Architecture and a case study
Q1 Computer Science | Pub Date: 2024-04-01 | DOI: 10.1016/j.vrih.2023.12.002
Jovana Plavšić, Ilija Mišković

Background

Traditional methods for monitoring mining equipment rely primarily on visual inspections, which are time-consuming, inefficient, and hazardous. This article introduces a novel approach to monitoring mission-critical systems and services in the mining industry by integrating virtual reality (VR) and digital twin (DT) technologies. VR-based DTs enable remote equipment monitoring, advanced analysis of machine health, enhanced visualization, and improved decision making.

Methods

This article presents an architecture for VR-based DT development, including the developmental stages, activities, and stakeholders involved. Using the proposed methodology, a case study on condition monitoring of a conveyor belt with real-time synthetic vibration sensor data was conducted. The study demonstrated the application of the methodology to remote monitoring and identified the further development required for implementation in active mining operations. The article also discusses interdisciplinarity, choice of tools, computational resources, time and cost, human involvement, user acceptance, frequency of inspection, multiuser environments, potential risks, and applications beyond the mining industry.
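To give a flavor of the case study's data path, the sketch below generates a synthetic vibration signal and flags a fault-like condition by an RMS threshold; the signal composition, frequencies, and alarm level are all assumptions for illustration, not the paper's actual pipeline:

```python
import numpy as np

def synth_vibration(seconds=1.0, fs=1000, fault=False, rng=None):
    """Synthetic conveyor vibration: a base rotation tone plus noise, with an
    extra high-frequency component when a fault condition is simulated."""
    rng = rng or np.random.default_rng()
    t = np.arange(0, seconds, 1 / fs)
    x = 0.5 * np.sin(2 * np.pi * 24 * t) + 0.1 * rng.standard_normal(t.size)
    if fault:
        x += 0.8 * np.sin(2 * np.pi * 180 * t)
    return x

def rms(x):
    return float(np.sqrt(np.mean(x ** 2)))

THRESHOLD = 0.5  # alarm level, an assumption for the sketch
for name, sig in [("healthy", synth_vibration()), ("faulty", synth_vibration(fault=True))]:
    print(name, round(rms(sig), 3), "ALARM" if rms(sig) > THRESHOLD else "ok")
```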

Results

The findings of this study provide a foundation for future research in the domain of VR-based DTs for remote equipment monitoring and a novel application area for VR in mining.

Exploring the effect of fingertip aero-haptic feedforward cues in directing eyes-free target acquisition in VR
Q1 Computer Science | Pub Date: 2024-04-01 | DOI: 10.1016/j.vrih.2023.12.001
Xiaofei Ren, Jian He, Teng Han, Songxian Liu, Mengfei Lv, Rui Zhou

Background

The sense of touch plays a crucial role in interactive behavior within virtual spaces, particularly when visual attention is absent. Although haptic feedback has been widely used to compensate for the lack of visual cues, the use of tactile information as a predictive feedforward cue to guide hand movements remains unexplored and lacks theoretical understanding.

Methods

This study introduces a fingertip aero-haptic rendering method to investigate its effectiveness in directing hand movements during eyes-free spatial interactions. The wearable device incorporates a multichannel micro-airflow chamber to deliver adjustable tactile effects on the fingertips.

Results

The first study verified that tactile directional feedforward cues significantly improve user capabilities in eyes-free target acquisition and that users rely heavily on haptic indications rather than spatial memory to control their hands. A subsequent study examined the impact of enriched tactile feedforward cues on assisting users in determining precise target positions during eyes-free interactions, and assessed the required learning efforts.
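One way to picture a directional feedforward cue is to map the target's bearing relative to the fingertip onto per-channel airflow intensities. The four-channel layout and cosine weighting below are hypothetical, not the paper's actual mapping:

```python
import math

CHANNEL_BEARINGS = [0.0, 90.0, 180.0, 270.0]  # assumed 4-channel layout (degrees)

def airflow_intensities(target_bearing_deg: float):
    """Cosine-weighted directional cue: channels facing the target blow harder,
    channels facing away stay off (clamped at zero)."""
    return [max(0.0, math.cos(math.radians(target_bearing_deg - b)))
            for b in CHANNEL_BEARINGS]

print([round(v, 2) for v in airflow_intensities(30.0)])  # [0.87, 0.5, 0.0, 0.0]
```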

Conclusions

The haptic feedforward effect holds great practical promise for eyes-free design in virtual reality. In future work, we aim to integrate cognitive models with tactile feedforward cues and to apply richer tactile feedforward information to alleviate users' perceptual deficiencies.
