
Virtual Reality Intelligent Hardware: Latest Publications

Publisher’s Note: Hardware—A New Open Access Journal
Q1 Computer Science Pub Date: 2023-03-30 DOI: 10.3390/hardware1010002
Liliane Auwerter
The development of new hardware has never been as accessible as it is today [...]
{"title":"Publisher’s Note: Hardware—A New Open Access Journal","authors":"Liliane Auwerter","doi":"10.3390/hardware1010002","DOIUrl":"https://doi.org/10.3390/hardware1010002","url":null,"abstract":"The development of new hardware has never been as accessible as it is today [...]","PeriodicalId":33538,"journal":{"name":"Virtual Reality Intelligent Hardware","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-03-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74840471","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A Transformer Architecture based mutual attention for Image Anomaly Detection
Q1 Computer Science Pub Date: 2023-02-01 DOI: 10.1016/j.vrih.2022.07.006
Mengting Zhang, Xiuxia Tian

Background

Image anomaly detection is a popular task in computer graphics and is widely used in industrial fields. Previous works that address this problem often train CNN-based models (e.g., autoencoders, GANs) to reconstruct covered parts of input images and calculate the difference between the input and the reconstructed image. However, convolutional operations are good at extracting local features, which makes it difficult to identify larger image anomalies. To this end, we propose a transformer architecture based on mutual attention for image anomaly separation. This architecture can capture long-term dependencies and fuse local features with global features to facilitate better image anomaly detection. Our method was extensively evaluated on several benchmarks; experimental results show that it improves detection capability by 3.1% and localization capability by 1.0% compared with state-of-the-art reconstruction-based methods.
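
The mutual-attention fusion described above is easy to illustrate. Below is a minimal PyTorch sketch, not the authors' code: tokens from a CNN branch (local features) and a transformer branch (global features) cross-attend to each other and are merged through residual connections. The module name, dimensions, and token counts are all illustrative assumptions.

```python
# Hypothetical sketch of mutual attention between local (CNN) and global
# (transformer) feature tokens; not the paper's released implementation.
import torch
import torch.nn as nn

class MutualAttentionBlock(nn.Module):
    def __init__(self, dim: int = 256, heads: int = 8):
        super().__init__()
        # Two cross-attention directions: local->global and global->local.
        self.local_to_global = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.global_to_local = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm_local = nn.LayerNorm(dim)
        self.norm_global = nn.LayerNorm(dim)

    def forward(self, local_feat, global_feat):
        # local_feat, global_feat: (batch, tokens, dim)
        g, _ = self.local_to_global(global_feat, local_feat, local_feat)
        l, _ = self.global_to_local(local_feat, global_feat, global_feat)
        # Residual fusion keeps each stream's own signal alongside the other's.
        return self.norm_local(local_feat + l), self.norm_global(global_feat + g)

cnn_tokens = torch.randn(2, 196, 256)  # e.g., a flattened 14x14 CNN feature map
vit_tokens = torch.randn(2, 196, 256)  # e.g., transformer patch embeddings
local_out, global_out = MutualAttentionBlock()(cnn_tokens, vit_tokens)
print(local_out.shape, global_out.shape)  # torch.Size([2, 196, 256]) twice
```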

{"title":"A Transformer Architecture based mutual attention for Image Anomaly Detection","authors":"Mengting Zhang,&nbsp;Xiuxia Tian","doi":"10.1016/j.vrih.2022.07.006","DOIUrl":"https://doi.org/10.1016/j.vrih.2022.07.006","url":null,"abstract":"<div><h3>Background</h3><p>Image anomaly detection is a popular task in computer graphics, which is widely used in industrial fields. Previous works that address this problem often train CNN-based (e.g. Auto-Encoder, GANs) models to reconstruct covered parts of input images and calculate the difference between the input and the reconstructed image. However, convolutional operations are good at extracting local features making it difficult to identify larger image anomalies. To this end, we propose a transformer architecture based on mutual attention for image anomaly separation. This architecture can capture long-term dependencies and fuse local features with global features to facilitate better image anomaly detection. Our method was extensively evaluated on several benchmarks, and experimental results showed that it improved detection capability by 3.1% and localization capability by 1.0% compared with state-of-the-art reconstruction-based methods.</p></div>","PeriodicalId":33538,"journal":{"name":"Virtual Reality Intelligent Hardware","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49830409","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
View Interpolation Networks for Reproducing Material Appearance of Specular Objects
Q1 Computer Science Pub Date: 2023-02-01 DOI: 10.1016/j.vrih.2022.11.001
Chihiro Hoshizawa, Takashi Komuro

In this study, we propose view interpolation networks to reproduce changes in the brightness of an object's surface depending on the viewing direction, which is important in reproducing the material appearance of a real object. We use an original and a modified version of U-Net for image transformation. The networks were trained to generate images from intermediate viewpoints of four cameras placed at the corners of a square. We conducted an experiment with three different combinations of methods and training data formats. We found that it is best to input the coordinates of the viewpoints together with the four camera images and to use images from random viewpoints as the training data.
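
The finding that viewpoint coordinates work best when fed in together with the four camera images can be sketched concretely. The toy network below is a minimal PyTorch sketch, not the authors' U-Net: it stacks the corner images along the channel axis and broadcasts the target viewpoint as two constant coordinate planes; every shape and name is an assumption.

```python
# Hypothetical sketch of the input format: four corner-camera images plus
# broadcast viewpoint coordinates; a stand-in for the paper's U-Net variants.
import torch
import torch.nn as nn

class TinyViewInterp(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4 * 3 + 2, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),  # RGB image at the new viewpoint
        )

    def forward(self, corner_images, viewpoint):
        # corner_images: (B, 4, 3, H, W); viewpoint: (B, 2), normalized to [0, 1]
        b, n, c, h, w = corner_images.shape
        stacked = corner_images.reshape(b, n * c, h, w)          # channel stack
        coords = viewpoint[:, :, None, None].expand(b, 2, h, w)  # constant planes
        return self.net(torch.cat([stacked, coords], dim=1))

views = torch.randn(1, 4, 3, 64, 64)  # images from the four corner cameras
out = TinyViewInterp()(views, torch.tensor([[0.3, 0.7]]))
print(out.shape)  # torch.Size([1, 3, 64, 64])
```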

{"title":"View Interpolation Networks for Reproducing Material Appearance of Specular Objects","authors":"Chihiro Hoshizawa,&nbsp;Takashi Komuro","doi":"10.1016/j.vrih.2022.11.001","DOIUrl":"https://doi.org/10.1016/j.vrih.2022.11.001","url":null,"abstract":"<div><p>In this study, we propose view interpolation networks to reproduce changes in the brightness of an object's surface depending on the viewing direction, which is important in reproducing the material appearance of a real object. We use an original and a modified version of U-Net for image transformation. The networks were trained to generate images from intermediate viewpoints of four cameras placed at the corners of a square. We conducted an experiment with three different combinations of methods and training data formats. We found that it is best to input the coordinates of the viewpoints together with the four camera images and to use images from random viewpoints as the training data.</p></div>","PeriodicalId":33538,"journal":{"name":"Virtual Reality Intelligent Hardware","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49830406","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Unrolling Rain-guided Detail Recovery Network for Single Image Deraining
Q1 Computer Science Pub Date: 2023-02-01 DOI: 10.1016/j.vrih.2022.06.002
Kailong Lin, Shaowei Zhang, Yu Luo, Jie Ling

Owing to the rapid development of deep networks, single image deraining has achieved significant progress. Various architectures have been designed to remove rain recursively or directly, and most rain streaks can be removed by existing deraining methods. However, many of them lose details during deraining, resulting in visual artifacts. To resolve this detail-losing issue, we propose a novel unrolling rain-guided detail recovery network (URDRN) for single image deraining, based on the observation that the most degraded areas of the background image tend to be the most rain-corrupted regions. Furthermore, to address the problem that most existing deep-learning-based methods trivialize the observation model and simply learn an end-to-end mapping, the proposed URDRN unrolls the single image deraining task into two subproblems: rain extraction and detail recovery. Specifically, a context aggregation attention network is first introduced to effectively extract rain streaks; a rain attention map is then generated as an indicator to guide the detail-recovery process. For the detail-recovery sub-network, with the guidance of the rain attention map, a simple encoder–decoder model is sufficient to recover the lost details. Experiments on several well-known benchmark datasets show that the proposed approach achieves competitive performance in comparison with other state-of-the-art methods.
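
A minimal PyTorch sketch of the two-subproblem unrolling may help. The two sub-networks below are toy placeholders for the paper's context aggregation attention network and detail-recovery encoder-decoder; all names and dimensions are assumptions.

```python
# Hypothetical sketch of URDRN's unrolling: rain extraction produces an
# attention map that then guides detail recovery; not the authors' code.
import torch
import torch.nn as nn

class ToyURDRN(nn.Module):
    def __init__(self):
        super().__init__()
        # Subproblem 1: rain extraction -> one-channel attention map in [0, 1].
        self.rain_net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),
        )
        # Subproblem 2: detail recovery, conditioned on the image and the map.
        self.recover_net = nn.Sequential(
            nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1),
        )

    def forward(self, rainy):
        rain_map = self.rain_net(rainy)        # where rain (and detail loss) is
        coarse = rainy - rain_map * rainy      # crude rain removal
        detail = self.recover_net(torch.cat([coarse, rain_map], dim=1))
        return coarse + detail, rain_map       # restored image, attention map

restored, rain_map = ToyURDRN()(torch.randn(1, 3, 64, 64))
print(restored.shape, rain_map.shape)  # (1, 3, 64, 64) and (1, 1, 64, 64)
```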

{"title":"Unrolling Rain-guided Detail Recovery Network for Single Image Deraining","authors":"Kailong Lin,&nbsp;Shaowei Zhang,&nbsp;Yu Luo,&nbsp;Jie Ling","doi":"10.1016/j.vrih.2022.06.002","DOIUrl":"https://doi.org/10.1016/j.vrih.2022.06.002","url":null,"abstract":"<div><p>Owing to the rapid development of deep networks, single image deraining tasks have achieved significant progress. Various architectures have been designed to recursively or directly remove rain, and most rain streaks can be removed by existing deraining methods. However, many of them cause a loss of details during deraining, resulting in visual artifacts. To resolve the detail-losing issue, we propose a novel unrolling rain-guided detail recovery network (URDRN) for single image deraining based on the observation that the most degraded areas of the background image tend to be the most rain-corrupted regions. Furthermore, to address the problem that most existing deep-learning-based methods trivialize the observation model and simply learn an end-to-end mapping, the proposed URDRN unrolls the single image deraining task into two subproblems: rain extraction and detail recovery. Specifically, first, a context aggregation attention network is introduced to effectively extract rain streaks, and then, a rain attention map is generated as an indicator to guide the detail-recovery process. For a detail-recovery sub-network, with the guidance of the rain attention map, a simple encoder–decoder model is sufficient to recover the lost details. Experiments on several well-known benchmark datasets show that the proposed approach can achieve a competitive performance in comparison with other state-of-the-art methods.</p></div>","PeriodicalId":33538,"journal":{"name":"Virtual Reality Intelligent Hardware","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49830408","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
IAACS: Image Aesthetic Assessment Through Color Composition And Space Formation
Q1 Computer Science Pub Date: 2023-02-01 DOI: 10.1016/j.vrih.2022.06.006
Bailin Yang, Changrui Zhu, Frederick W.B. Li, Tianxiang Wei, Xiaohui Liang, Qingxu Wang

Judging whether an image is visually appealing is a complicated and subjective task, which strongly motivates the use of a machine learning model to evaluate image aesthetics automatically, matching the aesthetic judgments of the general public. Although deep learning methods have been successful at learning good visual features from images, correctly assessing the aesthetic quality of an image remains challenging for deep learning. To tackle this, we propose a novel multi-view convolutional neural network that assesses image aesthetics by analyzing image color composition and space formation (IAACS). Specifically, from different views of an image, including its key color components and their contributions, the image's space formation, and the image itself, our network extracts the corresponding features through our proposed feature extraction module (FET) and an ImageNet weight-based classification model. By fusing the extracted features, our network produces an accurate prediction of the image's aesthetic score distribution. Experimental results show that our approach achieves superior performance.
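
To make the multi-view fusion concrete, here is a minimal PyTorch sketch under stated assumptions: three tiny branch encoders stand in for the paper's FET module and ImageNet weight-based model, and the fused features are mapped to a 10-bin score distribution (a common aesthetic-assessment target, assumed here).

```python
# Hypothetical sketch of IAACS-style multi-view fusion; branch encoders and
# the 10-bin output are illustrative assumptions, not the paper's modules.
import torch
import torch.nn as nn

class ToyIAACS(nn.Module):
    def __init__(self, feat_dim: int = 64, bins: int = 10):
        super().__init__()
        def branch(in_ch):  # tiny conv encoder -> one global feature vector
            return nn.Sequential(
                nn.Conv2d(in_ch, feat_dim, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
        self.color_branch = branch(3)  # key color components view
        self.space_branch = branch(1)  # space-formation (layout) view
        self.image_branch = branch(3)  # the image itself
        self.head = nn.Linear(3 * feat_dim, bins)

    def forward(self, color_view, space_view, image):
        fused = torch.cat([
            self.color_branch(color_view),
            self.space_branch(space_view),
            self.image_branch(image),
        ], dim=1)
        return self.head(fused).softmax(dim=1)  # aesthetic score distribution

dist = ToyIAACS()(torch.randn(1, 3, 64, 64),
                  torch.randn(1, 1, 64, 64),
                  torch.randn(1, 3, 64, 64))
print(dist.shape, float(dist.sum()))  # torch.Size([1, 10]), ~1.0
```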

{"title":"IAACS: Image Aesthetic Assessment Through Color Composition And Space Formation","authors":"Bailin Yang ,&nbsp;Changrui zhu ,&nbsp;Frederick W.B. Li ,&nbsp;Tianxiang Wei ,&nbsp;Xiaohui Liang ,&nbsp;Qingxu Wang","doi":"10.1016/j.vrih.2022.06.006","DOIUrl":"https://doi.org/10.1016/j.vrih.2022.06.006","url":null,"abstract":"<div><p>Judging how an image is visually appealing is a complicated and subjective task. This highly motivates having a machine learning model to automatically evaluate image aesthetic by matching the aesthetics of general public. Although deep learning methods have been successfully learning good visual features from images, correctly assessing image aesthetic quality is still challenging for deep learning. To tackle this, we propose a novel multi-view convolutional neural network to assess image aesthetic by analyzing image color composition and space formation (IAACS). Specifically, from different views of an image, including its key color components with their contributions, the image space formation and the image itself, our network extracts their corresponding features through our proposed feature extraction module (FET) and the ImageNet weight-based classification model. By fusing the extracted features, our network produces an accurate prediction score distribution of image aesthetic. Experiment results have shown that we have achieved a superior performance.</p></div>","PeriodicalId":33538,"journal":{"name":"Virtual Reality Intelligent Hardware","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49830410","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
COVAD: Content-Oriented Video Anomaly Detection using a Self-Attention based Deep Learning Model
Q1 Computer Science Pub Date: 2023-02-01 DOI: 10.1016/j.vrih.2022.06.001
Wenhao Shao, Praboda Rajapaksha, Yanyan Wei, Dun Li, Noel Crespi, Zhigang Luo

Background

Video anomaly detection has long been a hot topic and is attracting increasing attention. Many existing methods for video anomaly detection process the entire video rather than considering only the significant context. This paper proposes a novel video anomaly detection method named COVAD, which focuses mainly on the region of interest in the video instead of the entire video. Our proposed COVAD method is based on an auto-encoded convolutional neural network and a coordinated attention mechanism, which can effectively capture meaningful objects in the video and the dependencies between different objects. Relying on an existing memory-guided video frame prediction network, our algorithm can more effectively predict the future motion and appearance of objects in a video. The proposed algorithm obtained better experimental results on multiple datasets and outperformed the baseline models considered in our analysis. We also improved a visual test that can provide pixel-level anomaly explanations.
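
The coordinated attention mechanism can be sketched compactly. The minimal PyTorch module below assumes the coordinate-attention pattern of pooling along height and width separately, so the resulting weights keep positional information about where meaningful objects sit; channel sizes are illustrative and this is not the authors' released code.

```python
# Hypothetical coordinate-attention sketch: position-aware channel weights
# from directional pooling; a stand-in for COVAD's attention module.
import torch
import torch.nn as nn

class CoordinateAttention(nn.Module):
    def __init__(self, channels: int = 32, reduction: int = 8):
        super().__init__()
        mid = max(channels // reduction, 4)
        self.squeeze = nn.Sequential(nn.Conv2d(channels, mid, 1), nn.ReLU())
        self.attn_h = nn.Conv2d(mid, channels, 1)
        self.attn_w = nn.Conv2d(mid, channels, 1)

    def forward(self, x):
        b, c, h, w = x.shape
        pooled_h = x.mean(dim=3, keepdim=True)                      # (B, C, H, 1)
        pooled_w = x.mean(dim=2, keepdim=True).permute(0, 1, 3, 2)  # (B, C, W, 1)
        y = self.squeeze(torch.cat([pooled_h, pooled_w], dim=2))    # shared 1x1 conv
        y_h, y_w = y.split([h, w], dim=2)
        a_h = self.attn_h(y_h).sigmoid()                            # (B, C, H, 1)
        a_w = self.attn_w(y_w).permute(0, 1, 3, 2).sigmoid()        # (B, C, 1, W)
        return x * a_h * a_w  # reweight features, keeping positional cues

feat = torch.randn(2, 32, 16, 16)
print(CoordinateAttention()(feat).shape)  # torch.Size([2, 32, 16, 16])
```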

{"title":"COVAD: Content-Oriented Video Anomaly Detection using a Self-Attention based Deep Learning Model","authors":"Wenhao Shao ,&nbsp;Praboda Rajapaksha ,&nbsp;Yanyan Wei ,&nbsp;Dun Li ,&nbsp;Noel Crespi ,&nbsp;Zhigang Luo","doi":"10.1016/j.vrih.2022.06.001","DOIUrl":"https://doi.org/10.1016/j.vrih.2022.06.001","url":null,"abstract":"<div><h3>Background</h3><p>Video anomaly detection has always been a hot topic and attracting an increasing amount of attention. Much of the existing methods on video anomaly detection depend on processing the entire video rather than considering only the significant context. This paper proposes a novel video anomaly detection method named COVAD, which mainly focuses on the region of interest in the video instead of the entire video. Our proposed COVAD method is based on an auto-encoded convolutional neural network and coordinated attention mechanism, which can effectively capture meaningful objects in the video and dependencies between different objects. Relying on the existing memory-guided video frame prediction network, our algorithm can more effectively predict the future motion and appearance of objects in the video. Our proposed algorithm obtained better experimental results on multiple data sets and outperformed the baseline models considered in our analysis. At the same time we improve a visual test that can provide pixel-level anomaly explanations.</p></div>","PeriodicalId":33538,"journal":{"name":"Virtual Reality Intelligent Hardware","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49830407","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A Point Cloud Upsampling Adversarial Network Based on Residual Multi-Scale Off-Set Attention
Q1 Computer Science Pub Date: 2023-02-01 DOI: 10.1016/j.vrih.2022.08.016
Bin Shen, Li Li, Xinrong Hu, Shengyi Guo, Jin Huang, Zhiyao Liang

Due to the limitations of the working principle of 3D scanning equipment, point clouds obtained by 3D scanning are usually sparse and unevenly distributed. In this paper, we propose a new Generative Adversarial Network (GAN) for point cloud upsampling, extended from PU-GAN. Its core design replaces the traditional Self-Attention (SA) module with an implicit Laplacian Off-Set Attention (OA) module, and adjacency features are aggregated using a Multi-Scale Off-Set Attention (MSOA) module, which adaptively adjusts the receptive field to learn various structural features. Finally, residual links are added to form our Residual Multi-Scale Off-Set Attention (RMSOA) module, which utilizes multi-scale structural relationships to generate finer details. Extensive experiments show that the performance of our method is superior to that of existing methods and that our model is highly robust.
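
Off-Set Attention can be illustrated with a short sketch. The minimal PyTorch module below follows the offset-attention pattern used in point cloud transformers: self-attention over point features, after which the offset between the input and the attention output (a graph-Laplacian-like difference) is transformed and added back residually. The paper's multi-scale aggregation and PU-GAN integration are omitted; all dimensions are assumptions.

```python
# Hypothetical offset-attention sketch for point features; not the paper's
# RMSOA module, which additionally aggregates several scales with residuals.
import torch
import torch.nn as nn

class OffsetAttention(nn.Module):
    def __init__(self, dim: int = 128):
        super().__init__()
        self.q = nn.Linear(dim, dim // 4, bias=False)
        self.k = nn.Linear(dim, dim // 4, bias=False)
        self.v = nn.Linear(dim, dim, bias=False)
        self.lbr = nn.Sequential(nn.Linear(dim, dim), nn.ReLU())  # linear + ReLU

    def forward(self, x):
        # x: (batch, num_points, dim) point-wise features
        attn = torch.softmax(self.q(x) @ self.k(x).transpose(1, 2), dim=-1)
        attended = attn @ self.v(x)
        offset = x - attended        # Laplacian-like difference term
        return x + self.lbr(offset)  # residual connection around the offset

points = torch.randn(2, 1024, 128)      # a batch of point feature sets
print(OffsetAttention()(points).shape)  # torch.Size([2, 1024, 128])
```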

{"title":"A Point Cloud Upsampling Adversarial Network Based on Residual Multi-Scale Off-Set Attention","authors":"Bin Shen ,&nbsp;Li Li ,&nbsp;Xinrong Hu ,&nbsp;Shengyi Guo ,&nbsp;Jin Huang ,&nbsp;Zhiyao Liang","doi":"10.1016/j.vrih.2022.08.016","DOIUrl":"https://doi.org/10.1016/j.vrih.2022.08.016","url":null,"abstract":"<div><p>Due to the limitation of the working principle of 3D scanning equipment, the point cloud obtained by 3D scanning is usually sparse and unevenly distributed. In this paper, we propose a new Generative Adversarial Network(GAN) for point cloud upsampling, which is extended from PU-GAN. Its core architecture is to replace the traditional Self-Attention (SA) module with the implicit Laplacian Off-Set Attention(OA) module, and adjacency features are aggregated using the Multi-Scale Off-Set Attention(MSOA) module, which adaptively adjusts the receptive field to learn various structural features. Finally, Residual links were added to form our Residual Multi-Scale Off-Set Attention (RMSOA) module, which utilized multi-scale structural relationships to generate finer details. A large number of experiments show that the performance of our method is superior to the existing methods, and our model has high robustness.</p></div>","PeriodicalId":33538,"journal":{"name":"Virtual Reality Intelligent Hardware","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49868174","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Metaverse Virtual Social Center for the Elderly Communication During the Social Distancing
Q1 Computer Science Pub Date: 2023-02-01 DOI: 10.1016/j.vrih.2022.07.007
Hui Liang, Jiupeng Li, Yi Wang, Junjun Pan, Yazhou Zhang, Xiaohang Dong

A lack of social activities, often for physical reasons, can leave the elderly feeling lonely and prone to depression. With the spread of COVID-19, it has become difficult for the elderly to carry out even their few social activities stably, making them lonelier still. The metaverse is a virtual world that mirrors reality. It allows the elderly to break free of real-world constraints and take part in social activities stably and continuously, providing new ideas for alleviating loneliness among the elderly. Based on an analysis of the needs of the elderly, this study proposes a virtual social center framework for the elderly, and a prototype system was designed according to the framework. The elderly can socialize in virtual reality using metaverse-related technologies and human-computer interaction tools. Additionally, a test was conducted jointly with the chief physician of the geriatric rehabilitation department of a tertiary hospital. The results demonstrated that the mental state of the elderly who had used the virtual social center was significantly better than that of those who had not. Virtual social centers thus alleviated loneliness and depression in older adults, and they can help the elderly relieve loneliness and depression as the global epidemic normalizes and the population ages. Hence, they have promotion value.

{"title":"Metaverse Virtual Social Center for the Elderly Communication During the Social Distancing","authors":"Hui Liang ,&nbsp;Jiupeng Li ,&nbsp;Yi Wang ,&nbsp;Junjun Pan ,&nbsp;Yazhou Zhang ,&nbsp;Xiaohang Dong","doi":"10.1016/j.vrih.2022.07.007","DOIUrl":"https://doi.org/10.1016/j.vrih.2022.07.007","url":null,"abstract":"<div><p>The lack of social activities in the elderly for physical reasons can make them feel lonely and prone to depression. With the spread of COVID-19, it is difficult for the elderly to conduct the few social activities stably, causing the elderly to be more lonely. The metaverse is a virtual world that mirrors reality. It allows the elderly to get rid of the constraints of reality and perform social activities stably and continuously, providing new ideas for alleviating the loneliness of the elderly. Through the analysis of the needs of the elderly, a virtual social center framework for the elderly was proposed in this study. Besides, a prototype system was designed according to the framework. The elderly can socialize in virtual reality with metaverse-related technologies and human-computer interaction tools. Additionally, a test was jointly conducted with the chief physician of the geriatric rehabilitation department of a tertiary hospital. The results demonstrated that the mental state of the elderly who had used the virtual social center was significantly better than that of the elderly who had not used it. Thus, virtual social centers alleviated loneliness and depression in older adults. Virtual social centers can help the elderly relieve loneliness and depression when the global epidemic is normalizing and the population is aging. Hence, they have promotion value</p></div>","PeriodicalId":33538,"journal":{"name":"Virtual Reality Intelligent Hardware","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49868173","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
Building the metaverse using digital twins at all scales, states, and relations
Q1 Computer Science Pub Date: 2022-12-01 DOI: 10.1016/j.vrih.2022.06.005
Zhihan Lv, Shuxuan Xie, Yuxi Li, M. Shamim Hossain, Abdulmotaleb El Saddik

Developments in new-generation information technology have enabled Digital Twins to reshape the physical world into a virtual digital space and provide technical support for constructing the Metaverse. Metaverse objects can be at the micro-, meso-, or macroscale. The Metaverse is a complex collection of solid, liquid, gaseous, plasma, and other uncertain states. Additionally, the Metaverse integrates tangibles with relations both interpersonal (friends, partners, and family) and societal (ethics, morality, and law). This review introduces principles and laws, such as broken windows theory, the small-world phenomenon, survivor bias, and herd behavior, for constructing a Digital Twins model of social relations. From these multiple perspectives, this article reviews mappings of tangible and intangible real-world objects to the Metaverse using the Digital Twins model.

{"title":"Building the metaverse using digital twins at all scales,states, and relations","authors":"Zhihan Lv ,&nbsp;Shuxuan Xie ,&nbsp;Yuxi Li ,&nbsp;M. Shamim Hossain ,&nbsp;Abdulmotaleb El Saddik","doi":"10.1016/j.vrih.2022.06.005","DOIUrl":"10.1016/j.vrih.2022.06.005","url":null,"abstract":"<div><p>Developments in new-generation information technology have enabled Digital Twins to reshape the physical world into a virtual digital space and provide technical support for constructing the Metaverse. Metaverse objects can be at the micro-, meso-, or macroscale. The Metaverse is a complex collection of solid, liquid, gaseous, plasma, and other uncertain states. Additionally, the Metaverse integrates tangibles with social relations, such as interpersonal (friends, partners, and family) and social relations (ethics, morality, and law). This review introduces some principles and laws, such as broken windows theory, small-world phenomenon, survivor bias, and herd behavior, for constructing a Digital Twins model for social relations. Therefore, from multiple perspectives, this article reviews mappings of tangible and intangible real-world objects to the Metaverse using the Digital Twins model.</p></div>","PeriodicalId":33538,"journal":{"name":"Virtual Reality Intelligent Hardware","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2096579622000602/pdf?md5=a21672be799f10764afb283c622bf66e&pid=1-s2.0-S2096579622000602-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125427534","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 23