
Latest Publications in Virtual Reality Intelligent Hardware

Novel virtual nasal endoscopy system based on computed tomography scans
Q1 Computer Science Pub Date : 2022-08-01 DOI: 10.1016/j.vrih.2021.09.005
Fábio de O. Sousa , Daniel S. da Silva , Tarique da S. Cavalcante , Edson C. Neto , Victor José T. Gondim , Ingrid C. Nogueira , Auzuir Ripardo de Alexandria , Victor Hugo C. de Albuquerque

Background

Currently, many simulator systems for medical procedures are under development. These systems can provide new solutions for training, planning, and testing medical practices, improve performance, and optimize examination time. However, to achieve the best results, certain premises must be followed and applied to the model under development, such as usability, control, graphics realism, and interactive and dynamic gamification.

Methods

This study presents a system for simulating a medical examination procedure of the nasal cavity for training and research purposes, using an accurate computed tomography (CT) scan of a patient as a reference. The pathologies that are used as a guide for the development of the system are highlighted. Furthermore, an overview of current studies covering bench medical mannequins, 3D printing, animals, hardware, software, and software that uses hardware to boost user interaction is given. Finally, a comparison with similar state-of-the-art studies is made.

Results

The main result of this work is a set of interactive gamification techniques that provide an immersive simulated examination in which the user identifies pathologies present in the nasal cavity, such as turbinate hypertrophy, septal deviation, adenoid hypertrophy, nasal polyposis, and tumors.

Citations: 2
Virtual-reality and intelligent hardware in digital twins
Q1 Computer Science Pub Date : 2022-08-01 DOI: 10.1016/j.vrih.2022.08.002
Zhihan Lv , Gustavo Marfia , Fabio Poiesi , Neil Vaughan , Jun Shen
{"title":"Virtual-reality and intelligent hardware in digital twins","authors":"Zhihan Lv ,&nbsp;Gustavo Marfia ,&nbsp;Fabio Poiesi ,&nbsp;Neil Vaughan ,&nbsp;Jun Shen","doi":"10.1016/j.vrih.2022.08.002","DOIUrl":"10.1016/j.vrih.2022.08.002","url":null,"abstract":"","PeriodicalId":33538,"journal":{"name":"Virtual Reality Intelligent Hardware","volume":"4 4","pages":"Pages ii-iv"},"PeriodicalIF":0.0,"publicationDate":"2022-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2096579622000705/pdfft?md5=db4abbeed2bd0f584132707933abf5c0&pid=1-s2.0-S2096579622000705-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131609670","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Balanced-partitioning treemapping method for digital hierarchical dataset
Q1 Computer Science Pub Date : 2022-08-01 DOI: 10.1016/j.vrih.2021.09.006
Cong Feng , Minglun Gong , Oliver Deussen

Background

Visualizing a hierarchical dataset is an important and useful technique in many real-life situations. Folder systems, stock markets, and other hierarchy-related datasets can use this technique to better understand the structure and dynamic variation of the data. Compared with diagram-based methods, traditional space-filling (square-based) methods have the advantages of compact space usage and larger node sizes. Space-filling methods have two main research directions: static and dynamic performance.

Methods

This study presented a treemapping method based on balanced partitioning that enables excellent aspect ratios in one variant, good temporal coherence for dynamic data in another, and, in the third, a satisfactory compromise between these two aspects. To lay out a treemap, all the children of a node were divided into two groups, which were then further divided until groups of single elements were reached. These groups were then combined to form a rectangle representing the parent node. This process was performed for each layer of the hierarchical dataset. For the first variant, the child elements were sorted and two groups, sized as equally as possible, were built from both big and small elements (size-balanced partition). This achieved satisfactory aspect ratios for the rectangles but weaker temporal coherence (dynamic). For the second variant, the sequence of children was taken as given and groups, sized as equally as possible, were created from it without sorting (sequence-based; a good compromise between aspect ratio and temporal coherence). For the third variant, the children were split into two groups of equal cardinality, regardless of their sizes (number-balanced; worse aspect ratios but good temporal coherence).
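A minimal Python sketch of the size-balanced variant is given below. It assumes a greedy largest-first assignment of children to the two groups and a split of the parent rectangle along its longer side in proportion to each group's total size; the paper's exact partitioning and layout rules may differ.

```python
# Sketch of a size-balanced, recursive binary-partition treemap (illustrative only).

def size_balanced_split(items):
    """Divide (name, size) pairs into two groups of roughly equal total size."""
    groups, totals = ([], []), [0.0, 0.0]
    for item in sorted(items, key=lambda it: it[1], reverse=True):
        i = 0 if totals[0] <= totals[1] else 1   # next-largest item goes to the lighter group
        groups[i].append(item)
        totals[i] += item[1]
    return groups

def layout(items, x, y, w, h, out):
    """Recursively split the rectangle (x, y, w, h) among items; collect leaf rectangles."""
    if len(items) == 1:
        out.append((items[0][0], (x, y, w, h)))
        return
    group_a, group_b = size_balanced_split(items)
    frac = sum(s for _, s in group_a) / sum(s for _, s in items)
    if w >= h:                                   # split along the longer side
        layout(group_a, x, y, w * frac, h, out)
        layout(group_b, x + w * frac, y, w * (1 - frac), h, out)
    else:
        layout(group_a, x, y, w, h * frac, out)
        layout(group_b, x, y + h * frac, w, h * (1 - frac), out)

rects = []
layout([("a", 6), ("b", 5), ("c", 4), ("d", 2), ("e", 1)], 0, 0, 100, 60, rects)
for name, rect in rects:
    print(name, rect)
```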

Results

This study evaluated the aspect ratios and dynamic stability of the employed methods and proposed a new metric that measures the visual difference between rectangles during their movement to represent temporally changing inputs.

Conclusion

This study demonstrated that the proposed method of treemapping via balanced partitioning outperformed the state-of-the-art methods for several real-world datasets.

Citations: 1
Measuring 3D face deformations from RGB images of expression rehabilitation exercises
Q1 Computer Science Pub Date : 2022-08-01 DOI: 10.1016/j.vrih.2022.05.004
Claudio Ferrari , Stefano Berretti , Pietro Pala , Alberto Del Bimbo

Background

The accurate (quantitative) analysis of 3D face deformation is a problem of increasing interest in many applications. In particular, fitting a 3D model of face deformation to a 2D target image so as to capture local and asymmetric deformations remains a challenge in the existing literature. A measure of such local deformations may be a relevant index for monitoring the rehabilitation exercises of patients suffering from Parkinson’s or Alzheimer’s disease or those recovering from a stroke.

Methods

In this paper, a complete framework that allows the construction of a 3D morphable shape model (3DMM) of the face is presented for fitting to a target RGB image. The model has the specific characteristic of being based on localized components of deformation. The fitting transformation is performed from 3D to 2D and guided by the correspondence between landmarks detected in the target image and those manually annotated on the average 3DMM. The fitting also has the distinction of being performed in two steps to disentangle face deformations related to the identity of the target subject from those induced by facial actions.
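As a toy illustration of the landmark-guided 3D-to-2D step (not the authors' code), the sketch below estimates an affine camera by least squares from correspondences between 3D landmarks annotated on the average model and 2D landmarks detected in the target image; the full method additionally solves for the coefficients of the localized deformation components, which is omitted here.

```python
# Least-squares affine camera from 3D-2D landmark correspondences (illustrative only).
import numpy as np

def fit_affine_camera(X3d, x2d):
    """Least-squares P (2x4) such that x2d is approximately [X3d, 1] @ P.T."""
    Xh = np.hstack([X3d, np.ones((X3d.shape[0], 1))])   # homogeneous 3D landmarks, (n, 4)
    P, *_ = np.linalg.lstsq(Xh, x2d, rcond=None)         # (4, 2) solution of Xh @ P ~= x2d
    return P.T                                            # (2, 4) affine projection

def project(P, X3d):
    Xh = np.hstack([X3d, np.ones((X3d.shape[0], 1))])
    return Xh @ P.T

# Five hypothetical landmark correspondences (e.g., eye corners, nose tip, chin).
X3d = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.], [1., 1., 1.]])
x2d = np.array([[10., 10.], [30., 10.], [10., 30.], [12., 12.], [34., 34.]])
P = fit_affine_camera(X3d, x2d)
print(np.round(project(P, X3d) - x2d, 2))                 # residuals of the fitted projection
```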

Results

The method was experimentally validated using the MICC-3D dataset, which includes 11 subjects. Each subject was imaged in a neutral pose and while performing 18 facial actions that deform the face in localized and asymmetric ways. For each acquisition, the 3DMM was fitted to an RGB frame, and the extent of the deformation was computed from the apex frame of the facial action and the neutral frame. The results indicate that the proposed approach can accurately capture face deformations, even localized and asymmetric ones.

Conclusion

The proposed framework demonstrated that it is possible to measure deformations of a reconstructed 3D face model to monitor facial actions performed in response to a set of targets. Interestingly, these results were obtained using only RGB targets, without the need for 3D scans captured with costly devices. This paves the way for the use of the proposed tool in remote medical rehabilitation monitoring.

Citations: 0
Advances in wireless sensor networks under AI-5G for augmented reality
Q1 Computer Science Pub Date : 2022-06-01 DOI: 10.1016/j.vrih.2022.06.003
Muhammad Khan
{"title":"Advances in wireless sensor networks under AI-5G for augmented reality","authors":"Muhammad Khan","doi":"10.1016/j.vrih.2022.06.003","DOIUrl":"10.1016/j.vrih.2022.06.003","url":null,"abstract":"","PeriodicalId":33538,"journal":{"name":"Virtual Reality Intelligent Hardware","volume":"4 3","pages":"Pages ii-iv"},"PeriodicalIF":0.0,"publicationDate":"2022-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2096579622000493/pdfft?md5=4539102c7e8705a74eb1beacbee6409b&pid=1-s2.0-S2096579622000493-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128356220","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Deepdive: a learning-based approach for virtual camera in immersive contents
Q1 Computer Science Pub Date : 2022-06-01 DOI: 10.1016/j.vrih.2022.05.001
Muhammad Irfan , Muhammad Munsif

A 360° video stream provides users with a choice of viewing their own point of interest inside the immersive content. Performing head or hand manipulations to view an interesting scene in a 360° video is very tedious, and the user may only glimpse the frame of interest during head or hand movement, or even lose it. At the same time, automatically extracting the user's point of interest (UPI) in a 360° video is very challenging because of subjectivity and differences in viewing comfort. To handle these challenges and provide users with the best and most visually pleasant view, we propose an automatic approach that utilizes two CNN models: an object detector and an aesthetic-score model for the scene. The proposed framework has three stages: pre-processing, the Deepdive architecture, and the view-selection pipeline. In the first stage, an input 360° video frame is divided into three subframes, each with a 120° view. In the second stage, each subframe is passed through the CNN models to extract visual features and compute an aesthetic score. Finally, the decision pipeline selects the subframe containing a salient object based on the detected objects and the calculated aesthetic score. Compared with other state-of-the-art techniques, which are domain-specific approaches (e.g., supporting sports 360° videos), our system supports most 360° video genres. A performance evaluation of the proposed framework on data we collected from various websites reports results for different categories of 360° videos.
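The following sketch illustrates the view-selection pipeline described above. The callables detect_objects and aesthetic_score stand in for the two CNN models and are hypothetical placeholders, and the weighted combination of their outputs is an assumption made for illustration rather than the paper's exact decision rule.

```python
# Sketch of subframe splitting and view selection for a 360-degree frame (illustrative only).
import numpy as np

def split_into_subframes(frame_360):
    """Split an equirectangular frame of shape (H, W, 3) into three 120-degree subframes."""
    h, w, _ = frame_360.shape
    third = w // 3
    return [frame_360[:, i * third:(i + 1) * third] for i in range(3)]

def select_view(frame_360, detect_objects, aesthetic_score, w_obj=0.5, w_aes=0.5):
    """Return the index of the best subframe.

    detect_objects(img) -> list of (label, confidence); aesthetic_score(img) -> float in [0, 1].
    """
    best_idx, best_score = 0, float("-inf")
    for idx, sub in enumerate(split_into_subframes(frame_360)):
        detections = detect_objects(sub)
        obj_score = max((conf for _, conf in detections), default=0.0)
        score = w_obj * obj_score + w_aes * aesthetic_score(sub)
        if score > best_score:
            best_idx, best_score = idx, score
    return best_idx

# Example with dummy stand-ins: a blank frame, no detections, constant aesthetic score.
frame = np.zeros((960, 2880, 3), dtype=np.uint8)
print(select_view(frame, lambda img: [], lambda img: 0.5))
```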

Citations: 2
Serious games in science education: a systematic literature review
Q1 Computer Science Pub Date : 2022-06-01 DOI: 10.1016/j.vrih.2022.02.001
Mohib Ullah , Sareer Ul Amin , Muhammad Munsif , Muhammad Mudassar Yamin , Utkurbek Safaev , Habib Khan , Salman Khan , Habib Ullah

Teaching science through computer games, simulations, and artificial intelligence (AI) is an increasingly active research field. To this end, we conducted a systematic literature review on serious games for science education to reveal research trends and patterns. We discussed the role of virtual reality (VR), AI, and augmented reality (AR) games in teaching science subjects such as physics. Specifically, we covered research published between 2011 and 2021, investigated country-wise concentration and the most common evaluation methods, and discussed the positive and negative aspects of serious games in science education in particular, as well as attitudes towards the use of serious games in education in general.

Citations: 20
Perceptual quality assessment of panoramic stitched contents for immersive applications: a prospective survey
Q1 Computer Science Pub Date : 2022-06-01 DOI: 10.1016/j.vrih.2022.03.004
Hayat Ullah , Sitara Afzal , Imran Ullah Khan

The recent advancements in the fields of virtual reality (VR) and augmented reality (AR) have had a substantial impact on modern-day technology by digitizing nearly everything related to human life, opening the door to the next generation of software technology (Soft Tech). VR and AR technologies provide astonishing immersive content with the help of high-quality stitched panoramic content and 360° imagery, which are widely used in the education, gaming, entertainment, and production sectors. The immersive quality of VR and AR content is greatly dependent on the perceptual quality of the panoramic or 360° images; in fact, a minor visual distortion can significantly degrade the overall quality. Thus, to ensure the quality of panoramic content constructed for VR and AR applications, numerous stitched image quality assessment (SIQA) methods have been proposed to assess the quality of panoramic content before it is used in VR and AR. In this survey, we provide a detailed overview of the SIQA literature and focus exclusively on the objective SIQA methods presented to date. For better understanding, the objective SIQA methods are classified into two classes, namely full-reference and no-reference SIQA approaches. Each class is further categorized into traditional and deep learning-based methods, and their performance on the SIQA task is examined. Furthermore, we shortlist the publicly available benchmark SIQA datasets and the evaluation metrics used for quality assessment of panoramic content. Finally, we highlight the current challenges in this area based on existing SIQA methods and suggest future research directions for further improvement in the SIQA domain.

Citations: 1
AR-assisted children book for smart teaching and learning of Turkish alphabets
Q1 Computer Science Pub Date : 2022-06-01 DOI: 10.1016/j.vrih.2022.05.002
Ahmed L. Alyousify , Ramadhan J. Mstafa

Background

Augmented reality (AR), virtual reality (VR), and remote-controlled devices are driving the need for a better 5G infrastructure to support faster data transmission. In this study, mobile AR is emphasized as a viable and widespread solution that can be easily scaled to millions of end-users and educators because it is lightweight and low-cost and can be implemented in a cross-platform manner. Low-efficiency smart devices and high latencies for real-time interactions via regular mobile networks are primary barriers to the use of AR in education. New 5G cellular networks can mitigate some of these issues via network slicing, device-to-device communication, and mobile edge computing.

Methods

In this study, we use a new technology to solve some of these problems. The proposed software monitors image targets on a printed book and renders 3D objects and alphabetic models. In addition, the application considers phonetics. The sound (phonetic) and 3D representation of a letter are played as soon as the image target is detected. 3D models of the Turkish alphabet are created by using Adobe Photoshop with Unity3D and Vuforia SDK.
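A language-agnostic sketch of the trigger logic is shown below; the actual application is built with Unity3D and the Vuforia SDK, and all names and paths in the sketch are hypothetical placeholders.

```python
# Sketch of the detection -> render -> playback flow for one image target (illustrative only).
from dataclasses import dataclass

@dataclass
class LetterAsset:
    model_path: str   # 3D model of the letter
    audio_path: str   # phonetic sound of the letter

# Each printed-book image target maps to one Turkish letter and its assets.
ASSETS = {
    "target_A": LetterAsset("models/A.fbx", "audio/A.wav"),
    "target_B": LetterAsset("models/B.fbx", "audio/B.wav"),
}

def on_target_found(target_id, render_model, play_audio):
    """Called by the AR tracker as soon as an image target becomes visible."""
    asset = ASSETS.get(target_id)
    if asset is None:
        return
    render_model(asset.model_path)   # anchor the 3D letter model on the printed page
    play_audio(asset.audio_path)     # play the letter's phonetic sound immediately

# Example with print-based stand-ins for the renderer and audio player.
on_target_found("target_A", lambda m: print("render", m), lambda a: print("play", a))
```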

Results

The proposed application teaches Turkish alphabets and phonetics by using 3D object models, 3D letters, and 3D phrases, including letters and sounds.

Citations: 6
Privacy-preserving deep learning techniques for wearable sensor-based big data applications
Q1 Computer Science Pub Date : 2022-06-01 DOI: 10.1016/j.vrih.2022.01.007
Rafik Hamza, Dao Minh-Son

Wearable technologies have the potential to become a valuable part of daily human life, enabling people to observe the world in new ways, for example, through augmented reality (AR) applications. Wearable technology uses electronic devices that may be worn as accessories or clothing, or even embedded in the user's body. Although the potential benefits of smart wearables are numerous, their extensive and continual usage creates several privacy concerns and tricky information-security challenges. In this paper, we present a comprehensive survey of recent privacy-preserving big-data analytics applications based on wearable sensors. We highlight the fundamental security and privacy features of wearable-device applications. Then, we examine the use of deep learning algorithms with cryptography and determine their usability for wearable sensors. We also present a case study on privacy-preserving machine learning techniques. Herein, we theoretically and empirically evaluate the performance of the privacy-preserving deep learning framework. We explain the implementation details of a case study of a secure prediction service using a convolutional neural network (CNN) model and the Cheon-Kim-Kim-Song (CHKS) homomorphic encryption algorithm. Finally, we explore the obstacles and gaps in the deployment of practical real-world applications. Following this overview, we identify the most important obstacles that must be overcome and discuss some interesting future research directions.
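As a minimal sketch of how such a secure prediction service can operate, the example below encrypts an input with the CKKS scheme (written as CHKS in the abstract) and evaluates a single linear layer homomorphically. It assumes the open-source TenSEAL library, which is not necessarily the tooling used in the paper's case study, and it omits the convolutional layers of a full CNN.

```python
# Sketch of encrypted inference for one linear layer with CKKS via TenSEAL (illustrative only).
import tenseal as ts

# Client side: create a CKKS context and encrypt the input features.
context = ts.context(ts.SCHEME_TYPE.CKKS,
                     poly_modulus_degree=8192,
                     coeff_mod_bit_sizes=[60, 40, 40, 60])
context.global_scale = 2 ** 40
context.generate_galois_keys()        # rotation keys, needed for the dot product

features = [0.2, 0.7, 0.1, 0.5]       # e.g., preprocessed wearable-sensor readings
enc_features = ts.ckks_vector(context, features)

# Server side: the model weights stay in plaintext, the user data stays encrypted.
weights = [0.4, -0.2, 0.9, 0.3]
bias = [0.1]
enc_score = enc_features.dot(weights) + bias   # homomorphic dot product plus bias

# Client side: only the secret-key holder can decrypt the prediction.
print(enc_score.decrypt())            # approximately [0.28]
```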

Citations: 3