
Virtual Reality Intelligent Hardware: Latest Publications

Survey of neurocognitive disorder detection methods based on speech, visual, and virtual reality technologies
Q1 Computer Science Pub Date : 2024-12-01 DOI: 10.1016/j.vrih.2024.08.001
Tian ZHENG, Xinheng WANG, Xiaolan PENG, Ning SU, Tianyi XU, Xurong XIE, Jin HUANG, Lun XIE, Feng TIAN
The global trend of population aging poses significant challenges to society and healthcare systems, particularly because of neurocognitive disorders (NCDs) such as Parkinson's disease (PD) and Alzheimer's disease (AD). In this context, artificial intelligence techniques have demonstrated promising potential for the objective assessment and detection of NCDs. Multimodal contactless screening technologies, such as speech-language processing, computer vision, and virtual reality, offer efficient and convenient methods for disease diagnosis and progression tracking. This paper systematically reviews the specific methods and applications of these technologies in the detection of NCDs, organized by data-collection paradigms, feature extraction, and modeling approaches. Additionally, the potential applications and future prospects of these technologies for the detection of cognitive and motor disorders are explored. By providing a comprehensive summary and refinement of the extant theories, methodologies, and applications, this study aims to facilitate an in-depth understanding of these technologies for researchers, both within and outside the field. To the best of our knowledge, this is the first survey to cover the use of speech-language processing, computer vision, and virtual reality technologies for the detection of NCDs.
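As a concrete illustration of the speech-based screening pipelines surveyed here, the sketch below extracts standard acoustic features (MFCC statistics, a pitch track, and a crude pause ratio) from one recording. It is a minimal sketch, assuming librosa as the audio library; the specific feature set is illustrative and is not one prescribed by the survey.

```python
# Minimal sketch: acoustic features of the kind used in speech-based
# cognitive screening. Assumes librosa; the feature choice is illustrative.
import numpy as np
import librosa

def speech_features(wav_path: str) -> np.ndarray:
    """Return a fixed-length acoustic feature vector for one recording."""
    y, sr = librosa.load(wav_path, sr=16000)             # mono, 16 kHz
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)   # 13 x n_frames
    f0, voiced, _ = librosa.pyin(y, fmin=60, fmax=400, sr=sr)  # pitch track
    pause_ratio = 1.0 - np.mean(voiced)                  # crude fluency proxy
    # Summarize frame-level features into per-recording statistics.
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1),
                           [np.nanmean(f0), np.nanstd(f0), pause_ratio]])
```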
{"title":"Survey of neurocognitive disorder detection methods based on speech, visual, and virtual reality technologies","authors":"Tian ZHENG ,&nbsp;Xinheng WANG ,&nbsp;Xiaolan PENG ,&nbsp;Ning SU ,&nbsp;Tianyi XU ,&nbsp;Xurong XIE ,&nbsp;Jin HUANG ,&nbsp;Lun XIE ,&nbsp;Feng TIAN","doi":"10.1016/j.vrih.2024.08.001","DOIUrl":"10.1016/j.vrih.2024.08.001","url":null,"abstract":"<div><div>The global trend of population aging poses significant challenges to society and healthcare systems, particularly because of neurocognitive disorders (NCDs) such as Parkinson's disease (PD) and Alzheimer's disease (AD). In this context, artificial intelligence techniques have demonstrated promising potential for the objective assessment and detection of NCDs. Multimodal contactless screening technologies, such as speech-language processing, computer vision, and virtual reality, offer efficient and convenient methods for disease diagnosis and progression tracking. This paper systematically reviews the specific methods and applications of these technologies in the detection of NCDs using data collection paradigms, feature extraction, and modeling approaches. Additionally, the potential applications and future prospects of these technologies for the detection of cognitive and motor disorders are explored. By providing a comprehensive summary and refinement of the extant theories, methodologies, and applications, this study aims to facilitate an in-depth understanding of these technologies for researchers, both within and outside the field. To the best of our knowledge, this is the first survey to cover the use of speech-language processing, computer vision, and virtual reality technologies for the detection of NSDs.</div></div>","PeriodicalId":33538,"journal":{"name":"Virtual Reality Intelligent Hardware","volume":"6 6","pages":"Pages 421-472"},"PeriodicalIF":0.0,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143315199","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Previs-Real: Interactive virtual previsualization system for news shooting rehearsal and evaluation
Q1 Computer Science Pub Date : 2024-12-01 DOI: 10.1016/j.vrih.2024.12.001
Che Qu, Shaocong Wang, Chao Zhou, Tongchen Zhao, Rui Guo, Cheng Wa Wong, Chi Deng, Bin Ji, Yuhui Wen, Yuanchun Shi, Yong-Jin Liu

Background

In the demanding field of live news broadcasting, the intricate studio production procedures and tight schedules pose significant challenges for physical rehearsals by cameramen. This paper explores the design and implementation of a lightweight virtual news previsualization system, leveraging virtual production technology and interaction design methods to address two shortcomings of previous virtual approaches: the lack of fidelity in presentation and manipulation, and the absence of quantitative feedback on rehearsal effects.

Methods

Our system, Previs-Real, is informed by user research with professional cameramen and studio technicians, and adheres to three principles: high fidelity, accurate replication of actual hardware operations, and real-time feedback on rehearsal results. The system's software and hardware are built on Unreal Engine and its accompanying toolsets, incorporating cutting-edge modeling and camera-calibration methods.

Results

We validated Previs-Real through a user study, demonstrating superior performance in previsualization shooting tasks using the virtual system compared to traditional camera setups. The findings, supported by both objective performance metrics and subjective responses, underline Previs-Real's effectiveness and potential in transforming news broadcasting rehearsals.

Conclusions

Previs-Real eliminates the complex equipment interconnections and team coordination required in a physical studio by implementing methodologies that comply with the above principles, resulting in a lightweight yet practical virtual news previsualization system. It offers a novel solution to the challenges of news studio previsualization by focusing on key operational features rather than full environment replication. This design approach applies equally well to lightweight systems in other fields.
{"title":"Previs-Real:Interactive virtual previsualization system for news shooting rehearsal and evaluation","authors":"Che Qu ,&nbsp;Shaocong Wang ,&nbsp;Chao Zhou ,&nbsp;Tongchen Zhao ,&nbsp;Rui Guo ,&nbsp;Cheng Wa Wong ,&nbsp;Chi Deng ,&nbsp;Bin Ji ,&nbsp;Yuhui Wen ,&nbsp;Yuanchun Shi ,&nbsp;Yong-Jin Liu","doi":"10.1016/j.vrih.2024.12.001","DOIUrl":"10.1016/j.vrih.2024.12.001","url":null,"abstract":"<div><h3>Background</h3><div>In the demanding field of live news broadcasting, the intricate studio production procedures and tight schedules pose significant challenges for physical rehearsals by cameramen. This paper explores the design and implementation of a lightweight virtual news previsualization system, leveraging virtual production technology and interaction design methods to address the lack of fidelity in presentations and manipulations, and the quantitative feedback of rehearsal effects in previous virtual approaches.</div></div><div><h3>Methods</h3><div>Our system, Previs-Real, is informed by user investigation with professional cameramen and studio technicians, and adheres to principles of high fidelity, accurate replication of actual hardware operations, and real-time feedback on rehearsal results. The system's software and hardware development are implemented based on Unreal Engine and accompanying toolsets, incorporating cutting-edge modeling and camera calibration methods.</div></div><div><h3>Results</h3><div>We validated Previs-Real through a user study, demonstrating superior performance in previsualization shooting tasks using the virtual system compared to traditional camera setups. The findings, supported by both objective performance metrics and subjective responses, underline Previs-Real's effectiveness and potential in transforming news broadcasting rehearsals.</div></div><div><h3>Conclusions</h3><div>Previs-Real eliminates the requirement for complex equipment interconnections and team coordination inherent in a physical studio by implementing methodologies complying the above principles, objectively resulting in a lightweight design of applicable version of virtual news previsualization system. It offers a novel solution to the challenges in news studio previsualization by focusing on key operational features rather than full environment replication. This design approach is equally effective in the process of designing lightweight systems in other fields.</div></div>","PeriodicalId":33538,"journal":{"name":"Virtual Reality Intelligent Hardware","volume":"6 6","pages":"Pages 527-549"},"PeriodicalIF":0.0,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143315910","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
MatStick: Changing the material sensation of objects upon impact
Q1 Computer Science Pub Date : 2024-12-01 DOI: 10.1016/j.vrih.2024.11.001
Songxian Liu, Jian He, Shengsheng Jiang, Ziyan Zhang, Mengfei Lv
An increasing number of studies have focused on providing rich tactile feedback in virtual reality interactive scenarios. In this study, we addressed a tapping scenario in virtual reality by designing MatStick, a solution capable of offering diverse tapping sensations. MatStick utilizes a soft physical base to provide force feedback and modulates the instantaneous vibration of the base using a voice coil motor, thereby altering the perception of the base material. We conducted two psychophysical experiments and a subjective evaluation to assess the capabilities of MatStick. The results demonstrate that MatStick can deliver rich tapping sensations. Although users may find it challenging to directly correlate the tapping sensation with the actual physical material based solely on tactile feedback, in immersive scenarios combined with visual and auditory cues, MatStick significantly enhances the user's interaction experience.
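For intuition, tapping-induced material sensations are often modeled in haptics as short decaying sinusoids whose frequency and decay rate act as the "material" parameters. The sketch below generates such transients to drive a voice coil actuator; the parameter values are hypothetical and are not MatStick's measured control law.

```python
# Toy sketch of material-like impact transients: a decaying sinusoid whose
# frequency and decay rate stand in for material properties (a standard
# model in haptic rendering; MatStick's actual control law may differ).
import numpy as np

def impact_transient(freq_hz, decay_per_s, amp, sr=8000, dur_s=0.1):
    """Waveform to drive a voice coil actuator at the moment of impact."""
    t = np.arange(int(sr * dur_s)) / sr
    return amp * np.exp(-decay_per_s * t) * np.sin(2 * np.pi * freq_hz * t)

# Hypothetical parameter sets: stiffer materials ring at higher
# frequencies; softer ones damp out faster.
MATERIALS = {
    "wood":   impact_transient(freq_hz=100, decay_per_s=40, amp=0.6),
    "metal":  impact_transient(freq_hz=300, decay_per_s=15, amp=0.8),
    "rubber": impact_transient(freq_hz=60,  decay_per_s=90, amp=0.4),
}
```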
{"title":"MatStick: Changing the material sensation of objects upon impact","authors":"Songxian Liu,&nbsp;Jian He,&nbsp;Shengsheng Jiang,&nbsp;Ziyan Zhang,&nbsp;Mengfei Lv","doi":"10.1016/j.vrih.2024.11.001","DOIUrl":"10.1016/j.vrih.2024.11.001","url":null,"abstract":"<div><div>An increasing number of studies have focused on providing rich tactile feedback in virtual reality interactive scenarios. In this study, we addressed a tapping scenario in virtual reality by designing MatStick, a solution capable of offering diverse tapping sensations. MatStick utilizes a soft physical base to provide force feedback and modulates the instantaneous vibration of the base using a voice coil motor, thereby altering the perception of the base material. We conducted two psychophysical experiments and a subjective evaluation to assess the capabilities of MatStick. The results demonstrate that MatStick can deliver rich tapping sensations. Although users may find it challenging to directly correlate the tapping sensation with the actual physical material based solely on tactile feedback, in immersive scenarios combined with visual and auditory cues, MatStick significantly enhances the user's interaction experience.</div></div>","PeriodicalId":33538,"journal":{"name":"Virtual Reality Intelligent Hardware","volume":"6 6","pages":"Pages 486-501"},"PeriodicalIF":0.0,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143315200","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
InputJump: Augmented reality-facilitated cross-device input fusion based on spatial and semantic information
Q1 Computer Science Pub Date : 2024-12-01 DOI: 10.1016/j.vrih.2024.10.001
Xin Zeng, Xiaoyu Wang, Tengxiang Zhang, Yukang Yan, Yiqiang Chen
The proliferation of computing devices requires seamless cross-device interactions. Augmented reality (AR) headsets can facilitate interactions with existing computers owing to their user-centered views and natural inputs. In this study, we propose InputJump, a user-centered cross-device input fusion method that maps multi-modal cross-device inputs to interactive elements on graphical interfaces. InputJump calculates the spatial coordinates of the input target positions and the interactive elements within the coordinate system of the AR headset. It also extracts semantic descriptions of inputs and elements using large language models (LLMs). These two types of information from different inputs (e.g., gaze, gesture, mouse, and keyboard) are fused to map onto an interactive element. The proposed method is explained in detail and implemented on both an AR headset and a desktop PC. We then conducted a user study and extensive simulations to validate the method. The results showed that InputJump can accurately associate a fused input with the target interactive element, enabling a more natural and flexible interaction experience.
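A minimal sketch of this two-signal fusion is given below: each interactive element is scored by a spatial proximity term and a semantic similarity term, and the best-scoring element is selected. The scoring form, the weight `w_spatial`, and the `desc_vec` embeddings are assumptions for illustration; the paper's actual fusion model may differ.

```python
# Sketch: fuse spatial and semantic evidence to pick a target element.
# `desc_vec` stands in for any text-embedding of an LLM-derived description.
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def pick_element(input_pos, input_desc_vec, elements, w_spatial=0.5):
    """elements: dicts with 3D 'pos' (headset frame) and 'desc_vec'."""
    best, best_score = None, -np.inf
    for el in elements:
        dist = np.linalg.norm(np.asarray(input_pos) - np.asarray(el["pos"]))
        spatial = np.exp(-dist)                      # closer -> higher score
        semantic = cosine(input_desc_vec, el["desc_vec"])
        score = w_spatial * spatial + (1 - w_spatial) * semantic
        if score > best_score:
            best, best_score = el, score
    return best
```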
{"title":"InputJump: Augmented reality-facilitated cross-device input fusion based on spatial and semantic information","authors":"Xin Zeng ,&nbsp;Xiaoyu Wang ,&nbsp;Tengxiang Zhang ,&nbsp;Yukang Yan ,&nbsp;Yiqiang Chen","doi":"10.1016/j.vrih.2024.10.001","DOIUrl":"10.1016/j.vrih.2024.10.001","url":null,"abstract":"<div><div>The proliferation of computing devices requires seamless cross-device interactions. Augmented reality (AR) headsets can facilitate interactions with existing computers owing to their user-centered views and natural inputs. In this study, we propose InputJump, a user-centered cross-device input fusion method that maps multi-modal cross-device inputs to interactive elements on graphical interfaces. The input jump calculates the spatial coordinates of the input target positions and the interactive elements within the coordinate system of the AR headset. It also extracts semantic descriptions of inputs and elements using large language models (LLMs). Two types of information from different inputs (e.g., gaze, gesture, mouse, and keyboard) were fused to map onto an interactive element. The proposed method is explained in detail and implemented on both an AR headset and a desktop PC. We then conducted a user study and extensive simulations to validate our proposed method. The results showed that InputJump can accurately associate a fused input with the target interactive element, enabling a more natural and flexible interaction experience.</div></div>","PeriodicalId":33538,"journal":{"name":"Virtual Reality Intelligent Hardware","volume":"6 6","pages":"Pages 502-526"},"PeriodicalIF":0.0,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143315912","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Automatic piano performance interaction system based on greedy algorithm for dexterous manipulator
Q1 Computer Science Pub Date : 2024-12-01 DOI: 10.1016/j.vrih.2024.09.001
Yufei Wang, Junfeng Yao, Yalan Zhou, Zefeng Wang
With continuous advancements in artificial intelligence (AI), automatic piano-playing robots have become subjects of cross-disciplinary interest. However, in most studies, these robots served merely as objects of observation with limited user engagement or interaction. To address this issue, we propose a user-friendly and innovative interaction system based on the principles of greedy algorithms. This system features three modules: score management, performance control, and keyboard interactions. Upon importing a custom score or playing a note via an external device, the system performs on a virtual piano in line with user inputs. This system has been successfully integrated into our dexterous manipulator-based piano-playing device, which significantly enhances user interactions.
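The abstract does not detail the greedy criterion, so the sketch below shows one plausible instance for intuition: each incoming note is assigned to the nearest currently free finger. The timing and travel costs are hypothetical; the paper's actual algorithm and cost terms are not specified here.

```python
# Toy sketch of a greedy note-to-finger assignment: each note goes to the
# free finger whose current key position is closest. Illustrative only.
def assign_notes(notes, finger_pos):
    """notes: list of (onset_time, key_index); finger_pos: key index per finger."""
    busy_until = [0.0] * len(finger_pos)
    plan = []
    for onset, key in sorted(notes):
        free = [f for f in range(len(finger_pos)) if busy_until[f] <= onset]
        if not free:
            continue  # note dropped; a real system would re-plan
        f = min(free, key=lambda i: abs(finger_pos[i] - key))  # greedy choice
        finger_pos[f] = key
        busy_until[f] = onset + 0.15  # assumed travel/press time (seconds)
        plan.append((onset, key, f))
    return plan

print(assign_notes([(0.0, 60), (0.1, 64), (0.5, 62)], [60, 65]))
```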
{"title":"Automatic piano performance interaction system based on greedy algorithm for dexterous manipulator","authors":"Yufei Wang ,&nbsp;Junfeng Yao ,&nbsp;Yalan Zhou ,&nbsp;Zefeng Wang","doi":"10.1016/j.vrih.2024.09.001","DOIUrl":"10.1016/j.vrih.2024.09.001","url":null,"abstract":"<div><div>With continuous advancements in artificial intelligence (AI), automatic piano-playing robots have become subjects of cross-disciplinary interest. However, in most studies, these robots served merely as objects of observation with limited user engagement or interaction. To address this issue, we propose a user-friendly and innovative interaction system based on the principles of greedy algorithms. This system features three modules: score management, performance control, and keyboard interactions. Upon importing a custom score or playing a note via an external device, the system performs on a virtual piano in line with user inputs. This system has been successfully integrated into our dexterous manipulator-based piano-playing device, which significantly enhances user interactions.</div></div>","PeriodicalId":33538,"journal":{"name":"Virtual Reality Intelligent Hardware","volume":"6 6","pages":"Pages 473-485"},"PeriodicalIF":0.0,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143315911","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Pre-training transformer with dual-branch context content module for table detection in document images
Q1 Computer Science Pub Date : 2024-10-01 DOI: 10.1016/j.vrih.2024.06.003
Yongzhi Li, Pengle Zhang, Meng Sun, Jin Huang, Ruhan He

Background

Document images such as statistical reports and scientific journals are widely used in information technology. Accurate detection of table areas in document images is an essential prerequisite for tasks such as information extraction. However, because of the diversity in the shapes and sizes of tables, existing table detection methods adapted from general object detection algorithms have not yet achieved satisfactory results. Incorrect detection results might lead to the loss of critical information.

Methods

Therefore, we propose a novel end-to-end trainable deep network combined with a self-supervised pretraining transformer for feature extraction to minimize incorrect detections. To better deal with table areas of different shapes and sizes, we added a dual-branch context content attention module (DCCAM) to high-dimensional features to extract context content information, thereby enhancing the network's ability to learn shape features. For feature fusion at different scales, we replaced the original 3×3 convolution with a multilayer residual module, which contains enhanced gradient flow information to improve the feature representation and extraction capability.
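As a schematic of what a dual-branch context block can look like, the PyTorch sketch below gates local features with globally pooled context and adds a residual connection. It shows the general shape only; the internals of the paper's DCCAM are not specified in this abstract.

```python
# Schematic dual-branch context block: one branch pools global context,
# the other keeps local detail, and their outputs gate the input features.
# Not the paper's DCCAM; a generic sketch of the pattern.
import torch
import torch.nn as nn

class DualBranchContext(nn.Module):
    def __init__(self, ch: int):
        super().__init__()
        self.global_branch = nn.Sequential(            # global context gate
            nn.AdaptiveAvgPool2d(1), nn.Conv2d(ch, ch, 1), nn.Sigmoid())
        self.local_branch = nn.Sequential(             # local context
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch), nn.ReLU())

    def forward(self, x):
        return self.local_branch(x) * self.global_branch(x) + x  # residual

x = torch.randn(2, 256, 32, 32)
print(DualBranchContext(256)(x).shape)  # torch.Size([2, 256, 32, 32])
```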

Results

We evaluated our method on public document datasets and compared it with previous methods; it achieved state-of-the-art results on evaluation metrics such as recall and F1-score. Code: https://github.com/YongZ-Lee/TD-DCCAM
{"title":"Pre-training transformer with dual-branch context content module for table detection in document images","authors":"Yongzhi Li ,&nbsp;Pengle Zhang ,&nbsp;Meng Sun ,&nbsp;Jin Huang ,&nbsp;Ruhan He","doi":"10.1016/j.vrih.2024.06.003","DOIUrl":"10.1016/j.vrih.2024.06.003","url":null,"abstract":"<div><h3>Background</h3><div>Document images such as statistical reports and scientific journals are widely used in information technology. Accurate detection of table areas in document images is an essential prerequisite for tasks such as information extraction. However, because of the diversity in the shapes and sizes of tables, existing table detection methods adapted from general object detection algorithms, have not yet achieved satisfactory results. Incorrect detection results might lead to the loss of critical information.</div></div><div><h3>Methods</h3><div>Therefore, we propose a novel end-to-end trainable deep network combined with a self-supervised pretraining transformer for feature extraction to minimize incorrect detections. To better deal with table areas of different shapes and sizes, we added a dual-branch context content attention module (DCCAM) to high-dimensional features to extract context content information, thereby enhancing the network's ability to learn shape features. For feature fusion at different scales, we replaced the original 3×3 convolution with a multilayer residual module, which contains enhanced gradient flow information to improve the feature representation and extraction capability.</div></div><div><h3>Results</h3><div>We evaluated our method on public document datasets and compared it with previous methods, which achieved state-of-the-art results in terms of evaluation metrics such as recall and F1-score. <span><span>https://github.com/YongZ-Lee/TD-DCCAM</span><svg><path></path></svg></span></div></div>","PeriodicalId":33538,"journal":{"name":"Virtual Reality Intelligent Hardware","volume":"6 5","pages":"Pages 408-420"},"PeriodicalIF":0.0,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142587144","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Co-salient object detection with iterative purification and predictive optimization
Q1 Computer Science Pub Date : 2024-10-01 DOI: 10.1016/j.vrih.2024.06.002
Yang Wen, Yuhuan Wang, Hao Wang, Wuzhen Shi, Wenming Cao

Background

Co-salient object detection (Co-SOD) aims to identify and segment commonly salient objects in a set of related images. However, most current Co-SOD methods encounter issues with the inclusion of irrelevant information in the co-representation. These issues hamper their ability to locate co-salient objects and significantly restrict the accuracy of detection.

Methods

To address this issue, this study introduces a novel Co-SOD method with iterative purification and predictive optimization (IPPO) comprising a common salient purification module (CSPM), predictive optimizing module (POM), and diminishing mixed enhancement block (DMEB).

Results

These components are designed to explore noise-free joint representations, assist the model in enhancing the quality of the final prediction results, and significantly improve the performance of the Co-SOD algorithm. Furthermore, through a comprehensive evaluation of IPPO and state-of-the-art algorithms focusing on the roles of CSPM, POM, and DMEB, our experiments confirmed that these components are pivotal in enhancing the performance of the model, substantiating the significant advancements of our method over existing benchmarks. Experiments on several challenging benchmark co-saliency datasets demonstrate that the proposed IPPO achieves state-of-the-art performance.
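For intuition about what "purifying" a co-representation can mean, the toy sketch below forms a consensus feature as a weighted mean and iteratively down-weights images that disagree with it. This is a generic scheme for illustration only, not the paper's CSPM.

```python
# Toy consensus purification: suppress image features that deviate from
# the group consensus, then recompute the consensus. Illustrative only.
import numpy as np

def purify(feats, iters=3):
    """feats: (n_images, dim) array of image-level features."""
    consensus = feats.mean(axis=0)
    for _ in range(iters):
        sims = feats @ consensus / (
            np.linalg.norm(feats, axis=1) * np.linalg.norm(consensus) + 1e-8)
        w = np.maximum(sims, 0)          # down-weight dissenting images
        consensus = (w[:, None] * feats).sum(axis=0) / (w.sum() + 1e-8)
    return consensus
```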
{"title":"Co-salient object detection with iterative purification and predictive optimization","authors":"Yang Wen,&nbsp;Yuhuan Wang,&nbsp;Hao Wang,&nbsp;Wuzhen Shi,&nbsp;Wenming Cao","doi":"10.1016/j.vrih.2024.06.002","DOIUrl":"10.1016/j.vrih.2024.06.002","url":null,"abstract":"<div><h3>Background</h3><div>Co-salient object detection (Co-SOD) aims to identify and segment commonly salient objects in a set of related images. However, most current Co-SOD methods encounter issues with the inclusion of irrelevant information in the co-representation. These issues hamper their ability to locate co-salient objects and significantly restrict the accuracy of detection.</div></div><div><h3>Methods</h3><div>To address this issue, this study introduces a novel Co-SOD method with iterative purification and predictive optimization (IPPO) comprising a common salient purification module (CSPM), predictive optimizing module (POM), and diminishing mixed enhancement block (DMEB).</div></div><div><h3>Results</h3><div>These components are designed to explore noise-free joint representations, assist the model in enhancing the quality of the final prediction results, and significantly improve the performance of the Co-SOD algorithm. Furthermore, through a comprehensive evaluation of IPPO and state-of-the-art algorithms focusing on the roles of CSPM, POM, and DMEB, our experiments confirmed that these components are pivotal in enhancing the performance of the model, substantiating the significant advancements of our method over existing benchmarks. Experiments on several challenging benchmark co-saliency datasets demonstrate that the proposed IPPO achieves state-of-the-art performance.</div></div>","PeriodicalId":33538,"journal":{"name":"Virtual Reality Intelligent Hardware","volume":"6 5","pages":"Pages 396-407"},"PeriodicalIF":0.0,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142586835","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Music-stylized hierarchical dance synthesis with user control
Q1 Computer Science Pub Date : 2024-10-01 DOI: 10.1016/j.vrih.2024.06.004
Yanbo Cheng, Yichen Jiang, Yingying Wang

Background

Synthesizing dance motions to match musical inputs is a significant challenge in animation research. Compared to functional human motions, such as locomotion, dance motions are creative and artistic, often influenced by music, and can be independent body language expressions. Dance choreography requires motion content to follow a general dance genre, whereas dance performances under musical influence are infused with diverse impromptu motion styles. Considering the high expressiveness and variations in space and time, providing accessible and effective user control for tuning dance motion styles remains an open problem.

Methods

In this study, we present a hierarchical framework that decouples the dance synthesis task into independent modules. We use a high-level choreography module built as a Transformer-based sequence model to predict the long-term structure of a dance genre and a low-level realization module that implements dance stylization and synchronization to match the musical input or user preferences. This novel framework allows the individual modules to be trained separately. Because of the decoupling, dance composition can fully utilize existing high-quality dance datasets that do not have musical accompaniments, and the dance implementation can conveniently incorporate user controls and edit motions through a decoder network. Each module is replaceable at runtime, which adds flexibility to the synthesis of dance sequences.
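The structural sketch below illustrates this decoupling: a high-level module emits a genre-conditioned token sequence, and a replaceable low-level module renders the tokens into stylized, beat-aligned poses. The interfaces are assumptions for illustration; in the paper both modules are learned networks.

```python
# Structural sketch of the decoupled pipeline. Interfaces are assumed for
# illustration; either module can be swapped at runtime.
from typing import Protocol, List

class Choreographer(Protocol):
    def compose(self, genre: str, n_segments: int) -> List[int]:
        """Long-term dance structure as motion-segment tokens."""

class Realizer(Protocol):
    def realize(self, tokens: List[int], music_beats: List[float],
                style: str) -> List[list]:
        """Per-frame poses, stylized and aligned to the beat grid."""

def synthesize(choreo: Choreographer, real: Realizer,
               genre: str, beats: List[float], style: str):
    tokens = choreo.compose(genre, n_segments=len(beats))
    return real.realize(tokens, beats, style)  # modules replaceable at runtime
```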

Results

Synthesized results demonstrate that our framework generates high-quality diverse dance motions that are well adapted to varying musical conditions and user controls.
{"title":"Music-stylized hierarchical dance synthesis with user control","authors":"Yanbo Cheng,&nbsp;Yichen Jiang,&nbsp;Yingying Wang","doi":"10.1016/j.vrih.2024.06.004","DOIUrl":"10.1016/j.vrih.2024.06.004","url":null,"abstract":"<div><h3>Background</h3><div>Synthesizing dance motions to match musical inputs is a significant challenge in animation research. Compared to functional human motions, such as locomotion, dance motions are creative and artistic, often influenced by music, and can be independent body language expressions. Dance choreography requires motion content to follow a general dance genre, whereas dance performances under musical influence are infused with diverse impromptu motion styles. Considering the high expressiveness and variations in space and time, providing accessible and effective user control for tuning dance motion styles remains an open problem.</div></div><div><h3>Methods</h3><div>In this study, we present a hierarchical framework that decouples the dance synthesis task into independent modules. We use a high-level choreography module built as a Transformer-based sequence model to predict the long-term structure of a dance genre and a low-level realization module that implements dance stylization and synchronization to match the musical input or user preferences. This novel framework allows the individual modules to be trained separately. Because of the decoupling, dance composition can fully utilize existing high-quality dance datasets that do not have musical accompaniments, and the dance implementation can conveniently incorporate user controls and edit motions through a decoder network. Each module is replaceable at runtime, which adds flexibility to the synthesis of dance sequences.</div></div><div><h3>Results</h3><div>Synthesized results demonstrate that our framework generates high-quality diverse dance motions that are well adapted to varying musical conditions and user controls.</div></div>","PeriodicalId":33538,"journal":{"name":"Virtual Reality Intelligent Hardware","volume":"6 5","pages":"Pages 339-357"},"PeriodicalIF":0.0,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142586959","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Mesh representation matters: investigating the influence of different mesh features on perceptual and spatial fidelity of deep 3D morphable models
Q1 Computer Science Pub Date : 2024-10-01 DOI: 10.1016/j.vrih.2024.08.006
Robert KOSK , Richard SOUTHERN , Lihua YOU , Shaojun BIAN , Willem KOKKE , Greg MAGUIRE

Background

Deep 3D morphable models (deep 3DMMs) play an essential role in computer vision. They are used in facial synthesis, compression, reconstruction and animation, avatar creation, virtual try-on, facial recognition systems and medical imaging. These applications require high spatial and perceptual quality of synthesised meshes. Despite their significance, deep 3DMMs have not been compared across different mesh representations, nor evaluated jointly with point-wise distance and perceptual metrics.

Methods

We compare the influence of different mesh representation features on the spatial and perceptual fidelity of meshes reconstructed by various deep 3DMMs. This paper supports the hypothesis that building deep 3DMMs from meshes with global representations leads to lower spatial reconstruction error, measured with L1 and L2 norm metrics, but underperforms on perceptual metrics. In contrast, using differential mesh representations, which describe differential surface properties, yields lower perceptual FMPD and DAME scores but higher spatial fidelity error. The influence of mesh feature normalisation and standardisation is also compared and analysed from perceptual and spatial fidelity perspectives.
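For reference, the point-wise errors used in this comparison are the mean per-vertex L1 and L2 distances between corresponding vertices of topology-matched meshes, as in the sketch below; the perceptual metrics (FMPD, DAME) involve surface-differential quantities and are not reproduced here.

```python
# Mean per-vertex L1 and L2 reconstruction errors for meshes that share
# topology (identical vertex order).
import numpy as np

def vertex_errors(v_ref: np.ndarray, v_rec: np.ndarray):
    """v_ref, v_rec: (n_vertices, 3) arrays of vertex positions."""
    diff = v_rec - v_ref
    l1 = np.abs(diff).sum(axis=1).mean()        # mean per-vertex L1 distance
    l2 = np.linalg.norm(diff, axis=1).mean()    # mean per-vertex L2 distance
    return l1, l2
```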

Results

The results presented in this paper provide guidance for selecting mesh representations when building deep 3DMMs according to spatial and perceptual quality objectives, and propose combinations of mesh representations and deep 3DMMs that improve either the perceptual or the spatial fidelity of existing methods.
{"title":"Mesh representation matters: investigating the influence of different mesh features on perceptual and spatial fidelity of deep 3D morphable models","authors":"Robert KOSK ,&nbsp;Richard SOUTHERN ,&nbsp;Lihua YOU ,&nbsp;Shaojun BIAN ,&nbsp;Willem KOKKE ,&nbsp;Greg MAGUIRE","doi":"10.1016/j.vrih.2024.08.006","DOIUrl":"10.1016/j.vrih.2024.08.006","url":null,"abstract":"<div><h3>Background</h3><div>Deep 3D morphable models (deep 3DMMs) play an essential role in computer vision. They are used in facial synthesis, compression, reconstruction and animation, avatar creation, virtual try-on, facial recognition systems and medical imaging. These applications require high spatial and perceptual quality of synthesised meshes. Despite their significance, these models have not been compared with different mesh representations and evaluated jointly with point-wise distance and perceptual metrics.</div></div><div><h3>Methods</h3><div>We compare the influence of different mesh representation features to various deep 3DMMs on spatial and perceptual fidelity of the reconstructed meshes. This paper proves the hypothesis that building deep 3DMMs from meshes represented with global representations leads to lower spatial reconstruction error measured with <span><math><mrow><msub><mi>L</mi><mn>1</mn></msub></mrow></math></span> and <span><math><mrow><msub><mi>L</mi><mn>2</mn></msub></mrow></math></span> norm metrics and underperforms on perceptual metrics. In contrast, using differential mesh representations which describe differential surface properties yields lower perceptual FMPD and DAME and higher spatial fidelity error. The influence of mesh feature normalisation and standardisation is also compared and analysed from perceptual and spatial fidelity perspectives.</div></div><div><h3>Results</h3><div>The results presented in this paper provide guidance in selecting mesh representations to build deep 3DMMs accordingly to spatial and perceptual quality objectives and propose combinations of mesh representations and deep 3DMMs which improve either perceptual or spatial fidelity of existing methods.</div></div>","PeriodicalId":33538,"journal":{"name":"Virtual Reality Intelligent Hardware","volume":"6 5","pages":"Pages 383-395"},"PeriodicalIF":0.0,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142586958","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
CURDIS: A template for incremental curve discretization algorithms and its application to conics
Q1 Computer Science Pub Date : 2024-10-01 DOI: 10.1016/j.vrih.2024.06.005
Philippe Latour, Marc Van Droogenbroeck
We introduce CURDIS, a template for algorithms to discretize arcs of regular curves by incrementally producing a list of support pixels covering the arc. In this template, algorithms proceed by finding the tangent quadrant at each point of the arc and determining which side the curve exits the pixel according to a tailored criterion. These two elements can be adapted for any type of curve, leading to algorithms dedicated to the shape of specific curves. While the calculation of the tangent quadrant for various curves, such as lines, conics, or cubics, is simple, it is more complex to analyze how pixels are traversed by the curve. In the case of conic arcs, we found a criterion for determining the pixel exit side. This leads us to present a new algorithm, called CURDIS-C, specific to the discretization of conics, for which we provide all the details. Surprisingly, the criterion for conics requires between one and three sign tests and four additions per pixel, making the algorithm efficient for resource-constrained systems and feasible for fixed-point or integer arithmetic implementations. Our algorithm also perfectly handles the pathological cases in which the conic intersects a pixel twice or changes quadrants multiple times within this pixel, achieving this generality at the cost of potentially computing up to two square roots per arc. We illustrate the use of CURDIS for the discretization of different curves, such as ellipses, hyperbolas, and parabolas, even when they degenerate into lines or corners.
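A minimal example in the spirit of CURDIS is the classic midpoint tracing of a circle (itself a conic): at each pixel, the sign of the implicit function at a midpoint between the two candidate next pixels decides through which side the curve exits. The sketch below traces one octant; it illustrates the sign-test idea only and is not the paper's CURDIS-C criterion.

```python
# Incremental discretization by sign tests: trace one octant of the circle
# x^2 + y^2 = r^2, from (0, r) to the diagonal x = y. Classic midpoint
# scheme, shown for intuition; not the CURDIS-C criterion.
def circle_octant(r: int):
    pixels, x, y = [], 0, r
    while x <= y:
        pixels.append((x, y))
        # Sign of the implicit function at the midpoint between candidates
        # decides E (x+1, y) versus SE (x+1, y-1).
        if (x + 1) ** 2 + (y - 0.5) ** 2 - r * r > 0:
            y -= 1  # curve exits through the bottom edge of the pixel
        x += 1
    return pixels

print(circle_octant(8))  # [(0, 8), (1, 8), (2, 8), (3, 7), (4, 7), (5, 6)]
```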
{"title":"CURDIS: A template for incremental curve discretization algorithms and its application to conics","authors":"Philippe Latour,&nbsp;Marc Van Droogenbroeck","doi":"10.1016/j.vrih.2024.06.005","DOIUrl":"10.1016/j.vrih.2024.06.005","url":null,"abstract":"<div><div>We introduce CURDIS, a template for algorithms to discretize arcs of regular curves by incrementally producing a list of support pixels covering the arc. In this template, algorithms proceed by finding the tangent quadrant at each point of the arc and determining which side the curve exits the pixel according to a tailored criterion. These two elements can be adapted for any type of curve, leading to algorithms dedicated to the shape of specific curves. While the calculation of the tangent quadrant for various curves, such as lines, conics, or cubics, is simple, it is more complex to analyze how pixels are traversed by the curve. In the case of conic arcs, we found a criterion for determining the pixel exit side. This leads us to present a new algorithm, called CURDIS-C, specific to the discretization of conics, for which we provide all the details. Surprisingly, the criterion for conics requires between one and three sign tests and four additions per pixel, making the algorithm efficient for resource-constrained systems and feasible for fixed-point or integer arithmetic implementations. Our algorithm also perfectly handles the pathological cases in which the conic intersects a pixel twice or changes quadrants multiple times within this pixel, achieving this generality at the cost of potentially computing up to two square roots per arc. We illustrate the use of CURDIS for the discretization of different curves, such as ellipses, hyperbolas, and parabolas, even when they degenerate into lines or corners.</div></div>","PeriodicalId":33538,"journal":{"name":"Virtual Reality Intelligent Hardware","volume":"6 5","pages":"Pages 358-382"},"PeriodicalIF":0.0,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142586957","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0