
Virtual Reality Intelligent Hardware: Latest Publications

Uncanny valley for interactive social agents: an experimental study
Q1 Computer Science | Pub Date: 2022-10-01 | DOI: 10.1016/j.vrih.2022.08.003
Nidhi Mishra, Manoj Ramanathan, Gauri Tulsulkar, Nadia Magnenat Thalmann

Background

The uncanny valley hypothesis states that users may experience discomfort when interacting with almost human-like artificial characters. Advancements in artificial intelligence, robotics, and computer graphics have led to the development of life-like virtual humans and humanoid robots. Revisiting this hypothesis is necessary to check whether they positively or negatively affect the current population, who are highly accustomed to the latest technologies.

Methods

In this study, we present a unique evaluation of the uncanny valley hypothesis by allowing participants to interact live with four humanoid robots that have varying levels of human-likeness. Each participant completed a survey questionnaire to evaluate the affinity of each robot. Additionally, we used deep learning methods to quantify the participants’ emotional states using multimodal cues, including visual, audio, and text cues, by recording the participant–robot interactions.
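
The article does not spell out how the visual, audio, and text cues are combined, so the following is only an illustrative sketch of weighted late fusion of per-modality emotion scores in Python; the label set, the weights, and the existence of three separate modality classifiers are assumptions rather than the authors' pipeline.

```python
# Hypothetical late-fusion sketch; the emotion labels, weights, and upstream
# per-modality classifiers are assumptions, not the authors' actual pipeline.
import numpy as np

EMOTIONS = ["happy", "neutral", "surprised", "uncomfortable"]  # assumed label set

def fuse_emotions(visual_probs, audio_probs, text_probs, weights=(0.5, 0.3, 0.2)):
    """Weighted late fusion of per-modality emotion probability vectors."""
    stacked = np.stack([visual_probs, audio_probs, text_probs])  # (3, n_labels)
    fused = np.average(stacked, axis=0, weights=weights)
    fused /= fused.sum()                                         # renormalize
    return dict(zip(EMOTIONS, fused)), EMOTIONS[int(np.argmax(fused))]

# Example: probabilities that three modality-specific models might emit
visual = np.array([0.10, 0.20, 0.30, 0.40])
audio = np.array([0.05, 0.25, 0.30, 0.40])
text = np.array([0.20, 0.30, 0.25, 0.25])
scores, label = fuse_emotions(visual, audio, text)
print(label, scores)
```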

Results

Multi-modal analysis and surveys provided interesting results and insights into the uncanny valley hypothesis.

{"title":"Uncanny valley for interactive social agents: an experimental study","authors":"Nidhi Mishra ,&nbsp;Manoj Ramanathan ,&nbsp;Gauri Tulsulkar ,&nbsp;Nadia Magneat Thalmann","doi":"10.1016/j.vrih.2022.08.003","DOIUrl":"10.1016/j.vrih.2022.08.003","url":null,"abstract":"<div><h3>Background</h3><p>The uncanny valley hypothesis states that users may experience discomfort when interacting with almost human-like artificial characters. Advancements in artificial intelligence, robotics, and computer graphics have led to the development of life-like virtual humans and humanoid robots. Revisiting this hypothesis is necessary to check whether they positively or negatively affect the current population, who are highly accustomed to the latest technologies.</p></div><div><h3>Methods</h3><p>In this study, we present a unique evaluation of the uncanny valley hypothesis by allowing participants to interact live with four humanoid robots that have varying levels of human-likeness. Each participant completed a survey questionnaire to evaluate the affinity of each robot. Additionally, we used deep learning methods to quantify the participants’ emotional states using multimodal cues, including visual, audio, and text cues, by recording the participant–robot interactions.</p></div><div><h3>Results</h3><p>Multi-modal analysis and surveys provided interesting results and insights into the uncanny valley hypothesis.</p></div>","PeriodicalId":33538,"journal":{"name":"Virtual Reality Intelligent Hardware","volume":"4 5","pages":"Pages 393-405"},"PeriodicalIF":0.0,"publicationDate":"2022-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S209657962200078X/pdf?md5=006cc0cfa178979a31eb04f193763508&pid=1-s2.0-S209657962200078X-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125618019","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
Computer graphics for metaverse
Q1 Computer Science | Pub Date: 2022-10-01 | DOI: 10.1016/j.vrih.2022.10.001
Nadia Magnenat Thalmann, Jinman Kim, George Papagiannakis, Daniel Thalmann, Bin Sheng
{"title":"Computer graphics for metaverse","authors":"Nadia Magnenat Thalmann ,&nbsp;Jinman Kim ,&nbsp;George Papagiannakis ,&nbsp;Daniel Thalmann ,&nbsp;Bin Sheng","doi":"10.1016/j.vrih.2022.10.001","DOIUrl":"10.1016/j.vrih.2022.10.001","url":null,"abstract":"","PeriodicalId":33538,"journal":{"name":"Virtual Reality Intelligent Hardware","volume":"4 5","pages":"Pages ii-iv"},"PeriodicalIF":0.0,"publicationDate":"2022-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2096579622000997/pdfft?md5=3e3da9d06de21804e1f0dbe952be9beb&pid=1-s2.0-S2096579622000997-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121178249","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
DSD-MatchingNet: Deformable sparse-to-dense feature matching for learning accurate correspondences
Q1 Computer Science | Pub Date: 2022-10-01 | DOI: 10.1016/j.vrih.2022.08.007
Yicheng Zhao, Han Zhang, Ping Lu, Ping Li, Enhua Wu, Bin Sheng

Background

Exploring correspondences across multiview images is the basis of various computer vision tasks. However, most existing methods have limited accuracy under challenging conditions.

Method

To learn more robust and accurate correspondences, we propose DSD-MatchingNet for local feature matching in this study. First, we develop a deformable feature extraction module to obtain multilevel feature maps, which harvest contextual information from dynamic receptive fields. The dynamic receptive fields provided by the deformable convolution network ensure that our method obtains dense and robust correspondence. Second, we utilize sparse-to-dense matching with symmetry of correspondence to implement accurate pixel-level matching, which enables our method to produce more accurate correspondences.
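
The "symmetry of correspondence" constraint can be illustrated with a mutual nearest-neighbour check between dense descriptors of two images. The NumPy sketch below shows only that filtering step on random descriptors; it is not the DSD-MatchingNet implementation, and the descriptor shapes are assumptions.

```python
# Illustrative mutual nearest-neighbour filtering, one common way to enforce
# symmetric correspondences; not the authors' DSD-MatchingNet code.
import numpy as np

def mutual_nearest_matches(desc_a, desc_b):
    """Return (i, j) pairs where descriptors a_i and b_j are each other's nearest neighbour."""
    sim = desc_a @ desc_b.T        # cosine similarity matrix (N, M); descriptors L2-normalised
    nn_ab = sim.argmax(axis=1)     # best match in B for every descriptor in A
    nn_ba = sim.argmax(axis=0)     # best match in A for every descriptor in B
    return [(i, j) for i, j in enumerate(nn_ab) if nn_ba[j] == i]

rng = np.random.default_rng(0)
a = rng.normal(size=(128, 64)); a /= np.linalg.norm(a, axis=1, keepdims=True)
b = rng.normal(size=(150, 64)); b /= np.linalg.norm(b, axis=1, keepdims=True)
print(len(mutual_nearest_matches(a, b)), "mutually consistent matches")
```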

Result

Experiments show that our proposed DSD-MatchingNet achieves a better performance on the image matching benchmark, as well as on the visual localization benchmark. Specifically, our method achieved 91.3% mean matching accuracy on the HPatches dataset and 99.3% visual localization recalls on the Aachen Day-Night dataset.

{"title":"DSD-MatchingNet: Deformable sparse-to-dense feature matching for learning accurate correspondences","authors":"Yicheng Zhao ,&nbsp;Han Zhang ,&nbsp;Ping Lu ,&nbsp;Ping Li ,&nbsp;Enhua Wu ,&nbsp;Bin Sheng","doi":"10.1016/j.vrih.2022.08.007","DOIUrl":"10.1016/j.vrih.2022.08.007","url":null,"abstract":"<div><h3>Background</h3><p>Exploring correspondences across multiview images is the basis of various computer vision tasks. However, most existing methods have limited accuracy under challenging conditions.</p></div><div><h3>Method</h3><p>To learn more robust and accurate correspondences, we propose DSD-MatchingNet for local feature matching in this study. First, we develop a deformable feature extraction module to obtain multilevel feature maps, which harvest contextual information from dynamic receptive fields. The dynamic receptive fields provided by the deformable convolution network ensure that our method obtains dense and robust correspondence. Second, we utilize sparse-to-dense matching with symmetry of correspondence to implement accurate pixel-level matching, which enables our method to produce more accurate correspondences.</p></div><div><h3>Result</h3><p>Experiments show that our proposed DSD-MatchingNet achieves a better performance on the image matching benchmark, as well as on the visual localization benchmark. Specifically, our method achieved 91.3% mean matching accuracy on the HPatches dataset and 99.3% visual localization recalls on the Aachen Day-Night dataset.</p></div>","PeriodicalId":33538,"journal":{"name":"Virtual Reality Intelligent Hardware","volume":"4 5","pages":"Pages 432-443"},"PeriodicalIF":0.0,"publicationDate":"2022-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2096579622000821/pdf?md5=b3b9d92de1f1714de8cb8ab71d43808f&pid=1-s2.0-S2096579622000821-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133210436","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Virtual reality for immersive multi-user firefighter-training scenarios
Q1 Computer Science | Pub Date: 2022-10-01 | DOI: 10.1016/j.vrih.2022.08.006
Philipp Braun, Michaela Grafelmann, Felix Gill, Hauke Stolz, Johannes Hinckeldeyn, Ann-Kathrin Lange

Background

Virtual reality (VR) applications can be used to provide comprehensive training scenarios that are difficult or impossible to represent in physical configurations. This includes team training for emergency services such as firefighting. Creating a high level of immersion is essential for achieving effective virtual training. In this respect, motion-capture systems offer the possibility of creating highly immersive multi-user training experiences, including full-body avatars.

Methods

This study presents a preliminary prototype that helps extinguish a virtual fire on a container ship as a VR training scenario. The prototype provides a full-body and multi-user VR experience based on the synthesis of position data provided by the motion-capture system and orientation data from the VR headsets. Moreover, the prototype facilitates an initial evaluation of the results.
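
As a rough illustration of the position/orientation synthesis described above, the sketch below builds an avatar head transform from a motion-capture position and an HMD orientation quaternion; the function names and the (w, x, y, z) quaternion convention are assumptions, not the prototype's actual interfaces.

```python
# Minimal fusion sketch: position from the motion-capture system, orientation
# from the VR headset; names and conventions are assumed for illustration.
import numpy as np

def quat_to_matrix(q):
    """Convert a unit quaternion (w, x, y, z) into a 3x3 rotation matrix."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def head_pose(mocap_position, hmd_orientation):
    """Build a 4x4 avatar head transform from the fused tracking data."""
    pose = np.eye(4)
    pose[:3, :3] = quat_to_matrix(hmd_orientation)  # orientation from the HMD sensors
    pose[:3, 3] = mocap_position                    # position from the mocap system
    return pose

print(head_pose(np.array([1.2, 1.7, 0.4]), np.array([1.0, 0.0, 0.0, 0.0])))
```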

Results

The results confirm the value of using VR for training procedures that are difficult to implement in the real world. Furthermore, the results show that motion-capture-based VR technologies are particularly useful for firefighting training, in which participants can collaborate in difficult-to-access environments. However, this study also indicates that increasing the immersion in such training remains a challenge.

Conclusions

This study presents a prototypical VR application that enables the multi-user training of maritime firefighters. Future research should evaluate the initial results, provide more extensive training scenarios, and measure the training progress.

{"title":"Virtual reality for immersive multi-user firefighter-training scenarios","authors":"Philipp Braun,&nbsp;Michaela Grafelmann,&nbsp;Felix Gill,&nbsp;Hauke Stolz,&nbsp;Johannes Hinckeldeyn,&nbsp;Ann-Kathrin Lange","doi":"10.1016/j.vrih.2022.08.006","DOIUrl":"10.1016/j.vrih.2022.08.006","url":null,"abstract":"<div><h3>Background</h3><p>Virtual reality (VR) applications can be used to provide comprehensive training scenarios that are difficult or impossible to represent in physical configurations. This includes team training for emergency services such as firefighting. Creating a high level of immersion is essential for achieving effective virtual training. In this respect, motion-capture systems offer the possibility of creating highly immersive multi-user training experiences, including full-body avatars.</p></div><div><h3>Methods</h3><p>This study presents a preliminary prototype that helps extinguish a virtual fire on a container ship as a VR training scenario. The prototype provides a full-body and multi-user VR experience based on the synthesis of position data provided by the motion-capture system and orientation data from the VR headsets. Moreover, the prototype facilitates an initial evaluation of the results.</p></div><div><h3>Results</h3><p>The results confirm the value of using VR for training procedures that are difficult to implement in the real world. Furthermore, the results show that motion-capture-based VR technologies are particularly useful for firefighting training, in which participants can collaborate in difficult-to-access environments. However, this study also indicates that increasing the immersion in such training remains a challenge.</p></div><div><h3>Conclusions</h3><p>This study presents a prototypical VR application that enables the multi-user training of maritime firefighters. Future research should evaluate the initial results, provide more extensive training scenarios, and measure the training progress.</p></div>","PeriodicalId":33538,"journal":{"name":"Virtual Reality Intelligent Hardware","volume":"4 5","pages":"Pages 406-417"},"PeriodicalIF":0.0,"publicationDate":"2022-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S209657962200081X/pdf?md5=1287c6b99ffe058108e336cc8bf7aca8&pid=1-s2.0-S209657962200081X-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131573670","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
NPIPVis: A visualization system involving NBA visual analysis and integrated learning model prediction
Q1 Computer Science | Pub Date: 2022-10-01 | DOI: 10.1016/j.vrih.2022.08.008
Zhuo Shi, Mingrui Li, Meng Wang, Jing Shen, Wei Chen, Xiaonan Luo

Background

Data-driven event analysis has gradually become the backbone of modern competitive sports analysis. Competitive sports data analysis tasks increasingly use computer vision and machine-learning models for intelligent data analysis. However, existing sports visualization systems focus on player–team data visualization; they are not intuitive enough for visualizing team season win–loss data and game time-series data, and they neglect the prediction of all-star players.

Methods

This study used an interactive visualization system designed with parallel aggregated ordered hypergraph dynamic hypergraphs, Calliope visualization data story technology, and iStoryline narrative visualization technology to visualize the regular statistics and game-time data of players and teams. NPIPVis includes dynamic hypergraphs of a team's wins and losses and game plot narrative visualization components. In addition, an integrated learning-based all-star player prediction model, SRR-voting, was proposed. Starting from the existing minority and majority samples, it applies the synthetic minority oversampling technique (SMOTE) and RandomUnderSampler methods to generate and eliminate samples of a certain size, balancing the number of all-star and average players in the datasets. Next, a random forest algorithm was introduced to extract and construct player features and was combined with the voting ensemble model to predict all-star players; GridSearchCV was used to optimize the hyperparameters of each model in the ensemble, together with five-fold cross-validation to improve the generalization ability of the model. Finally, the SHapley Additive exPlanations (SHAP) model was introduced to enhance the interpretability of the model.
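
Because the abstract names concrete building blocks (SMOTE, RandomUnderSampler, a random forest, soft voting, GridSearchCV, and five-fold cross-validation), a hedged end-to-end sketch using scikit-learn and imbalanced-learn is given below. The synthetic dataset, the second base classifier, and the parameter grid are placeholders, not the authors' configuration.

```python
# Sketch of an SMOTE + undersampling + voting pipeline in the spirit of
# SRR-voting; dataset, base learners, and grids are illustrative assumptions.
from imblearn.over_sampling import SMOTE
from imblearn.under_sampling import RandomUnderSampler
from imblearn.pipeline import Pipeline
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

# Imbalanced stand-in for the all-star (minority) vs. average-player (majority) data.
X, y = make_classification(n_samples=1000, n_features=20, weights=[0.95, 0.05], random_state=0)

ensemble = VotingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0)),
                ("lr", LogisticRegression(max_iter=1000))],
    voting="soft",
)

pipeline = Pipeline([
    ("oversample", SMOTE(random_state=0)),                 # synthesize minority samples
    ("undersample", RandomUnderSampler(random_state=0)),   # trim majority samples
    ("vote", ensemble),
])

param_grid = {"vote__rf__n_estimators": [100, 200], "vote__rf__max_depth": [None, 10]}
search = GridSearchCV(pipeline, param_grid, cv=5, scoring="f1")  # five-fold cross-validation
search.fit(X, y)
print("best F1:", search.best_score_, "params:", search.best_params_)
# A SHAP TreeExplainer on the fitted random forest could then indicate which
# player features drive the all-star prediction.
```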

Results

The experimental results of comparing the SRR-voting model with six common models show that the accuracy, F1-score, and recall metrics are significantly improved, which verifies the effectiveness and practicality of the SRR-voting model.

Conclusions

This study combines data visualization and machine learning to design a National Basketball Association data visualization system to help the general audience visualize game data and predict all-star players; this can also be extended to other sports events or related fields.

{"title":"NPIPVis: A visualization system involving NBA visual analysis and integrated learning model prediction","authors":"Zhuo Shi ,&nbsp;Mingrui Li ,&nbsp;Meng Wang ,&nbsp;Jing Shen ,&nbsp;Wei Chen ,&nbsp;Xiaonan Luo","doi":"10.1016/j.vrih.2022.08.008","DOIUrl":"10.1016/j.vrih.2022.08.008","url":null,"abstract":"<div><h3>Background</h3><p>Data-driven event analysis has gradually become the backbone of modern competitive sports analysis. Competitive sports data analysis tasks increasingly use computer vision and machine-learning models for intelligent data analysis. Existing sports visualization systems focus on the player–team data visualization, which is not intuitive enough for team season win–loss data and game time-series data visualization and neglects the prediction of all-star players.</p></div><div><h3>Methods</h3><p>This study used an interactive visualization system designed with parallel aggregated ordered hypergraph dynamic hypergraphs, Calliope visualization data story technology, and iStoryline narrative visualization technology to visualize the regular statistics and game time data of players and teams. NPIPVis includes dynamic hypergraphs of a teamʹs wins and losses and game plot narrative visualization components. In addition, an integrated learning-based all-star player prediction model, SRR-voting, which starts from the existing minority and majority samples, was proposed using the synthetic minority oversampling technique and RandomUnderSampler methods to generate and eliminate samples of a certain size to balance the number of allstar and average players in the datasets. Next, a random forest algorithm was introduced to extract and construct the features of players and combined with the voting integrated model to predict the all-star players, using Grid- SearchCV, to optimize the hyperparameters of each model in integrated learning and then combined with five-fold cross-validation to improve the generalization ability of the model. Finally, the SHapley Additive exPlanations (SHAP) model was introduced to enhance the interpretability of the model.</p></div><div><h3>Results</h3><p>The experimental results of comparing the SRR-voting model with six common models show that the accuracy, F1-score, and recall metrics are significantly improved, which verifies the effectiveness and practicality of the SRR-voting model.</p></div><div><h3>Conclusions</h3><p>This study combines data visualization and machine learning to design a National Basketball Association data visualization system to help the general audience visualize game data and predict all-star players; this can also be extended to other sports events or related fields.</p></div>","PeriodicalId":33538,"journal":{"name":"Virtual Reality Intelligent Hardware","volume":"4 5","pages":"Pages 444-458"},"PeriodicalIF":0.0,"publicationDate":"2022-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2096579622000833/pdf?md5=1a85324d30377a749ed5c9c70fb6f227&pid=1-s2.0-S2096579622000833-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125618325","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A simple, stroke-based method for gesture drawing
Q1 Computer Science | Pub Date: 2022-10-01 | DOI: 10.1016/j.vrih.2022.08.004
Lesley Istead, Joe Istead, Andreea Pocol, Craig S. Kaplan

Background

Gesture drawing is a type of fluid, fast sketch with loose and roughly drawn lines that captures the motion and feeling of a subject. Although style transfer methods, which are able to learn a style from an input image and apply it to a secondary image, can reproduce many styles, they are currently unable to produce the flowing strokes of gesture drawings.

Method

In this paper, we present a method for producing gesture drawings that roughly depict objects or scenes with loose dancing contours and frantic textures. By following a gradient field, our method adapts stroke-based painterly rendering algorithms to produce long curved strokes. A rough, overdrawn appearance is created through a progressive refinement. In addition, we produce rough hatch strokes by altering the stroke direction. These add optional shading to gesture drawings.
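
To make the "following a gradient field" step concrete, the sketch below traces one long stroke by repeatedly stepping perpendicular to the local image gradient; it illustrates the general idea only, and the step size and stopping rules are assumptions rather than the authors' renderer.

```python
# Illustrative stroke tracing along the contour direction of a grayscale image;
# not the paper's rendering algorithm.
import numpy as np

def trace_stroke(gray, start, length=200, step=1.0):
    """Return a polyline that follows the contour direction from `start` (x, y)."""
    gy, gx = np.gradient(gray.astype(float))   # image gradients along rows and columns
    h, w = gray.shape
    x, y = float(start[0]), float(start[1])
    points = [(x, y)]
    for _ in range(length):
        ix, iy = int(round(x)), int(round(y))
        if not (0 <= ix < w and 0 <= iy < h):
            break
        dx, dy = gx[iy, ix], gy[iy, ix]
        norm = np.hypot(dx, dy)
        if norm < 1e-6:                        # flat region: end the stroke
            break
        x += step * (-dy / norm)               # perpendicular to the gradient
        y += step * (dx / norm)                # = along the contour
        points.append((x, y))
    return points

img = np.linspace(0, 1, 100)[None, :].repeat(100, axis=0)   # simple horizontal ramp
print(len(trace_stroke(img, start=(50, 50))), "stroke points")
```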

Results

A wealth of parameters provide users the ability to adjust the output style, from short and rapid strokes to long and fluid strokes, and from swirling to straight lines. Potential stylistic outputs include pen-and-ink and colored pencil. We present several generated gesture drawings and discuss the application of our method to video.

Conclusion

Our stroke-based rendering algorithm produces convincing gesture drawings with numerous controllable parameters, permitting the creation of a variety of styles.

{"title":"A simple, stroke-based method for gesture drawing","authors":"Lesley Istead ,&nbsp;Joe Istead ,&nbsp;Andreea Pocol ,&nbsp;Craig S. Kaplan","doi":"10.1016/j.vrih.2022.08.004","DOIUrl":"10.1016/j.vrih.2022.08.004","url":null,"abstract":"<div><h3>Background</h3><p>Gesture drawing is a type of fluid, fast sketch with loose and roughly drawn lines that captures the motion and feeling of a subject. Although style transfer methods, which are able to learn a style from an input image and apply it to a secondary image, can reproduce many styles, they are currently unable to produce the flowing strokes of gesture drawings.</p></div><div><h3>Method</h3><p>In this paper, we present a method for producing gesture drawings that roughly depict objects or scenes with loose dancing contours and frantic textures. By following a gradient field, our method adapts stroke-based painterly rendering algorithms to produce long curved strokes. A rough, overdrawn appearance is created through a progressive refinement. In addition, we produce rough hatch strokes by altering the stroke direction. These add optional shading to gesture drawings.</p></div><div><h3>Results</h3><p>A wealth of parameters provide users the ability to adjust the output style, from short and rapid strokes to long and fluid strokes, and from swirling to straight lines. Potential stylistic outputs include pen-and-ink and colored pencil. We present several generated gesture drawings and discuss the application of our method to video.</p></div><div><h3>Conclusion</h3><p>Our stroke-based rendering algorithm produces convincing gesture drawings with numerous controllable parameters, permitting the creation of a variety of styles.</p></div>","PeriodicalId":33538,"journal":{"name":"Virtual Reality Intelligent Hardware","volume":"4 5","pages":"Pages 381-392"},"PeriodicalIF":0.0,"publicationDate":"2022-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2096579622000791/pdf?md5=5d6ede6955247fdfc333f73a0cddaa0d&pid=1-s2.0-S2096579622000791-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122622709","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
RADepthNet: Reflectance-aware monocular depth estimation
Q1 Computer Science | Pub Date: 2022-10-01 | DOI: 10.1016/j.vrih.2022.08.005
Chuxuan Li, Ran Yi, Saba Ghazanfar Ali, Lizhuang Ma, Enhua Wu, Jihong Wang, Lijuan Mao, Bin Sheng

Background

Monocular depth estimation aims to predict a dense depth map from a single RGB image and has important applications in 3D reconstruction, automatic driving, and augmented reality. However, existing methods feed the original RGB image directly into the model to extract depth features, without suppressing the interference of depth-irrelevant information, which degrades depth-estimation accuracy and leads to inferior performance.

Methods

To remove the influence of depth-irrelevant information and improve the depth-prediction accuracy, we propose RADepthNet, a novel reflectance-guided network that fuses boundary features. Specifically, our method predicts depth maps using the following three steps: (1) Intrinsic Image Decomposition. We propose a reflectance extraction module consisting of an encoder-decoder structure to extract the depth-related reflectance. Through an ablation study, we demonstrate that the module can reduce the influence of illumination on depth estimation. (2) Boundary Detection. A boundary extraction module, consisting of an encoder, refinement block, and upsample block, was proposed to better predict the depth at object boundaries utilizing gradient constraints. (3) Depth Prediction Module. We use an encoder different from (2) to obtain depth features from the reflectance map and fuse boundary features to predict depth. In addition, we proposed FIFADataset, a depth-estimation dataset applied in soccer scenarios.
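
A structural sketch of the three-step design is given below in PyTorch; the layer choices, channel widths, and fusion by simple concatenation are placeholders standing in for the unspecified RADepthNet architecture.

```python
# Placeholder modules mirroring the three steps above; not the real RADepthNet.
import torch
import torch.nn as nn

def conv_block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True))

class ReflectanceDecomposer(nn.Module):          # step 1: intrinsic image decomposition
    def __init__(self):
        super().__init__()
        self.encoder = conv_block(3, 32)
        self.decoder = nn.Conv2d(32, 3, 3, padding=1)   # reflectance map
    def forward(self, rgb):
        return self.decoder(self.encoder(rgb))

class BoundaryModule(nn.Module):                 # step 2: boundary detection
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(conv_block(3, 16), nn.Conv2d(16, 1, 3, padding=1))
    def forward(self, rgb):
        return self.net(rgb)

class DepthHead(nn.Module):                      # step 3: depth prediction with fused features
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(conv_block(3 + 1, 32), nn.Conv2d(32, 1, 3, padding=1))
    def forward(self, reflectance, boundary):
        return self.net(torch.cat([reflectance, boundary], dim=1))

class RADepthNetSketch(nn.Module):
    def __init__(self):
        super().__init__()
        self.reflectance = ReflectanceDecomposer()
        self.boundary = BoundaryModule()
        self.depth = DepthHead()
    def forward(self, rgb):
        return self.depth(self.reflectance(rgb), self.boundary(rgb))

print(RADepthNetSketch()(torch.randn(1, 3, 64, 64)).shape)   # torch.Size([1, 1, 64, 64])
```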

Results

Extensive experiments on a public dataset and our proposed FIFADataset show that our method achieves state-of-the-art performance.

{"title":"RADepthNet: Reflectance-aware monocular depth estimation","authors":"Chuxuan Li ,&nbsp;Ran Yi ,&nbsp;Saba Ghazanfar Ali ,&nbsp;Lizhuang Ma ,&nbsp;Enhua Wu ,&nbsp;Jihong Wang ,&nbsp;Lijuan Mao ,&nbsp;Bin Sheng","doi":"10.1016/j.vrih.2022.08.005","DOIUrl":"10.1016/j.vrih.2022.08.005","url":null,"abstract":"<div><h3>Background</h3><p>Monocular depth estimation aims to predict a dense depth map from a single RGB image, and has important applications in 3D reconstruction, automatic driving, and augmented reality. However, existing methods directly feed the original RGB image into the model to extract depth features without avoiding the interference of depth-irrelevant information on depth-estimation accuracy, which leads to inferior performance.</p></div><div><h3>Methods</h3><p>To remove the influence of depth-irrelevant information and improve the depth-prediction accuracy, we propose RADepthNet, a novel reflectance-guided network that fuses boundary features. Specifically, our method predicts depth maps using the following three steps: (1) Intrinsic Image Decomposition. We propose a reflectance extraction module consisting of an encoder-decoder structure to extract the depth-related reflectance. Through an ablation study, we demonstrate that the module can reduce the influence of illumination on depth estimation. (2) Boundary Detection. A boundary extraction module, consisting of an encoder, refinement block, and upsample block, was proposed to better predict the depth at object boundaries utilizing gradient constraints. (3) Depth Prediction Module<strong>.</strong> We use an encoder different from (2) to obtain depth features from the reflectance map and fuse boundary features to predict depth. In addition, we proposed FIFADataset, a depth-estimation dataset applied in soccer scenarios.</p></div><div><h3>Results</h3><p>Extensive experiments on a public dataset and our proposed FIFADataset show that our method achieves state-of-the-art performance.</p></div>","PeriodicalId":33538,"journal":{"name":"Virtual Reality Intelligent Hardware","volume":"4 5","pages":"Pages 418-431"},"PeriodicalIF":0.0,"publicationDate":"2022-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2096579622000808/pdf?md5=fc1d9cddf0180762f5b3a461f1d2e01d&pid=1-s2.0-S2096579622000808-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116232217","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Integrating digital twins and deep learning for medical image analysis in the era of COVID-19
Q1 Computer Science | Pub Date: 2022-08-01 | DOI: 10.1016/j.vrih.2022.03.002
Imran Ahmed, Misbah Ahmad, Gwanggil Jeon

Background

Digital twins are virtual representations of devices and processes that capture the physical properties of the environment and operational algorithms/techniques in the context of medical devices and technologies. Digital twins may allow healthcare organizations to determine methods of improving medical processes, enhancing patient experience, lowering operating expenses, and extending the value of care. During the present COVID-19 pandemic, various medical devices and processes, such as X-ray and CT scan machines, are constantly being used to collect and analyze medical images. When collecting and processing an extensive volume of data in the form of images, machines and processes sometimes suffer from system failures, creating critical issues for hospitals and patients.

Methods

To address this, we introduce a digital-twin-based smart healthcare system integrated with medical devices to collect information regarding the current health condition, configuration, and maintenance history of the device/machine/system. Furthermore, medical images, that is, X-rays, are analyzed using a deep-learning model to detect COVID-19 infection. The designed system is based on the cascade recurrent convolution neural network (RCNN) architecture. In this architecture, the detector stages are deeper and more sequentially selective against small and close false positives. The architecture is a multi-stage extension of the RCNN model, trained sequentially by using the output of one stage to train the next. At each stage, the bounding boxes are adjusted to locate a suitable value of the nearest false positives during the training of the different stages. In this manner, the arrangement of detectors is adjusted to increase the intersection over union, overcoming the problem of overfitting. We fine-tune the model on X-ray images, as it was previously trained on another dataset.
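
To illustrate the cascade principle of progressively stricter IoU thresholds (box regression between stages is omitted), the toy sketch below re-labels proposals as positives stage by stage; the 0.5/0.6/0.7 thresholds are commonly cited cascade defaults and are assumed here, not taken from this paper.

```python
# Toy illustration of cascaded IoU thresholds; not the trained detector.
def iou(box_a, box_b):
    """Intersection over union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def cascade_positive_sets(proposals, gt_box, thresholds=(0.5, 0.6, 0.7)):
    """Per stage, keep only the proposals counted as positives under its threshold."""
    stages = []
    for t in thresholds:
        proposals = [p for p in proposals if iou(p, gt_box) >= t]   # stricter each stage
        stages.append((t, list(proposals)))
    return stages

gt = [10, 10, 60, 60]
props = [[12, 11, 58, 62], [15, 15, 65, 65], [18, 18, 68, 68], [0, 0, 40, 40]]
for t, pos in cascade_positive_sets(props, gt):
    print(f"IoU >= {t}: {len(pos)} positive proposals")
```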

Results

The developed system achieves good accuracy in detecting COVID-19. The experimental outcomes reveal the efficiency of the detection architecture, which yields a mean average precision of 0.94.

{"title":"Integrating digital twins and deep learning for medical image analysis in the era of COVID-19","authors":"Imran Ahmed ,&nbsp;Misbah Ahmad ,&nbsp;Gwanggil Jeon","doi":"10.1016/j.vrih.2022.03.002","DOIUrl":"10.1016/j.vrih.2022.03.002","url":null,"abstract":"<div><h3>Background</h3><p>Digital twins are virtual representations of devices and processes that capture the physical properties of the environment and operational algorithms/techniques in the context of medical devices and technologies. Digital twins may allow healthcare organizations to determine methods of improving medical processes, enhancing patient experience, lowering operating expenses, and extending the value of care. During the present COVID-19 pandemic, various medical devices, such as X-rays and CT scan machines and processes, are constantly being used to collect and analyze medical images. When collecting and processing an extensive volume of data in the form of images, machines and processes sometimes suffer from system failures, creating critical issues for hospitals and patients.</p></div><div><h3>Methods</h3><p>To address this, we introduce a digital-twin-based smart healthcare system integrated with medical devices to collect information regarding the current health condition, configuration, and maintenance history of the device/machine/system. Furthermore, medical images, that is, X-rays, are analyzed by using a deep-learning model to detect the infection of COVID-19. The designed system is based on the cascade recurrent convolution neural network (RCNN) architecture. In this architecture, the detector stages are deeper and more sequentially selective against small and close false positives. This architecture is a multi-stage extension of the RCNN model and sequentially trained using the output of one stage for training the other. At each stage, the bounding boxes are adjusted to locate a suitable value of the nearest false positives during the training of the different stages. In this manner, the arrangement of detectors is adjusted to increase the intersection over union, overcoming the problem of overfitting. We train the model by using X-ray images as the model was previously trained on another dataset.</p></div><div><h3>Results</h3><p>The developed system achieves good accuracy during the detection phase of COVID-19. The experimental outcomes reveal the efficiency of the detection architecture, which yields a mean average precision rate of 0.94.</p></div>","PeriodicalId":33538,"journal":{"name":"Virtual Reality Intelligent Hardware","volume":"4 4","pages":"Pages 292-305"},"PeriodicalIF":0.0,"publicationDate":"2022-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2096579622000183/pdf?md5=1f5a53060a043bec60b5fd3de876ef4d&pid=1-s2.0-S2096579622000183-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42600248","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
Digital twin intelligent system for industrial internet of things-based big data management and analysis in cloud environments
Q1 Computer Science | Pub Date: 2022-08-01 | DOI: 10.1016/j.vrih.2022.05.003
Christos L. Stergiou, Kostas E. Psannis

This work surveys and illustrates multiple open challenges in the field of industrial Internet of Things (IoT)-based big data management and analysis in cloud environments. Challenges arising from the fields of machine learning in cloud infrastructures, artificial intelligence techniques for big data analytics in cloud environments, and federated learning cloud systems are elucidated. Additionally, reinforcement learning, a novel technique that allows large cloud-based data centers to allocate resources more energy-efficiently, is examined. Moreover, we propose an architecture that attempts to combine the features offered by several cloud providers to achieve an energy-efficient industrial IoT-based big data management framework (EEIBDM) established outside of every user in the cloud. IoT data can be integrated with techniques such as reinforcement and federated learning to achieve a digital twin scenario for the virtual representation of industrial IoT-based big data of machines and room temperatures. Furthermore, we propose an algorithm for determining the energy consumption of the infrastructure by evaluating the EEIBDM framework. Finally, future directions for the expansion of this research are discussed.
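
As a toy illustration of how reinforcement learning can trade energy use against capacity shortfall, the sketch below runs tabular Q-learning over an invented state/action model; it is not the EEIBDM algorithm, and the reward weights are arbitrary assumptions.

```python
# Toy Q-learning for energy-aware server allocation; the environment and
# reward are invented for illustration only.
import numpy as np

rng = np.random.default_rng(0)
LOAD_LEVELS, MAX_SERVERS = 4, 4                  # states: demand level; actions: servers kept on
Q = np.zeros((LOAD_LEVELS, MAX_SERVERS + 1))
alpha, gamma, epsilon = 0.1, 0.9, 0.1

def reward(load, servers):
    energy_cost = 1.0 * servers                  # each active server consumes energy
    unmet = max(0, load - servers)               # capacity shortfall
    return -(energy_cost + 5.0 * unmet)          # shortfall penalized more than energy

load = int(rng.integers(LOAD_LEVELS))
for _ in range(20000):
    if rng.random() < epsilon:                   # epsilon-greedy exploration
        action = int(rng.integers(MAX_SERVERS + 1))
    else:
        action = int(Q[load].argmax())
    r = reward(load, action)
    next_load = int(rng.integers(LOAD_LEVELS))   # demand fluctuates randomly
    Q[load, action] += alpha * (r + gamma * Q[next_load].max() - Q[load, action])
    load = next_load

print("servers chosen per demand level:", Q.argmax(axis=1))   # expected: roughly [0, 1, 2, 3]
```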

{"title":"Digital twin intelligent system for industrial internet of things-based big data management and analysis in cloud environments","authors":"Christos L. Stergiou,&nbsp;Kostas E. Psannis","doi":"10.1016/j.vrih.2022.05.003","DOIUrl":"10.1016/j.vrih.2022.05.003","url":null,"abstract":"<div><p>This work surveys and illustrates multiple open challenges in the field of industrial Internet of Things (IoT)-based big data management and analysis in cloud environments. Challenges arising from the fields of machine learning in cloud infrastructures, artificial intelligence techniques for big data analytics in cloud environments, and federated learning cloud systems are elucidated. Additionally, reinforcement learning, which is a novel technique that allows large cloud-based data centers, to allocate more energy-efficient resources is examined. Moreover, we propose an architecture that attempts to combine the features offered by several cloud providers to achieve an energy-efficient industrial IoT-based big data management framework (EEIBDM) established outside of every user in the cloud. IoT data can be integrated with techniques such as reinforcement and federated learning to achieve a digital twin scenario for the virtual representation of industrial IoT-based big data of machines and room temperatures. Furthermore, we propose an algorithm for determining the energy consumption of the infrastructure by evaluating the EEIBDM framework. Finally, future directions for the expansion of this research are discussed.</p></div>","PeriodicalId":33538,"journal":{"name":"Virtual Reality Intelligent Hardware","volume":"4 4","pages":"Pages 279-291"},"PeriodicalIF":0.0,"publicationDate":"2022-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2096579622000444/pdf?md5=77ac7ba395219ea4a1f3583a51767386&pid=1-s2.0-S2096579622000444-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132922825","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 9
Deep inside molecules — digital twins at the nanoscale
Q1 Computer Science | Pub Date: 2022-08-01 | DOI: 10.1016/j.vrih.2022.03.001
Marc Baaden

Background

Digital twins offer rich potential for exploration in virtual reality (VR). Using interactive molecular simulation approaches, they enable a human operator to access the physical properties of molecular objects and to build, manipulate, and study their assemblies. Integrative modeling and drug design are important applications of this technology.

Methods

In this study, head-mounted virtual reality displays connected to molecular simulation engines were used to create interactive and immersive digital twins. They were used to perform tasks relevant to specific use cases.

Results

Three areas were investigated, including model building, rational design, and tangible models. Here, we report several membrane-embedded systems of ion channels, viral components, and artificial water channels. We were able to improve and create molecular designs based on digital twins.

Conclusions

The molecular application domain offers great opportunities, and most of the technical and technological aspects have been solved. Wider adoption is expected once the onboarding of VR is simplified and the technology gains wider acceptance.

{"title":"Deep inside molecules — digital twins at the nanoscale","authors":"Marc Baaden","doi":"10.1016/j.vrih.2022.03.001","DOIUrl":"10.1016/j.vrih.2022.03.001","url":null,"abstract":"<div><h3>Background</h3><p>Digital twins offer rich potential for exploration in virtual reality (VR). Using interactive molecular simulation approaches, they enable a human operator to access the physical properties of molecular objects and to build, manipulate, and study their assemblies. Integrative modeling and drug design are important applications of this technology.</p></div><div><h3>Methods</h3><p>In this study, head-mounted virtual reality displays connected to molecular simulation engines were used to create interactive and immersive digital twins. They were used to perform tasks relevant to specific use cases.</p></div><div><h3>Results</h3><p>Three areas were investigated, including model building, rational design, and tangible models. Here, we report several membrane-embedded systems of ion channels, viral components, and artificial water channels. We were able to improve and create molecular designs based on digital twins.</p></div><div><h3>Conclusions</h3><p>The molecular application domain offers great opportunities, and most of the technical and technological aspects have been solved. Wider adoption is expected once the onboarding of VR is simplified and the technology gains wider acceptance.</p></div>","PeriodicalId":33538,"journal":{"name":"Virtual Reality Intelligent Hardware","volume":"4 4","pages":"Pages 324-341"},"PeriodicalIF":0.0,"publicationDate":"2022-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2096579622000171/pdf?md5=c3f874da70ddd62619d89326c3770de9&pid=1-s2.0-S2096579622000171-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132137540","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4