
Virtual Reality Intelligent Hardware: Latest Publications

Multi-scale context-aware network for continuous sign language recognition
Q1 Computer Science Pub Date: 2024-08-01 DOI: 10.1016/j.vrih.2023.06.011
Senhua XUE, Liqing GAO, Liang WAN, Wei FENG

The hands and face are the most important parts for expressing sign language morphemes in sign language videos. However, we find that existing Continuous Sign Language Recognition (CSLR) methods either lack the mining of hand and face information in their visual backbones or rely on expensive, time-consuming external extractors to obtain it. In addition, signs have different lengths, whereas previous CSLR methods typically use a fixed-length window to segment the video to capture sequential features and then perform global temporal modeling, which disturbs the perception of complete signs. In this study, we propose a Multi-Scale Context-Aware network (MSCA-Net) to solve the aforementioned problems. Our MSCA-Net contains two main modules: (1) Multi-Scale Motion Attention (MSMA), which uses the differences among frames to perceive hand and face information at multiple spatial scales, replacing the heavy feature extractors; and (2) Multi-Scale Temporal Modeling (MSTM), which explores crucial temporal information in the sign language video at different temporal scales. We conduct extensive experiments on three widely used sign language datasets, i.e., RWTH-PHOENIX-Weather-2014, RWTH-PHOENIX-Weather-2014T, and CSL-Daily. The proposed MSCA-Net achieves state-of-the-art performance, demonstrating the effectiveness of our approach.
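As a sketch of the motion-attention idea, frame differences can be turned into spatial attention maps at several resolutions. The following is a minimal illustration assuming video features of shape (B, T, C, H, W); the class name, pooling scales, and 1x1 projections are illustrative assumptions, not the authors' exact design.

```python
# Minimal sketch: frame differences -> multi-scale spatial attention.
# Shapes, scales, and projections are assumptions, not the paper's design.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MotionAttention(nn.Module):
    def __init__(self, channels, scales=(1, 2, 4)):
        super().__init__()
        self.scales = scales
        # One 1x1 conv per scale maps the frame difference to an attention map.
        self.proj = nn.ModuleList(nn.Conv2d(channels, 1, 1) for _ in scales)

    def forward(self, x):                        # x: (B, T, C, H, W)
        b, t, c, h, w = x.shape
        diff = x[:, 1:] - x[:, :-1]              # temporal difference
        diff = torch.cat([diff, diff[:, -1:]], dim=1)   # pad back to T frames
        diff = diff.reshape(b * t, c, h, w)
        attn = torch.zeros(b * t, 1, h, w, device=x.device)
        for s, proj in zip(self.scales, self.proj):
            d = F.avg_pool2d(diff, s) if s > 1 else diff  # coarser spatial scale
            a = torch.sigmoid(proj(d))
            attn = attn + F.interpolate(a, size=(h, w), mode="bilinear",
                                        align_corners=False)
        attn = attn / len(self.scales)
        out = x.reshape(b * t, c, h, w) * attn   # reweight hand/face regions
        return out.reshape(b, t, c, h, w)
```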

Citations: 0
Robust blind image watermarking based on interest points
Q1 Computer Science Pub Date: 2024-08-01 DOI: 10.1016/j.vrih.2023.06.012
Zizhuo WANG, Kun HU, Chaoyangfan HUANG, Zixuan HU, Shuo YANG, Xingjun WANG

Digital watermarking technology plays an essential role in anti-counterfeiting and traceability. However, image watermarking algorithms are weak against hybrid attacks, especially geometric attacks such as cropping and rotation. We propose a robust blind image watermarking algorithm that combines stable interest points and deep learning networks to further improve robustness. First, to extract sparser and more stable interest points, we use the Superpoint algorithm to generate candidates and design a two-step screening procedure: we first keep the highest-probability point in each given region to ensure sparsity, and then filter for robust interest points under hybrid attacks to ensure high stability. The message is embedded in sub-blocks centered on the stable interest points using a deep learning-based framework. Different kinds of attacks and simulated noise are added during adversarial training to guarantee the robustness of the embedded blocks. We use the ConvNext network for watermark extraction and determine the decision threshold based on the decoded values of the unembedded sub-blocks. Extensive experimental results demonstrate that our algorithm improves the accuracy of the network in extracting information while ensuring high invisibility between the embedded image and the original cover image. Comparison with previous SOTA work reveals that our algorithm achieves better visual and numerical results under hybrid and geometric attacks.
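The first screening step (spatial sparsity) can be sketched as a per-cell argmax over detector scores. This is a minimal illustration; the cell size and array layout are assumptions, not the paper's exact procedure.

```python
# Minimal sketch: keep the highest-scoring interest point per grid cell.
import numpy as np

def sparsify_keypoints(points, scores, image_size, cell=32):
    """points: (N, 2) array of (x, y); scores: (N,); returns kept indices."""
    w, _ = image_size
    cols = int(np.ceil(w / cell))
    cell_id = (points[:, 1] // cell).astype(int) * cols \
            + (points[:, 0] // cell).astype(int)
    best = {}
    for i, cid in enumerate(cell_id):
        if cid not in best or scores[i] > scores[best[cid]]:
            best[cid] = i                       # highest score wins the cell
    return np.array(sorted(best.values()))

# The second step (stability) could re-run the detector on attacked copies of
# the image and keep only the points that reappear within a small radius.
```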

Citations: 0
S2ANet: Combining local spectral and spatial point grouping for point cloud processing
Q1 Computer Science Pub Date: 2024-08-01 DOI: 10.1016/j.vrih.2023.06.005
Yujie LIU, Xiaorui SUN, Wenbin SHAO, Yafu YUAN

Background

Despite the recent progress in 3D point cloud processing using deep convolutional neural networks, the inability to extract local features remains a challenging problem. In addition, existing methods consider only the spatial domain in the feature extraction process.

Methods

In this paper, we propose a spectral and spatial aggregation convolutional network (S2ANet), which combines spectral and spatial features for point cloud processing. First, we calculate the local frequency of the point cloud in the spectral domain. Then, we use the local frequency to group points and apply a spectral aggregation convolution module to extract features from the points grouped by local frequency. We simultaneously extract local features in the spatial domain to supplement the final features.
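One way to read "local frequency" is as the magnitude of the graph-Laplacian response, i.e., how far a point sits from the mean of its neighbors: small on flat regions (low frequency), large near edges and corners (high frequency). The sketch below groups points by such a proxy; the kNN size, the proxy itself, and the bin count are assumptions, not necessarily the paper's definition.

```python
# Minimal sketch: group points by a local-frequency proxy.
import numpy as np
from scipy.spatial import cKDTree

def group_by_local_frequency(points, k=16, n_groups=4):
    """points: (N, 3) array; returns a group id in [0, n_groups) per point."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k + 1)          # idx[:, 0] is the point itself
    neighbor_mean = points[idx[:, 1:]].mean(axis=1)
    freq = np.linalg.norm(points - neighbor_mean, axis=1)  # Laplacian magnitude
    # Quantile edges keep the groups roughly equal in size.
    edges = np.quantile(freq, np.linspace(0, 1, n_groups + 1)[1:-1])
    return np.digitize(freq, edges)
```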

Results

S2ANet was applied in several point cloud analysis tasks; it achieved state-of-the-art classification accuracies of 93.8%, 88.0%, and 83.1% on the ModelNet40, ShapeNetCore, and ScanObjectNN datasets, respectively. For indoor scene segmentation, training and testing were performed on the S3DIS dataset, and the mean intersection over union was 62.4%.

Conclusions

The proposed S2ANet can effectively capture the local geometric information of point clouds, thereby improving accuracy on various tasks.

Citations: 0
Generating animatable 3D cartoon faces from single portraits
Q1 Computer Science Pub Date: 2024-08-01 DOI: 10.1016/j.vrih.2023.06.010
Chuanyu PAN, Guowei YANG, Taijiang MU, Yu-Kun LAI

Background

With the development of virtual reality (VR) technology, there is a growing need for customized 3D avatars. However, traditional methods for 3D avatar modeling are either time-consuming or fail to retain the similarity to the person being modeled. This study presents a novel framework for generating animatable 3D cartoon faces from a single portrait image.

Methods

First, we transferred an input real-world portrait to a stylized cartoon image using StyleGAN. We then proposed a two-stage reconstruction method to recover a 3D cartoon face with detailed texture. Our two-stage strategy initially performs coarse estimation based on template models and subsequently refines the model by nonrigid deformation under landmark supervision. Finally, we proposed a semantic-preserving face-rigging method based on manually created templates and deformation transfer.
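The landmark-supervised refinement stage can be sketched as a small optimization over per-vertex offsets, balancing a landmark term against an edge-based smoothness term. Variable names, loss weights, and the optimizer are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch: non-rigid refinement under landmark supervision.
import torch

def refine_nonrigid(vertices, edges, lm_idx, lm_target,
                    steps=200, lam=10.0, lr=1e-2):
    """vertices: (V, 3); edges: (E, 2) long; lm_idx: (L,) long; lm_target: (L, 3)."""
    offsets = torch.zeros_like(vertices, requires_grad=True)
    opt = torch.optim.Adam([offsets], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        v = vertices + offsets
        # Pull landmark vertices toward their supervised target positions.
        loss_lm = ((v[lm_idx] - lm_target) ** 2).sum(dim=1).mean()
        # Penalize differing offsets on neighboring vertices (smooth deformation).
        d = offsets[edges[:, 0]] - offsets[edges[:, 1]]
        loss_smooth = (d ** 2).sum(dim=1).mean()
        (loss_lm + lam * loss_smooth).backward()
        opt.step()
    return (vertices + offsets).detach()
```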

Conclusions

Compared with prior art, qualitative and quantitative results show that our method performs better on accuracy, aesthetics, and similarity criteria. Furthermore, we demonstrated the capability of the proposed 3D model for real-time facial animation.

Citations: 0
ARGA-Unet: Advanced U-net segmentation model using residual grouped convolution and attention mechanism for brain tumor MRI image segmentation
Q1 Computer Science Pub Date: 2024-06-01 DOI: 10.1016/j.vrih.2023.05.001
Siyi XUN, Yan ZHANG, Sixu DUAN, Mingwei WANG, Jiangang CHEN, Tong TONG, Qinquan GAO, Chantong LAM, Menghan HU, Tao TAN

Background

Magnetic resonance imaging (MRI) has played an important role in the rapid growth of medical imaging diagnostic technology, especially in the diagnosis and treatment of brain tumors, owing to its non-invasive characteristics and superior soft tissue contrast. However, brain tumors are characterized by high non-uniformity and indistinct boundaries in MRI images because of their invasive and highly heterogeneous nature. In addition, labeling tumor areas is time-consuming and laborious.

Methods

To address these issues, this study uses a residual grouped convolution module, convolutional block attention module, and bilinear interpolation upsampling method to improve the classical segmentation network U-net. The influence of network normalization, loss function, and network depth on segmentation performance is further considered.
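As a sketch of the two named modules, the block below combines grouped convolutions with a residual connection and a CBAM-style channel-attention gate. The group count and reduction ratio are illustrative assumptions, and CBAM's spatial-attention half is omitted for brevity.

```python
# Minimal sketch: residual grouped convolution + channel attention.
import torch.nn as nn

class ResidualGroupedBlock(nn.Module):
    def __init__(self, channels, groups=8, reduction=16):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, groups=groups),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1, groups=groups),
            nn.BatchNorm2d(channels),
        )
        # Channel attention: squeeze (global pool) -> excite (MLP) -> gate.
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        y = self.body(x)
        y = y * self.attn(y)          # reweight channels
        return self.act(x + y)        # residual connection
```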

Results

In the experiments, the Dice score of the proposed segmentation model reached 97.581%, which is 12.438% higher than that of traditional U-net, demonstrating the effective segmentation of MRI brain tumor images.
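For reference, the Dice score reported above measures overlap between the predicted and ground-truth masks; a minimal binary form:

```python
# Dice = 2|P ∩ G| / (|P| + |G|), on binary masks of equal shape.
import numpy as np

def dice(pred, target, eps=1e-7):
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
```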

Conclusions

In conclusion, the improved U-net network achieves good segmentation of brain tumor MRI images.

Citations: 0
Face animation based on multiple sources and perspective alignment
Q1 Computer Science Pub Date: 2024-06-01 DOI: 10.1016/j.vrih.2024.04.002
Yuanzong Mei, Wenyi Wang, Xi Liu, Wei Yong, Weijie Wu, Yifan Zhu, Shuai Wang, Jianwen Chen

Background

Face image animation generates a synthetic human face video that harmoniously integrates the identity derived from the source image and facial motion obtained from the driving video. This technology could be beneficial in multiple medical fields, such as diagnosis and privacy protection. Previous studies on face animation often relied on a single source image to generate an output video. With a significant pose difference between the source image and the driving frame, the quality of the generated video is likely to be suboptimal because the source image may not provide sufficient features for the warped feature map.

Methods

In this study, we propose a novel face-animation scheme based on multiple sources and perspective alignment to address these issues. We first introduce a multiple-source sampling and selection module to screen the optimal source image set from the provided driving video. We then propose an inter-frame interpolation and alignment module to further eliminate the misalignment between the selected source image and the driving frame.
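One plausible form of the source-selection step, assuming a head pose (yaw, pitch, roll) has already been estimated for every frame of the driving video: greedy farthest-point sampling spreads the selected sources over pose space so that some source is close to any driving pose. The pose estimator and set size are assumptions, not the paper's module.

```python
# Minimal sketch: pick pose-diverse source frames from the driving video.
import numpy as np

def select_sources(poses, n_sources=4):
    """poses: (T, 3) per-frame (yaw, pitch, roll); returns selected frame ids."""
    selected = [int(np.argmin(np.abs(poses[:, 0])))]    # start near frontal yaw
    for _ in range(n_sources - 1):
        # Distance from each frame to its nearest already-selected source.
        d = np.linalg.norm(poses[:, None] - poses[selected], axis=2).min(axis=1)
        selected.append(int(d.argmax()))                # farthest-point sampling
    return selected
```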

Conclusions

The proposed method exhibits superior performance in terms of objective metrics and visual quality in large-angle animation scenes compared to other state-of-the-art face animation methods. It indicates the effectiveness of the proposed method in addressing the distortion issues in large-angle animation.

Citations: 0
A review of medical ocular image segmentation
Q1 Computer Science Pub Date: 2024-06-01 DOI: 10.1016/j.vrih.2024.04.001
Lai WEI, Menghan HU

Deep learning has been extensively applied to medical image segmentation, resulting in significant advancements in the field of deep neural networks for medical image segmentation since the notable success of U-Net in 2015. However, the application of deep learning models to ocular medical image segmentation poses unique challenges, especially compared to other body parts, due to the complexity, small size, and blurriness of such images, coupled with the scarcity of data. This article aims to provide a comprehensive review of medical image segmentation from two perspectives: the development of deep network structures and the application of segmentation in ocular imaging. Initially, the article introduces an overview of medical imaging, data processing, and performance evaluation metrics. Subsequently, it analyzes recent developments in U-Net-based network structures. Finally, for the segmentation of ocular medical images, the application of deep learning is reviewed and categorized by the type of ocular tissue.

Citations: 0
Automatic detection of breast lesions in automated 3D breast ultrasound with cross-organ transfer learning
Q1 Computer Science Pub Date: 2024-06-01 DOI: 10.1016/j.vrih.2024.02.001
Lingyun BAO, Zhengrui HUANG, Zehui LIN, Yue SUN, Hui CHEN, You LI, Zhang LI, Xiaochen YUAN, Lin XU, Tao TAN

Background

Deep convolutional neural networks have garnered considerable attention in numerous machine learning applications, particularly in visual recognition tasks such as image and video analyses. There is a growing interest in applying this technology to diverse applications in medical image analysis. Automated three-dimensional Breast Ultrasound is a vital tool for detecting breast cancer, and computer-assisted diagnosis software, developed based on deep learning, can effectively assist radiologists in diagnosis. However, the network model is prone to overfitting during training, owing to challenges such as insufficient training data. This study attempts to solve the problem caused by small datasets and improve model detection performance.

Methods

We propose a breast cancer detection framework based on deep learning (a transfer learning method based on cross-organ cancer detection) and a contrastive learning method based on the Breast Imaging Reporting and Data System (BI-RADS).
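A minimal sketch of the cross-organ transfer idea: initialize from a network trained on another organ's cancer-detection task, swap the head, and fine-tune. The checkpoint name, backbone, and freezing policy are hypothetical placeholders, not the authors' setup.

```python
# Minimal sketch: cross-organ transfer learning for breast-lesion detection.
import torch
import torch.nn as nn
from torchvision.models import resnet50

model = resnet50(weights=None)
# Hypothetical checkpoint trained on another organ's cancer-detection task.
state = torch.load("other_organ_detector.pth", map_location="cpu")
model.load_state_dict(state, strict=False)      # reuse compatible backbone weights

model.fc = nn.Linear(model.fc.in_features, 2)   # new head: lesion vs. normal

# Freeze early stages so the small breast dataset only adapts high-level layers.
for name, p in model.named_parameters():
    if name.startswith(("conv1", "bn1", "layer1", "layer2")):
        p.requires_grad = False
```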

Results

When using cross-organ transfer learning and BI-RADS-based contrastive learning, the average sensitivity of the model increased by a maximum of 16.05%.

Conclusion

Our experiments demonstrate that the parameters and experience of cross-organ cancer detection can be mutually referenced, and that contrastive learning based on BI-RADS can improve the detection performance of the model.

Citations: 0
Combining machine and deep transfer learning for mediastinal lymph node evaluation in patients with lung cancer
Q1 Computer Science Pub Date: 2024-06-01 DOI: 10.1016/j.vrih.2023.08.002
Hui XIE, Jianfang ZHANG, Lijuan DING, Tao TAN, Qing LI

Background

The prognosis and survival of patients with lung cancer are likely to deteriorate with metastasis. Using deep learning in the detection of lymph node metastasis can facilitate the noninvasive calculation of the likelihood of such metastasis, thereby providing clinicians with crucial information to enhance diagnostic precision and ultimately improve patient survival and prognosis.

Methods

In total, 623 eligible patients were recruited from two medical institutions. Seven deep learning models, namely Alex, GoogLeNet, Resnet18, Resnet101, Vgg16, Vgg19, and MobileNetv3 (small), were utilized to extract deep image histological features. The dimensionality of the extracted features was then reduced using the Spearman correlation coefficient (r ≥ 0.9) and Least Absolute Shrinkage and Selection Operator. Eleven machine learning methods, namely Support Vector Machine, K-nearest neighbor, Random Forest, Extra Trees, XGBoost, LightGBM, Naive Bayes, AdaBoost, Gradient Boosting Decision Tree, Linear Regression, and Multilayer Perceptron, were employed to construct classification prediction models for the filtered final features. The diagnostic performances of the models were assessed using various metrics, including accuracy, area under the receiver operating characteristic curve, sensitivity, specificity, positive predictive value, and negative predictive value. Calibration and decision-curve analyses were also performed.
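The feature pipeline described above can be sketched with scipy and scikit-learn: drop one feature from any pair with Spearman |r| ≥ 0.9, then let LASSO keep the features with nonzero coefficients. The threshold matches the text; the helper itself is an illustrative assumption, not the authors' code.

```python
# Minimal sketch: Spearman redundancy filter followed by LASSO selection.
import numpy as np
from scipy.stats import spearmanr
from sklearn.linear_model import LassoCV

def select_features(X, y, r_thresh=0.9):
    """X: (n_samples, n_features) deep features; y: labels. Returns column ids."""
    rho = np.abs(spearmanr(X)[0])                 # (F, F) correlation matrix
    keep = []
    for j in range(X.shape[1]):
        if all(rho[j, k] < r_thresh for k in keep):
            keep.append(j)                        # not redundant with kept ones
    lasso = LassoCV(cv=5).fit(X[:, keep], y)
    return np.array(keep)[lasso.coef_ != 0]       # nonzero-coefficient features

# X[:, select_features(X, y)] then feeds the eleven classifiers compared above.
```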

Results

The present study demonstrated that using deep radiomic features extracted from Vgg16, in conjunction with a prediction model constructed via a linear regression algorithm, effectively distinguished the status of mediastinal lymph nodes in patients with lung cancer. The performance of the model was evaluated based on various metrics, including accuracy, area under the receiver operating characteristic curve, sensitivity, specificity, positive predictive value, and negative predictive value, which yielded values of 0.808, 0.834, 0.851, 0.745, 0.829, and 0.776, respectively. The validation set of the model was assessed using clinical decision curves, calibration curves, and confusion matrices, which collectively demonstrated the model's stability and accuracy.

Conclusion

In this study, information on the deep radiomics of Vgg16 was obtained from computed tomography images, and the linear regression method was able to accurately diagnose mediastinal lymph node metastases in patients with lung cancer.

Citations: 0
Intelligent diagnosis of atrial septal defect in children using echocardiography with deep learning
Q1 Computer Science Pub Date: 2024-06-01 DOI: 10.1016/j.vrih.2023.05.002
Yiman LIU, Size HOU, Xiaoxiang HAN, Tongtong LIANG, Menghan HU, Xin WANG, Wei GU, Yuqi ZHANG, Qingli LI, Jiangang CHEN

Background

Atrial septal defect (ASD) is one of the most common congenital heart diseases. The diagnosis of ASD via transthoracic echocardiography is subjective and time-consuming.

Methods

The objective of this study was to evaluate the feasibility and accuracy of automatic detection of ASD in children based on color Doppler echocardiographic static images using end-to-end convolutional neural networks. The proposed depthwise separable convolution model identifies ASDs with static color Doppler images in a standard view. Among the standard views, we selected two echocardiographic views, i.e., the subcostal sagittal view of the atrium septum and the low parasternal four-chamber view. The developed ASD detection system was validated using a training set consisting of 396 echocardiographic images corresponding to 198 cases. Additionally, an independent test dataset of 112 images corresponding to 56 cases was used, including 101 cases with ASDs and 153 cases with normal hearts.
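The depthwise separable convolution at the core of the described model factorizes a standard convolution into a per-channel spatial filter plus a 1x1 pointwise mix, which sharply reduces parameters. A minimal sketch follows; channel sizes and block composition are illustrative, not the paper's exact architecture.

```python
# Minimal sketch: depthwise separable convolution block.
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        # Depthwise: one 3x3 filter per input channel (groups=in_ch).
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, stride=stride,
                                   padding=1, groups=in_ch, bias=False)
        # Pointwise: 1x1 convolution mixes information across channels.
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))
```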

Results

The average area under the receiver operating characteristic curve, recall, precision, specificity, F1-score, and accuracy of the proposed ASD detection model were 91.99, 80.00, 82.22, 87.50, 79.57, and 83.04, respectively.

Conclusions

The proposed model can accurately and automatically identify ASD, providing a strong foundation for the intelligent diagnosis of congenital heart diseases.

Citations: 0