
Latest articles from IEEE Transactions on Medical Imaging

Enhancing Row-column array (RCA)-based 3D ultrasound vascular imaging with spatial-temporal similarity weighting.
Pub Date: 2024-08-06 DOI: 10.1109/TMI.2024.3439615
Jingke Zhang, Chengwu Huang, U-Wai Lok, Zhijie Dong, Hui Liu, Ping Gong, Pengfei Song, Shigao Chen

Ultrasound vascular imaging (UVI) is a valuable tool for monitoring physiological states and evaluating pathological conditions. Advancing from conventional two-dimensional (2D) to three-dimensional (3D) UVI would enhance vasculature visualization, thereby improving its reliability. The row-column array (RCA) has emerged as a promising approach for cost-effective ultrafast 3D imaging with a low channel count. However, ultrafast RCA imaging is often hampered by high-level sidelobe artifacts and a low signal-to-noise ratio (SNR), which makes RCA-based UVI challenging. In this study, we propose a spatial-temporal similarity weighting (St-SW) method to overcome these challenges by exploiting the incoherence of sidelobe artifacts and noise between datasets acquired using orthogonal transmissions. Simulation, in vitro blood flow phantom, and in vivo experiments were conducted to compare the proposed method with existing orthogonal plane wave imaging (OPW), row-column-specific frame-multiply-and-sum beamforming (RC-FMAS), and XDoppler techniques. Qualitative and quantitative results demonstrate the superior performance of the proposed method. In simulations, the proposed method reduced the sidelobe level by 31.3 dB, 20.8 dB, and 14.0 dB relative to OPW, XDoppler, and RC-FMAS, respectively. In the blood flow phantom experiment, it significantly improved the contrast-to-noise ratio (CNR) of the tube by 26.8 dB, 25.5 dB, and 19.7 dB over OPW, XDoppler, and RC-FMAS, respectively. In the human submandibular gland experiment, it not only reconstructed a more complete vasculature but also improved the CNR by more than 15 dB compared to the OPW, XDoppler, and RC-FMAS methods. In summary, the proposed method effectively suppresses sidelobe artifacts and noise in images collected using an RCA under low-SNR conditions, leading to improved visualization of 3D vasculature.
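
To make the weighting idea concrete, below is a minimal NumPy sketch (not the authors' code) of similarity-weighted compounding of two volumes acquired with orthogonal transmissions: coherent vascular signal correlates locally across the two acquisitions, while sidelobe artifacts and noise do not, so a local correlation map can down-weight the incoherent content. The window size, the clipping of negative correlations, and the averaging-based compounding rule are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def similarity_weighted_compound(vol_row, vol_col, win=5):
    """Compound two orthogonal-transmit volumes with a local-correlation weight."""
    mean_r = uniform_filter(vol_row, win)
    mean_c = uniform_filter(vol_col, win)
    cov = uniform_filter(vol_row * vol_col, win) - mean_r * mean_c
    var_r = uniform_filter(vol_row**2, win) - mean_r**2
    var_c = uniform_filter(vol_col**2, win) - mean_c**2
    ncc = cov / np.sqrt(np.clip(var_r * var_c, 1e-12, None))  # local correlation
    weight = np.clip(ncc, 0.0, 1.0)            # keep only positively correlated signal
    return weight * 0.5 * (vol_row + vol_col)  # weighted coherent compounding

# Toy usage: a shared "vessel" plus independent noise in each acquisition.
rng = np.random.default_rng(0)
vessel = np.zeros((32, 32, 32)); vessel[16, 10:22, 16] = 1.0
v1 = vessel + 0.3 * rng.standard_normal(vessel.shape)
v2 = vessel + 0.3 * rng.standard_normal(vessel.shape)
compounded = similarity_weighted_compound(v1, v2)
```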

2V-CBCT: Two-Orthogonal-Projection based CBCT Reconstruction and Dose Calculation for Radiation Therapy using Real Projection Data.
Pub Date: 2024-08-06 DOI: 10.1109/TMI.2024.3439573
Yikun Zhang, Dianlin Hu, Wangyao Li, Weijie Zhang, Gaoyu Chen, Ronald C Chen, Yang Chen, Hao Gao

This work demonstrates the feasibility of two-orthogonal-projection-based CBCT (2V-CBCT) reconstruction and dose calculation for radiation therapy (RT) using real projection data; to the best of our knowledge, this is the first 2V-CBCT feasibility study with real projection data. RT treatments are often delivered in multiple fractions, for which on-board CBCT is desirable to calculate the delivered dose per fraction for the purposes of RT delivery quality assurance and adaptive RT. However, not all RT treatments/fractions have CBCT acquired, whereas two orthogonal projections are always available. The question addressed in this work is whether 2V-CBCT is feasible for RT dose calculation. 2V-CBCT is a severely ill-posed inverse problem, for which we propose a coarse-to-fine learning strategy. First, a 3D deep neural network that can extract and exploit inter-slice and intra-slice information is adopted to predict the initial 3D volumes. Then, a 2D deep neural network is utilized to fine-tune the initial 3D volumes slice-by-slice. During the fine-tuning stage, a perceptual loss based on multi-frequency features is employed to enhance the image reconstruction. Dose calculation results from both photon and proton RT demonstrate that 2V-CBCT provides accuracy comparable to full-view CBCT based on real projection data.
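
As a rough illustration of the coarse-to-fine strategy, the PyTorch sketch below maps two orthogonal projections to an initial volume with a 3D network and then refines it slice-by-slice with a 2D network. All shapes, channel counts, and layer choices are illustrative assumptions, not the paper's architecture; the perceptual loss is omitted.

```python
import torch
import torch.nn as nn

class Coarse3D(nn.Module):
    def __init__(self, depth=32):
        super().__init__()
        # Lift the 2-channel projection pair into `depth` pseudo-slices, then mix in 3D.
        self.lift = nn.Conv2d(2, depth, kernel_size=3, padding=1)
        self.mix = nn.Conv3d(1, 1, kernel_size=3, padding=1)

    def forward(self, projections):              # (B, 2, H, W)
        vol = self.lift(projections)             # (B, D, H, W)
        return self.mix(vol.unsqueeze(1))        # (B, 1, D, H, W)

class Refine2D(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, volume):                   # (B, 1, D, H, W)
        slices = [self.net(volume[:, :, d]) for d in range(volume.shape[2])]
        return volume + torch.stack(slices, dim=2)  # residual slice-wise fine-tuning

projs = torch.randn(1, 2, 64, 64)                # two orthogonal projections
volume = Refine2D()(Coarse3D()(projs))           # (1, 1, 32, 64, 64)
```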

A Novel Poroelastography Method for High-quality Estimation of Lateral Strain, Solid Stress and Fluid Pressure In Vivo.
Pub Date: 2024-08-05 DOI: 10.1109/TMI.2024.3438564
Md Hadiur Rahman Khan, Raffaella Righetti

Assessment of the mechanical and transport properties of tissues using ultrasound elasticity imaging requires accurate estimation of the spatiotemporal distribution of volumetric strain. Due to physical constraints such as pitch limitation and the lack of phase information in the lateral direction, the quality of lateral strain estimation is typically significantly lower than that of axial strain estimation. In this paper, a novel lateral strain estimation technique based on the physics of compressible porous media is developed, tested, and validated. This technique is referred to as "Poroelastography-based Ultrasound Lateral Strain Estimation" (PULSE). PULSE differs from previously proposed lateral strain estimators in that it uses the underlying physics of internal fluid flow within a local region of the tissue as its theoretical foundation. PULSE establishes a relation between spatiotemporal changes in the axial strains and the corresponding spatiotemporal changes in the lateral strains, effectively allowing assessment of lateral strains with quality comparable to that of axial strain estimators. We demonstrate that PULSE can also be used to accurately track compression-induced solid stresses and fluid pressure in cancers using ultrasound poroelastography (USPE). In this study, we report the theoretical formulation of PULSE and its validation using finite element (FE) and ultrasound simulations. PULSE-generated results exhibit less than 5% percentage relative error (PRE) and greater than 90% structural similarity index (SSIM) compared to ground-truth simulations. Experimental results are included to qualitatively assess the performance of PULSE in vivo. The proposed method can be used to overcome the inherent limitations of non-axial strain imaging and improve the clinical translatability of USPE.
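
The two headline metrics are standard and easy to reproduce; the sketch below computes them for a pair of strain maps. The PRE definition used here (norm of the error over the norm of the ground truth, times 100) is one common convention and is an assumption; SSIM comes from scikit-image.

```python
import numpy as np
from skimage.metrics import structural_similarity

def percentage_relative_error(estimate, ground_truth):
    """PRE: error norm as a percentage of the ground-truth norm (one common definition)."""
    return 100.0 * np.linalg.norm(estimate - ground_truth) / np.linalg.norm(ground_truth)

gt = np.random.rand(128, 128)                 # stand-in ground-truth strain map
est = gt + 0.01 * np.random.randn(128, 128)   # stand-in estimated strain map
pre = percentage_relative_error(est, gt)
ssim = structural_similarity(gt, est, data_range=est.max() - est.min())
print(f"PRE = {pre:.2f}%, SSIM = {ssim:.3f}")
```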

SegMorph: Concurrent Motion Estimation and Segmentation for Cardiac MRI Sequences.
Pub Date: 2024-08-05 DOI: 10.1109/TMI.2024.3435000
Ning Bi, Arezoo Zakeri, Yan Xia, Nina Cheng, Zeike A Taylor, Alejandro F Frangi, Ali Gooya

We propose a novel recurrent variational network, SegMorph, to perform concurrent segmentation and motion estimation on cardiac cine magnetic resonance (CMR) image sequences. Our model establishes a recurrent latent space that captures spatiotemporal features from cine-MRI sequences for multitask inference and synthesis. The proposed model follows a recurrent variational auto-encoder framework and adopts a learnt prior from the temporal inputs. We utilise a multi-branch decoder to handle bi-ventricular segmentation and motion estimation simultaneously. In addition to the spatiotemporal features from the latent space, motion estimation enriches the supervision of sequential segmentation tasks by providing pseudo-ground truth. The segmentation branch, in turn, helps with motion estimation by predicting deformation vector fields (DVFs) based on anatomical information. Experimental results demonstrate that the proposed method performs better than state-of-the-art approaches, qualitatively and quantitatively, on both segmentation and motion estimation tasks. We achieved an average Dice similarity coefficient (DSC) of 81% and an average Hausdorff distance of less than 3.5 mm on segmentation. Meanwhile, we achieved a motion estimation DSC of over 79%, with approximately 0.14% of pixels displaying a negative Jacobian determinant in the estimated DVFs.
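
The negative-Jacobian statistic quoted above is a standard folding check for deformation fields; the NumPy sketch below computes it for a 2D DVF. The central-difference scheme via np.gradient and the displacement convention phi(x) = x + u(x) are illustrative assumptions.

```python
import numpy as np

def negative_jacobian_fraction(dvf):
    """dvf: (H, W, 2) displacement field; the deformation is phi(x) = x + dvf(x)."""
    dy_dy, dy_dx = np.gradient(dvf[..., 0])   # derivatives of the y-displacement
    dx_dy, dx_dx = np.gradient(dvf[..., 1])   # derivatives of the x-displacement
    # Jacobian of phi = I + grad(dvf); its 2x2 determinant per pixel:
    det = (1.0 + dy_dy) * (1.0 + dx_dx) - dy_dx * dx_dy
    return float(np.mean(det < 0.0))

dvf = 0.5 * np.random.randn(64, 64, 2)        # stand-in deformation field
print(f"negative-Jacobian pixels: {100 * negative_jacobian_fraction(dvf):.2f}%")
```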

OTMorph: Unsupervised Multi-domain Abdominal Medical Image Registration Using Neural Optimal Transport.
Pub Date: 2024-08-02 DOI: 10.1109/TMI.2024.3437295
Boah Kim, Yan Zhuang, Tejas Sudharshan Mathai, Ronald M Summers

Deformable image registration is one of the essential processes in analyzing medical images. In particular, when diagnosing abdominal diseases such as hepatic cancer and lymphoma, multi-domain images scanned with different modalities or different imaging protocols are often used. However, they are not aligned, owing to differences in scanning times, patient breathing, movement, etc. Although recent learning-based approaches can provide deformations in real time with high performance, multi-domain abdominal image registration using deep learning remains challenging, since images in different domains have different characteristics such as image contrast and intensity ranges. To address this, this paper proposes a novel unsupervised multi-domain image registration framework using neural optimal transport, dubbed OTMorph. Given moving and fixed volumes as input, a transport module of our proposed model learns the optimal transport plan to map the data distribution from the moving to the fixed volume and estimates a domain-transported volume. Subsequently, a registration module taking the transported volume can effectively estimate the deformation field, leading to improved registration performance. Experimental results on multi-domain image registration using multi-modality and multi-parametric abdominal medical images demonstrate that the proposed method provides superior deformable registration via the domain-transported image, which alleviates the domain gap between the input images. Moreover, we attain improvements even on out-of-distribution data, indicating the superior generalizability of our model for the registration of various medical images. Our source code is available at https://github.com/boahK/OTMorph.
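
To illustrate the transport-then-register pipeline, the sketch below first maps the moving image's intensity distribution onto the fixed image's and would then hand the result to a registration network. OTMorph learns this transport with a neural network; the closed-form 1D optimal transport between intensity distributions (quantile matching) used here is a deliberate stand-in, not the paper's transport module.

```python
import numpy as np

def transport_intensities(moving, fixed):
    """Closed-form 1D OT map between intensity distributions (quantile matching)."""
    flat = moving.ravel()
    order = np.argsort(flat)                  # rank of each moving-image intensity
    targets = np.sort(fixed.ravel())          # sorted fixed-image intensities
    out = np.empty_like(flat)
    out[order] = targets[np.linspace(0, targets.size - 1, flat.size).astype(int)]
    return out.reshape(moving.shape)

moving = np.random.gamma(2.0, 1.0, (64, 64))   # stand-in moving-domain image
fixed = np.random.normal(5.0, 1.0, (64, 64))   # stand-in fixed-domain image
transported = transport_intensities(moving, fixed)
# `transported` would then be fed to a deformable-registration network.
```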

Consistency-guided Differential Decoding for Enhancing Semi-supervised Medical Image Segmentation.
Pub Date: 2024-08-01 DOI: 10.1109/TMI.2024.3429340
Qingjie Zeng, Yutong Xie, Zilin Lu, Mengkang Lu, Jingfeng Zhang, Yuyin Zhou, Yong Xia

Semi-supervised learning (SSL) has proven beneficial for mitigating the issue of limited labeled data, especially in volumetric medical image segmentation. Unlike previous SSL methods, which focus on exploring highly confident pseudo-labels or developing consistency regularization schemes, our empirical findings suggest that differential decoder features emerge naturally when two decoders strive to generate consistent predictions. Based on this observation, we first analyze the value of such discrepancy in learning towards consistency, under both pseudo-labeling and consistency regularization settings, and subsequently propose a novel SSL method called LeFeD, which learns the feature-level discrepancies obtained from two decoders by feeding this information as a feedback signal to the encoder. The core design of LeFeD is to enlarge the discrepancies by training differential decoders and then learn from the differential features iteratively. We evaluate LeFeD against eight state-of-the-art (SOTA) methods on three public datasets. Experiments show that LeFeD surpasses competitors without any bells and whistles, such as uncertainty estimation or strong constraints, and sets a new state of the art for semi-supervised medical image segmentation. Code has been released at https://github.com/maxwell0027/LeFeD.
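
A minimal PyTorch sketch of the feedback loop described above: one encoder, two decoders, and a discrepancy signal fed back to the encoder as an extra input channel. Using the decoders' output discrepancy as a stand-in for the paper's feature-level discrepancies, and the concatenation-based feedback, are illustrative assumptions.

```python
import torch
import torch.nn as nn

enc = nn.Conv2d(2, 8, 3, padding=1)     # encoder sees the image plus a feedback channel
dec_a = nn.Conv2d(8, 1, 3, padding=1)   # differential decoder A
dec_b = nn.Conv2d(8, 1, 3, padding=1)   # differential decoder B (different init)

image = torch.randn(1, 1, 64, 64)
feedback = torch.zeros_like(image)      # no discrepancy signal at the first pass
for _ in range(3):                      # learn from the discrepancy iteratively
    feats = torch.relu(enc(torch.cat([image, feedback], dim=1)))
    pred_a, pred_b = dec_a(feats), dec_b(feats)
    feedback = (pred_a - pred_b).abs().detach()  # decoder discrepancy, fed back next pass
consensus = 0.5 * (pred_a + pred_b)     # final prediction averages the two decoders
```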

IGU-Aug: Information-guided unsupervised augmentation and pixel-wise contrastive learning for medical image analysis.
Pub Date: 2024-08-01 DOI: 10.1109/TMI.2024.3436713
Quan Quan, Qingsong Yao, Heqin Zhu, S Kevin Zhou

Contrastive learning (CL) is a form of self-supervised learning and has been widely used for various tasks. Unlike the widely studied instance-level contrastive learning, pixel-wise contrastive learning mainly helps with pixel-wise dense prediction tasks. In pixel-wise CL, the counterpart to an instance in instance-level CL is a pixel, along with its neighboring context. Aiming to build better feature representations, there is a vast literature on designing instance augmentation strategies for instance-level CL, but there is little comparable work on pixel augmentation for pixel-wise CL at pixel granularity. In this paper, we attempt to bridge this gap. We first classify a pixel into three categories, namely low-, medium-, and high-informative, based on the information quantity the pixel contains. We then adaptively design separate augmentation strategies for each category in terms of augmentation intensity and sampling ratio. Extensive experiments validate that our information-guided pixel augmentation strategy succeeds in encoding more discriminative representations and surpasses other competitive approaches in unsupervised local feature matching. Furthermore, our pretrained model improves the performance of both one-shot and fully supervised models. To the best of our knowledge, we are the first to propose a pixel augmentation method at pixel granularity for enhancing unsupervised pixel-wise contrastive learning. Code is available at https://github.com/Curli-quan/IGU-Aug.
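
As an illustration of the categorisation step, the sketch below scores each pixel's information quantity with a local-entropy filter and splits the image into low-, medium-, and high-informative regions. The entropy measure, neighborhood radius, tercile thresholds, and the per-category augmentation intensities are illustrative assumptions, not the paper's exact criteria.

```python
import numpy as np
from skimage.filters.rank import entropy
from skimage.morphology import disk

def pixel_informativeness(image_u8, radius=5):
    """Label each pixel 0 (low-), 1 (medium-), or 2 (high-informative)."""
    ent = entropy(image_u8, disk(radius))         # local Shannon entropy per pixel
    lo, hi = np.quantile(ent, [1 / 3, 2 / 3])     # tercile split into three categories
    return np.digitize(ent, [lo, hi])

image = (np.random.rand(128, 128) * 255).astype(np.uint8)  # stand-in grayscale image
categories = pixel_informativeness(image)
# Hypothetical per-category augmentation intensities (e.g., perturb
# low-informative regions harder than high-informative ones):
aug_intensity = {0: 0.8, 1: 0.5, 2: 0.2}
```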

Self-Supervised Medical Image Segmentation Using Deep Reinforced Adaptive Masking.
Pub Date: 2024-08-01 DOI: 10.1109/TMI.2024.3436608
Zhenghua Xu, Yunxin Liu, Gang Xu, Thomas Lukasiewicz

Self-supervised learning aims to learn transferable representations from unlabeled data for downstream tasks. Inspired by masked language modeling in natural language processing, masked image modeling (MIM) has achieved some success in computer vision, but its effectiveness on medical images remains unsatisfactory. This is mainly due to the high redundancy and small discriminative regions of medical images compared to natural images. Therefore, this paper proposes an adaptive hard masking (AHM) approach based on deep reinforcement learning to extend the application of MIM to medical images. Unlike predefined random masks, AHM uses an asynchronous advantage actor-critic (A3C) model to predict the reconstruction loss for each patch, enabling the model to learn where masking is valuable. By optimizing the non-differentiable sampling process with reinforcement learning, AHM enhances the understanding of key regions, thereby improving downstream task performance. Experimental results on two medical image datasets demonstrate that AHM outperforms state-of-the-art methods. Additional experiments under various settings validate the effectiveness of AHM in constructing masked images.
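
The sketch below illustrates loss-guided adaptive masking in the abstract's spirit: score each patch by predicted reconstruction loss and mask the highest-scoring ones for the MIM objective. Replacing the A3C agent with an untrained feed-forward scorer, as well as the patch size and mask ratio, are deliberate simplifications and assumptions.

```python
import torch
import torch.nn as nn

patch, ratio = 8, 0.4
# Stand-in scorer: predicts a per-patch reconstruction loss from raw pixels.
scorer = nn.Sequential(nn.Flatten(start_dim=2), nn.Linear(patch * patch, 1))

image = torch.randn(1, 1, 64, 64)
patches = image.unfold(2, patch, patch).unfold(3, patch, patch)  # (1, 1, 8, 8, p, p)
patches = patches.reshape(1, -1, patch, patch)                   # (1, 64, p, p)
scores = scorer(patches).squeeze(-1)                             # predicted loss per patch
k = int(ratio * scores.shape[1])
masked_idx = scores.topk(k, dim=1).indices                       # mask the "hard" patches
mask = torch.zeros(1, scores.shape[1], dtype=torch.bool)
mask.scatter_(1, masked_idx, True)                               # True = patch is masked
```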

IEEE Nuclear Science Symposium
Pub Date: 2024-08-01 DOI: 10.1109/TMI.2024.3372491
{"title":"IEEE Nuclear Science Symposium","authors":"","doi":"10.1109/TMI.2024.3372491","DOIUrl":"10.1109/TMI.2024.3372491","url":null,"abstract":"","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"43 8","pages":"3057-3057"},"PeriodicalIF":0.0,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10620001","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141877535","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Spatially-constrained and -unconstrained bi-graph interaction network for multi-organ pathology image classification.
Pub Date: 2024-07-31 DOI: 10.1109/TMI.2024.3436080
Doanh C Bui, Boram Song, Kyungeun Kim, Jin Tae Kwak

In computational pathology, graphs have been shown to be promising for pathology image analysis. Various graph structures exist that can capture differing features of pathology images. However, the combination of, and interaction between, differing graph structures have not been fully studied or utilized for pathology image analysis. In this study, we propose a parallel, bi-graph neural network, designated SCUBa-Net, equipped with both graph convolutional networks and Transformers, that processes a pathology image as two distinct graphs: a spatially-constrained graph and a spatially-unconstrained graph. For efficient and effective graph learning, we introduce two inter-graph interaction blocks and an intra-graph interaction block. The inter-graph interaction blocks learn the node-to-node interactions within each graph. The intra-graph interaction block learns the graph-to-graph interactions at both the global and local levels, with the help of virtual nodes that collect and summarize information from the entire graphs. SCUBa-Net is systematically evaluated on four multi-organ datasets, covering colorectal, prostate, gastric, and bladder cancers. The experimental results demonstrate the effectiveness of SCUBa-Net in comparison to state-of-the-art convolutional neural networks, Transformer, and graph neural networks.
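
To make the bi-graph idea concrete, the sketch below builds the two graphs from one set of patch embeddings: a spatially-constrained graph linking patches that are close in the image plane, and a spatially-unconstrained graph linking patches that are similar in feature space regardless of location. The k-NN construction and the value of k are illustrative assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def knn_edges(points, k=4):
    """Return a (2, n*k) array of (src, dst) edges linking each row to its k nearest rows."""
    _, idx = cKDTree(points).query(points, k=k + 1)  # +1: the first hit is the point itself
    src = np.repeat(np.arange(len(points)), k)
    return np.stack([src, idx[:, 1:].ravel()])

coords = np.stack(np.meshgrid(np.arange(8), np.arange(8)), -1).reshape(-1, 2)  # patch grid
feats = np.random.randn(64, 16)                   # stand-in patch embeddings

spatial_edges = knn_edges(coords.astype(float))   # spatially-constrained graph
semantic_edges = knn_edges(feats)                 # spatially-unconstrained graph
# Each edge set would drive its own GNN branch, with the interaction blocks
# exchanging information within and between the two graphs.
```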
