
Latest publications in Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention

Development of Effective Connectome from Infancy to Adolescence.
Guoshi Li, Kim-Han Thung, Hoyt Taylor, Zhengwang Wu, Gang Li, Li Wang, Weili Lin, Sahar Ahmad, Pew-Thian Yap

Delineating the normative developmental profile of the functional connectome is important both for standardized assessment of individual growth and for early detection of disease. However, the functional connectome has mostly been studied through functional connectivity (FC), where undirected connectivity strengths are estimated from the statistical correlation of resting-state functional MRI (rs-fMRI) signals. To address this limitation, we applied regression dynamic causal modeling (rDCM) to delineate the developmental trajectories of effective connectivity (EC), the directed causal influence among neuronal populations, in whole-brain networks from infancy to adolescence (0-22 years old), based on high-quality rs-fMRI data from the Baby Connectome Project (BCP) and the Human Connectome Project Development (HCP-D). Analysis with a linear mixed model demonstrates a significant age effect on mean nodal EC, best fit by a U-shaped quadratic curve with minimal EC at around 2 years of age. Further analysis indicates that five brain regions (the left and right cuneus, left precuneus, left supramarginal gyrus, and right inferior temporal gyrus) show the most significant age effect on nodal EC (p < 0.05, FDR corrected). Moreover, the frontoparietal control (FPC) network shows the fastest increase from early childhood to adolescence, followed by the visual and salience networks. Our findings suggest a complex nonlinear developmental profile of EC from infancy to adolescence, which may reflect dynamic structural and functional maturation during this critical growth period.
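To make the statistical model concrete, here is a minimal sketch of fitting a quadratic age effect with a per-subject random intercept using statsmodels. The column names, the simulated data, and the random-intercept structure are illustrative assumptions, not the authors' analysis code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated longitudinal data standing in for the BCP/HCP-D measurements.
rng = np.random.default_rng(0)
n_subjects, n_visits = 100, 3
age = rng.uniform(0, 22, size=n_subjects * n_visits)
subject = np.repeat(np.arange(n_subjects), n_visits)
# U-shaped ground truth with a minimum near 2 years, plus noise.
ec = 0.02 * (age - 2.0) ** 2 + rng.normal(0, 0.1, size=age.size)
df = pd.DataFrame({"mean_nodal_ec": ec, "age": age, "subject": subject})

# Quadratic fixed effect of age with a random intercept per subject.
fit = smf.mixedlm("mean_nodal_ec ~ age + I(age**2)", df, groups=df["subject"]).fit()

# Vertex of the fitted parabola: the age at which mean nodal EC is minimal.
b1, b2 = fit.params.iloc[1], fit.params.iloc[2]  # age and age^2 coefficients
print("estimated age of minimal EC:", -b1 / (2.0 * b2))
```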

{"title":"Development of Effective Connectome from Infancy to Adolescence.","authors":"Guoshi Li, Kim-Han Thung, Hoyt Taylor, Zhengwang Wu, Gang Li, Li Wang, Weili Lin, Sahar Ahmad, Pew-Thian Yap","doi":"10.1007/978-3-031-72384-1_13","DOIUrl":"10.1007/978-3-031-72384-1_13","url":null,"abstract":"<p><p>Delineating the normative developmental profile of functional connectome is important for both standardized assessment of individual growth and early detection of diseases. However, functional connectome has been mostly studied using functional connectivity (FC), where undirected connectivity strengths are estimated from statistical correlation of resting-state functional MRI (rs-fMRI) signals. To address this limitation, we applied regression dynamic causal modeling (rDCM) to delineate the developmental trajectories of effective connectivity (EC), the directed causal influence among neuronal populations, in whole-brain networks from infancy to adolescence (0-22 years old) based on high-quality rs-fMRI data from Baby Connectome Project (BCP) and Human Connectome Project Development (HCP-D). Analysis with linear mixed model demonstrates significant age effect on the mean nodal EC which is best fit by a \"U\" shaped quadratic curve with minimal EC at around 2 years old. Further analysis indicates that five brain regions including the left and right cuneus, left precuneus, left supramarginal gyrus and right inferior temporal gyrus have the most significant age effect on nodal EC (<i>p</i> < 0.05, FDR corrected). Moreover, the frontoparietal control (FPC) network shows the fastest increase from early childhood to adolescence followed by the visual and salience networks. Our findings suggest complex nonlinear developmental profile of EC from infancy to adolescence, which may reflect dynamic structural and functional maturation during this critical growth period.</p>","PeriodicalId":94280,"journal":{"name":"Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention","volume":"15003 ","pages":"131-140"},"PeriodicalIF":0.0,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11758277/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143049390","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Feature Extraction for Generative Medical Imaging Evaluation: New Evidence Against an Evolving Trend.
McKell Woodland, Austin Castelo, Mais Al Taie, Jessica Albuquerque Marques Silva, Mohamed Eltaher, Frank Mohn, Alexander Shieh, Suprateek Kundu, Joshua P Yung, Ankit B Patel, Kristy K Brock

Fréchet Inception Distance (FID) is a widely used metric for assessing synthetic image quality. It relies on an ImageNet-based feature extractor, making its applicability to medical imaging unclear. A recent trend is to adapt FID to medical imaging through feature extractors trained on medical images. Our study challenges this practice by demonstrating that ImageNet-based extractors are more consistent and better aligned with human judgment than their RadImageNet counterparts. We evaluated sixteen StyleGAN2 networks across four medical imaging modalities and four data augmentation techniques, with Fréchet distances (FDs) computed using eleven ImageNet- or RadImageNet-trained feature extractors. Comparison with human judgment via visual Turing tests revealed that ImageNet-based extractors produced rankings consistent with human judgment, with the FD derived from the ImageNet-trained SwAV extractor correlating significantly with expert evaluations. In contrast, RadImageNet-based rankings were volatile and inconsistent with human judgment. Our findings challenge prevailing assumptions, providing novel evidence that medical image-trained feature extractors do not inherently improve FDs and can even compromise their reliability. Our code is available at https://github.com/mckellwoodland/fid-med-eval.
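For readers unfamiliar with the metric, the Fréchet distance used here compares two feature distributions under a Gaussian assumption. Below is a minimal NumPy/SciPy sketch; the feature arrays are placeholders for embeddings produced by any of the ImageNet- or RadImageNet-trained extractors mentioned above.

```python
import numpy as np
from scipy import linalg

def frechet_distance(feats_a: np.ndarray, feats_b: np.ndarray) -> float:
    """Fréchet distance between Gaussians fit to two (n_samples, dim) feature sets."""
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    covmean = linalg.sqrtm(cov_a @ cov_b)   # matrix square root of the product
    if np.iscomplexobj(covmean):            # discard tiny imaginary residue
        covmean = covmean.real
    diff = mu_a - mu_b
    return float(diff @ diff + np.trace(cov_a + cov_b - 2.0 * covmean))
```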

{"title":"Feature Extraction for Generative Medical Imaging Evaluation: New Evidence Against an Evolving Trend.","authors":"McKell Woodland, Austin Castelo, Mais Al Taie, Jessica Albuquerque Marques Silva, Mohamed Eltaher, Frank Mohn, Alexander Shieh, Suprateek Kundu, Joshua P Yung, Ankit B Patel, Kristy K Brock","doi":"10.1007/978-3-031-72390-2_9","DOIUrl":"10.1007/978-3-031-72390-2_9","url":null,"abstract":"<p><p>Fréchet Inception Distance (FID) is a widely used metric for assessing synthetic image quality. It relies on an ImageNet-based feature extractor, making its applicability to medical imaging unclear. A recent trend is to adapt FID to medical imaging through feature extractors trained on medical images. Our study challenges this practice by demonstrating that ImageNet-based extractors are more consistent and aligned with human judgment than their RadImageNet counterparts. We evaluated sixteen StyleGAN2 networks across four medical imaging modalities and four data augmentation techniques with Fréchet distances (FDs) computed using eleven ImageNet or RadImageNet-trained feature extractors. Comparison with human judgment via visual Turing tests revealed that ImageNet-based extractors produced rankings consistent with human judgment, with the FD derived from the ImageNet-trained SwAV extractor significantly correlating with expert evaluations. In contrast, RadImageNet-based rankings were volatile and inconsistent with human judgment. Our findings challenge prevailing assumptions, providing novel evidence that medical image-trained feature extractors do not inherently improve FDs and can even compromise their reliability. Our code is available at https://github.com/mckellwoodland/fid-med-eval.</p>","PeriodicalId":94280,"journal":{"name":"Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention","volume":"15012 ","pages":"87-97"},"PeriodicalIF":0.0,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12117514/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144183435","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Rethinking Histology Slide Digitization Workflows for Low-Resource Settings.
Talat Zehra, Joseph Marino, Wendy Wang, Grigoriy Frantsuzov, Saad Nadeem

Histology slide digitization is becoming essential for telepathology (remote consultation), knowledge sharing (education), and the use of state-of-the-art artificial intelligence algorithms (augmented/automated end-to-end clinical workflows). However, the cumulative costs of digital multi-slide high-speed brightfield scanners, cloud/on-premises storage, and personnel (IT and technicians) put current slide digitization workflows out of reach for limited-resource settings, further widening the health equity gap; even single-slide manual-scanning commercial solutions are costly due to hardware requirements (high-resolution cameras, a high-spec PC/workstation, and support for only high-end microscopes). In this work, we present a new cloud slide digitization workflow for creating scanner-quality whole-slide images (WSIs) from uploaded low-quality videos acquired with inexpensive microscopes that have built-in cameras. Specifically, we present a pipeline that creates stitched WSIs while automatically deblurring out-of-focus regions, upsampling input 10X images to 40X resolution, and reducing brightness/contrast and light-source illumination variations. We demonstrate the efficacy of WSI creation with our workflow on a World Health Organization-declared neglected tropical disease, Cutaneous Leishmaniasis (prevalent only in the poorest regions of the world and diagnosed only by sub-specialist dermatopathologists, who are rare in poor countries), as well as on other common pathologies in core biopsies of breast, liver, duodenum, stomach, and lymph node. The code and pretrained models will be accessible via our GitHub (https://github.com/nadeemlab/DeepLIIF), and the cloud platform will be available at https://deepliif.org for uploading microscope videos and downloading/viewing WSIs with shareable links (no sign-in required) for telepathology and knowledge sharing.
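The pipeline components described above (deblurring, 10X-to-40X upsampling, illumination correction) are learned models; as a rough illustration of what the illumination-correction step must accomplish, the conventional baseline below applies CLAHE to the lightness channel of each video frame. This is an assumed stand-in for exposition, not the authors' method.

```python
import cv2
import numpy as np

def normalize_illumination(frame_bgr: np.ndarray) -> np.ndarray:
    """Even out brightness/contrast in a microscope frame via CLAHE on L of LAB."""
    lab = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    lab = cv2.merge((clahe.apply(l), a, b))
    return cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)
```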

{"title":"Rethinking Histology Slide Digitization Workflows for Low-Resource Settings.","authors":"Talat Zehra, Joseph Marino, Wendy Wang, Grigoriy Frantsuzov, Saad Nadeem","doi":"10.1007/978-3-031-72083-3_40","DOIUrl":"10.1007/978-3-031-72083-3_40","url":null,"abstract":"<p><p>Histology slide digitization is becoming essential for telepathology (remote consultation), knowledge sharing (education), and using the state-of-the-art artificial intelligence algorithms (augmented/automated end-to-end clinical workflows). However, the cumulative costs of digital multi-slide high-speed brightfield scanners, cloud/on-premises storage, and personnel (IT and technicians) make the current slide digitization workflows out-of-reach for limited-resource settings, further widening the health equity gap; even single-slide manual scanning commercial solutions are costly due to hardware requirements (high-resolution cameras, high-spec PC/workstation, and support for only high-end microscopes). In this work, we present a new cloud slide digitization workflow for creating scanner-quality whole-slide images (WSIs) from uploaded low-quality videos, acquired from cheap and inexpensive microscopes with built-in cameras. Specifically, we present a pipeline to create stitched WSIs while automatically deblurring out-of-focus regions, upsampling input 10X images to 40X resolution, and reducing brightness/contrast and light-source illumination variations. We demonstrate the WSI creation efficacy from our workflow on World Health Organization-declared neglected tropical disease, Cutaneous Leishmaniasis (prevalent only in the poorest regions of the world and only diagnosed by sub-specialist dermatopathologists, rare in poor countries), as well as other common pathologies on core biopsies of breast, liver, duodenum, stomach and lymph node. The code and pretrained models will be accessible via our GitHub (https://github.com/nadeemlab/DeepLIIF), and the cloud platform will be available at https://deepliif.org for uploading microscope videos and downloading/viewing WSIs with shareable links (no sign-in required) for telepathology and knowledge sharing.</p>","PeriodicalId":94280,"journal":{"name":"Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention","volume":"15004 ","pages":"427-436"},"PeriodicalIF":0.0,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11786607/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143082977","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Surface-based and Shape-informed U-fiber Atlasing for Robust Superficial White Matter Connectivity Analysis.
Yuan Li, Xinyu Nie, Jianwei Zhang, Yonggang Shi

Superficial white matter (SWM) U-fibers carry considerable structural connectivity in the human brain; however, related studies are underdeveloped compared with those of the well-studied deep white matter (DWM). Conventionally, SWM U-fibers are obtained through DWM tracking, which is inaccurate near the cortical surface. The significant variability in the cortical folding patterns of the human brain renders a conventional template-based atlas unsuitable for accurately mapping U-fibers within the thin layer of SWM beneath the cortical surface. Recently, new surface-based tracking methods have been developed to reconstruct more complete and reliable U-fibers. To leverage surface-based U-fiber tracking methods, we propose to create a surface-based U-fiber dictionary using high-resolution diffusion MRI (dMRI) data from the Human Connectome Project (HCP). We first identify the major U-fiber bundles and then build a dictionary containing subjects with high groupwise consistency of major U-fiber bundles. Finally, we propose a shape-informed U-fiber atlasing method for robust SWM connectivity analysis. Through experiments, we demonstrate that our shape-informed atlasing method obtains anatomically more accurate U-fiber representations than a state-of-the-art atlas. Additionally, our method is capable of restoring incomplete U-fibers in low-resolution dMRI, thus helping better characterize SWM connectivity in clinical studies such as the Alzheimer's Disease Neuroimaging Initiative (ADNI).

{"title":"Surface-based and Shape-informed U-fiber Atlasing for Robust Superficial White Matter Connectivity Analysis.","authors":"Yuan Li, Xinyu Nie, Jianwei Zhang, Yonggang Shi","doi":"10.1007/978-3-031-72069-7_40","DOIUrl":"10.1007/978-3-031-72069-7_40","url":null,"abstract":"<p><p>Superficial white matter (SWM) U-fibers contain considerable structural connectivity in the human brain; however, related studies are not well-developed compared to the well-studied deep white matter (DWM). Conventionally, SWM U-fiber is obtained through DWM tracking, which is inaccurate on the cortical surface. The significant variability in the cortical folding patterns of the human brain renders a conventional template-based atlas unsuitable for accurately mapping U-fibers within the thin layer of SWM beneath the cortical surface. Recently, new surface-based tracking methods have been developed to reconstruct more complete and reliable U-fibers. To leverage surface-based U-fiber tracking methods, we propose to create a surface-based U-fiber dictionary using high-resolution diffusion MRI (dMRI) data from the Human Connectome Project (HCP). We first identify the major U-fiber bundles and then build a dictionary containing subjects with high groupwise consistency of major U-fiber bundles. Finally, we propose a shape-informed U-fiber atlasing method for robust SWM connectivity analysis. Through experiments, we demonstrate that our shape-informed atlasing method can obtain anatomically more accurate U-fiber representations than state-of-the-art atlas. Additionally, our method is capable of restoring incomplete U-fibers in low-resolution dMRI, thus helping better characterize SWM connectivity in clinical studies such as the Alzheimer's Disease Neuroimaging Initiative (ADNI).</p>","PeriodicalId":94280,"journal":{"name":"Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention","volume":"15002 ","pages":"422-432"},"PeriodicalIF":0.0,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12448713/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145115740","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Spatial Diffusion for Cell Layout Generation.
Chen Li, Xiaoling Hu, Shahira Abousamra, Meilong Xu, Chao Chen

Generative models, such as GANs and diffusion models, have been used to augment training sets and boost performance on different tasks. We focus instead on generative models for cell detection, i.e., locating and classifying cells in given pathology images. One important piece of information that has been largely overlooked is the spatial pattern of the cells. In this paper, we propose a spatial-pattern-guided generative model for cell layout generation. Specifically, we propose a novel diffusion model that is guided by spatial features and generates realistic cell layouts. We explore different density models as spatial features for the diffusion model. In downstream tasks, we show that the generated cell layouts can be used to guide the generation of high-quality pathology images. Augmenting with these images can significantly boost the performance of SOTA cell detection methods. The code is available at https://github.com/superlc1995/Diffusion-cell.
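As a concrete example of the "density models as spatial features" mentioned above, the sketch below rasterizes a Gaussian kernel-density map from annotated cell coordinates; the grid size and the use of SciPy's KDE are illustrative assumptions, not the paper's exact feature.

```python
import numpy as np
from scipy.stats import gaussian_kde

def cell_density_map(cell_xy: np.ndarray, size: int = 256) -> np.ndarray:
    """Rasterize a Gaussian KDE over (n_cells, 2) coordinates to a (size, size) map."""
    kde = gaussian_kde(cell_xy.T)                      # expects (dim, n_samples)
    xs = np.arange(size, dtype=float)
    gx, gy = np.meshgrid(xs, xs)
    density = kde(np.vstack([gx.ravel(), gy.ravel()])).reshape(size, size)
    return density / density.max()                     # normalize to [0, 1]
```

A map like this could then be supplied as a conditioning channel when training the layout-generating diffusion model.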

{"title":"Spatial Diffusion for Cell Layout Generation.","authors":"Chen Li, Xiaoling Hu, Shahira Abousamra, Meilong Xu, Chao Chen","doi":"10.1007/978-3-031-72083-3_45","DOIUrl":"10.1007/978-3-031-72083-3_45","url":null,"abstract":"<p><p>Generative models, such as GANs and diffusion models, have been used to augment training sets and boost performances in different tasks. We focus on generative models for cell detection instead, i.e., locating and classifying cells in given pathology images. One important information that has been largely overlooked is the spatial patterns of the cells. In this paper, we propose a spatial-pattern-guided generative model for cell layout generation. Specifically, a novel diffusion model guided by spatial features and generates realistic cell layouts has been proposed. We explore different density models as spatial features for the diffusion model. In downstream tasks, we show that the generated cell layouts can be used to guide the generation of high-quality pathology images. Augmenting with these images can significantly boost the performance of SOTA cell detection methods. The code is available at https://github.com/superlc1995/Diffusion-cell.</p>","PeriodicalId":94280,"journal":{"name":"Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention","volume":"15004 ","pages":"481-491"},"PeriodicalIF":0.0,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12206494/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144532224","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Interpretable Spatio-Temporal Embedding for Brain Structural-Effective Network with Ordinary Differential Equation.
Haoteng Tang, Guodong Liu, Siyuan Dai, Kai Ye, Kun Zhao, Wenlu Wang, Carl Yang, Lifang He, Alex Leow, Paul Thompson, Heng Huang, Liang Zhan

The MRI-derived brain network serves as a pivotal instrument in elucidating both the structural and functional aspects of the brain, encompassing the ramifications of diseases and developmental processes. However, prevailing methodologies, often focusing on synchronous BOLD signals from functional MRI (fMRI), may not capture directional influences among brain regions and rarely tackle temporal functional dynamics. In this study, we first construct the brain-effective network via the dynamic causal model. Subsequently, we introduce an interpretable graph learning framework termed Spatio-Temporal Embedding ODE (STE-ODE). This framework incorporates specifically designed directed node embedding layers, aiming to capture the dynamic interplay between structural and effective networks via an ordinary differential equation (ODE) model that characterizes spatio-temporal brain dynamics. Our framework is validated on several clinical phenotype prediction tasks using two independent publicly available datasets (HCP and OASIS). The experimental results clearly demonstrate the advantages of our model compared with several state-of-the-art methods.
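The ODE-based embedding idea can be sketched with the generic neural-ODE pattern below, using the torchdiffeq package. The dynamics network, embedding dimension, and region count are placeholders; this is a sketch of the pattern, not the authors' STE-ODE implementation.

```python
import torch
import torch.nn as nn
from torchdiffeq import odeint  # pip install torchdiffeq

class NodeDynamics(nn.Module):
    """Toy dynamics over node embeddings; a stand-in for the learned
    structural/effective coupling described in the paper."""
    def __init__(self, dim: int = 32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.Tanh(), nn.Linear(dim, dim))

    def forward(self, t, z):          # odeint calls func(t, state)
        return self.net(z)

z0 = torch.randn(90, 32)              # initial embeddings for 90 brain regions
t = torch.linspace(0.0, 1.0, 10)      # time points spanning the fMRI sequence
trajectory = odeint(NodeDynamics(), z0, t)   # -> (10, 90, 32) embeddings over time
```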

{"title":"Interpretable Spatio-Temporal Embedding for Brain Structural-Effective Network with Ordinary Differential Equation.","authors":"Haoteng Tang, Guodong Liu, Siyuan Dai, Kai Ye, Kun Zhao, Wenlu Wang, Carl Yang, Lifang He, Alex Leow, Paul Thompson, Heng Huang, Liang Zhan","doi":"10.1007/978-3-031-72069-7_22","DOIUrl":"10.1007/978-3-031-72069-7_22","url":null,"abstract":"<p><p>The MRI-derived brain network serves as a pivotal instrument in elucidating both the structural and functional aspects of the brain, encompassing the ramifications of diseases and developmental processes. However, prevailing methodologies, often focusing on synchronous BOLD signals from functional MRI (fMRI), may not capture directional influences among brain regions and rarely tackle temporal functional dynamics. In this study, we first construct the brain-effective network via the dynamic causal model. Subsequently, we introduce an interpretable graph learning framework termed Spatio-Temporal Embedding ODE (STE-ODE). This framework incorporates specifically designed directed node embedding layers, aiming at capturing the dynamic inter-play between structural and effective networks via an ordinary differential equation (ODE) model, which characterizes spatial-temporal brain dynamics. Our framework is validated on several clinical phenotype prediction tasks using two independent publicly available datasets (HCP and OASIS). The experimental results clearly demonstrate the advantages of our model compared to several state-of-the-art methods.</p>","PeriodicalId":94280,"journal":{"name":"Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention","volume":"15002 ","pages":"227-237"},"PeriodicalIF":0.0,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11513182/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142515737","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Hallucination Index: An Image Quality Metric for Generative Reconstruction Models.
Matthew Tivnan, Siyeop Yoon, Zhennong Chen, Xiang Li, Dufan Wu, Quanzheng Li

Generative image reconstruction algorithms such as measurement-conditioned diffusion models are increasingly popular in the field of medical imaging. These powerful models can transform low signal-to-noise ratio (SNR) inputs into outputs with the appearance of high SNR. However, the outputs can exhibit a new type of error called hallucination. In medical imaging, these hallucinations may not be obvious to a radiologist but could cause diagnostic errors. Generally, hallucination refers to errors in the estimation of object structure caused by a machine learning model, but there is no widely accepted method to evaluate hallucination magnitude. In this work, we propose a new image quality metric called the hallucination index. Our approach is to compute the Hellinger distance from the distribution of reconstructed images to a zero-hallucination reference distribution. To evaluate our approach, we conducted a numerical experiment with electron microscopy images, simulated noisy measurements, and applied diffusion-based reconstructions. We sampled the measurements and the generative reconstructions repeatedly to compute the sample mean and covariance. For the zero-hallucination reference, we used the forward diffusion process applied to the ground truth. Our results show that higher measurement SNR leads to a lower hallucination index for the same apparent image quality. We also evaluated the impact of early stopping in the reverse diffusion process and found that more modest denoising strengths can reduce hallucination. We believe this metric could be useful for evaluating generative image reconstructions or as a warning label to inform radiologists about the degree of hallucination in medical images.
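Under the Gaussian approximation implied by the sample mean and covariance above, the Hellinger distance has a closed form via the Bhattacharyya coefficient. A sketch, assuming the reconstruction and reference distributions are summarized by their Gaussian statistics:

```python
import numpy as np

def hellinger_gaussian(mu1, cov1, mu2, cov2) -> float:
    """Closed-form Hellinger distance between two multivariate Gaussians.

    (mu1, cov1): statistics of the generative reconstructions;
    (mu2, cov2): statistics of the zero-hallucination reference.
    """
    cov_bar = 0.5 * (np.asarray(cov1) + np.asarray(cov2))
    _, ld1 = np.linalg.slogdet(cov1)     # log-determinants for stability
    _, ld2 = np.linalg.slogdet(cov2)
    _, ldb = np.linalg.slogdet(cov_bar)
    diff = np.asarray(mu1) - np.asarray(mu2)
    log_bc = (0.25 * (ld1 + ld2) - 0.5 * ldb
              - 0.125 * diff @ np.linalg.solve(cov_bar, diff))
    h_sq = 1.0 - np.exp(log_bc)          # squared Hellinger distance in [0, 1]
    return float(np.sqrt(max(h_sq, 0.0)))
```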

{"title":"Hallucination Index: An Image Quality Metric for Generative Reconstruction Models.","authors":"Matthew Tivnan, Siyeop Yoon, Zhennong Chen, Xiang Li, Dufan Wu, Quanzheng Li","doi":"10.1007/978-3-031-72117-5_42","DOIUrl":"10.1007/978-3-031-72117-5_42","url":null,"abstract":"<p><p>Generative image reconstruction algorithms such as measurement conditioned diffusion models are increasingly popular in the field of medical imaging. These powerful models can transform low signal-to-noise ratio (SNR) inputs into outputs with the appearance of high SNR. However, the outputs can have a new type of error called hallucinations. In medical imaging, these hallucinations may not be obvious to a Radiologist but could cause diagnostic errors. Generally, hallucination refers to error in estimation of object structure caused by a machine learning model, but there is no widely accepted method to evaluate hallucination magnitude. In this work, we propose a new image quality metric called the hallucination index. Our approach is to compute the Hellinger distance from the distribution of reconstructed images to a zero hallucination reference distribution. To evaluate our approach, we conducted a numerical experiment with electron microscopy images, simulated noisy measurements, and applied diffusion based reconstructions. We sampled the measurements and the generative reconstructions repeatedly to compute the sample mean and covariance. For the zero hallucination reference, we used the forward diffusion process applied to ground truth. Our results show that higher measurement SNR leads to lower hallucination index for the same apparent image quality. We also evaluated the impact of early stopping in the reverse diffusion process and found that more modest denoising strengths can reduce hallucination. We believe this metric could be useful for evaluation of generative image reconstructions or as a warning label to inform radiologists about the degree of hallucinations in medical images.</p>","PeriodicalId":94280,"journal":{"name":"Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention","volume":"15010 ","pages":"449-458"},"PeriodicalIF":0.0,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11956116/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143757111","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
An approach to building foundation models for brain image analysis.
Davood Karimi

Existing machine learning methods for brain image analysis are mostly based on supervised training. They require large labeled datasets, which can be costly or impossible to obtain. Moreover, the trained models are useful only for the narrow task defined by the labels. In this work, we developed a new method, based on the concept of foundation models, to overcome these limitations. Our model is an attention-based neural network trained with a novel self-supervised approach. Specifically, the model is trained to generate brain images in a patch-wise manner, thereby learning the brain's structure. To facilitate learning of image details, we propose a new method that encodes high-frequency information using convolutional kernels with random weights. We trained our model on a pool of 10 public datasets. We then applied the model to five independent datasets to perform segmentation, lesion detection, denoising, and brain age estimation. Results showed that the foundation model achieved competitive or better results on all tasks while significantly reducing the required amount of labeled training data. Our method enables leveraging large unlabeled neuroimaging datasets to effectively address diverse brain image analysis tasks and to reduce the time and cost of acquiring labels.
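A minimal sketch of the random-weight convolutional encoding idea follows; the kernel size, channel count, and 3D input shape are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class RandomHighFreqEncoder(nn.Module):
    """Fixed, randomly initialized conv kernels acting as a high-frequency
    feature bank; frozen so training never smooths the filters away.
    Channel count and kernel size are illustrative assumptions."""
    def __init__(self, out_channels: int = 16):
        super().__init__()
        self.conv = nn.Conv3d(1, out_channels, kernel_size=3, padding=1, bias=False)
        for p in self.conv.parameters():
            p.requires_grad = False      # weights stay random: never trained

    def forward(self, x):                # x: (batch, 1, D, H, W) brain volume
        return self.conv(x)

features = RandomHighFreqEncoder()(torch.randn(2, 1, 64, 64, 64))  # (2, 16, 64, 64, 64)
```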

{"title":"An approach to building foundation models for brain image analysis.","authors":"Davood Karimi","doi":"10.1007/978-3-031-72390-2_40","DOIUrl":"https://doi.org/10.1007/978-3-031-72390-2_40","url":null,"abstract":"<p><p>Existing machine learning methods for brain image analysis are mostly based on supervised training. They require large labeled datasets, which can be costly or impossible to obtain. Moreover, the trained models are useful only for the narrow task defined by the labels. In this work, we developed a new method, based on the concept of foundation models, to overcome these limitations. Our model is an attention-based neural network that is trained using a novel self-supervised approach. Specifically, the model is trained to generate brain images in a patch-wise manner, thereby learning the brain structure. To facilitate learning of image details, we propose a new method that encodes high-frequency information using convolutional kernels with random weights. We trained our model on a pool of 10 public datasets. We then applied the model on five independent datasets to perform segmentation, lesion detection, denoising, and brain age estimation. Results showed that the foundation model achieved competitive or better results on all tasks, while significantly reducing the required amount of labeled training data. Our method enables leveraging large unlabeled neuroimaging datasets to effectively address diverse brain image analysis tasks and reduce the time and cost requirements of acquiring labels.</p>","PeriodicalId":94280,"journal":{"name":"Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention","volume":"15012 ","pages":"421-431"},"PeriodicalIF":0.0,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12033034/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144048319","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Vessel-aware aneurysm detection using multi-scale deformable 3D attention.
Alberto M Ceballos-Arroyo, Hieu T Nguyen, Fangrui Zhu, Shrikanth M Yadav, Jisoo Kim, Lei Qin, Geoffrey Young, Huaizu Jiang

Manual detection of intracranial aneurysms (IAs) in computed tomography (CT) scans is a complex, time-consuming task even for expert clinicians, and automating the process is no less challenging. Critical difficulties in detecting aneurysms include their small (yet varied) size relative to the scan and a high potential for false-positive (FP) predictions. To address these issues, we propose a 3D, multi-scale neural architecture that detects aneurysms via a deformable attention mechanism operating on vessel distance maps derived from vessel segmentations and on 3D features extracted from the layers of a convolutional network. Likewise, we reformulate aneurysm segmentation as bounding-cuboid prediction using binary cross-entropy and three localization losses (location, size, IoU). Given three validation sets comprising 152/138/38 CT scans and containing 126/101/58 aneurysms, we achieved sensitivities of 91.3%/97.0%/74.1% at FP rates of 0.53/0.56/0.87, with sensitivity around 80% on small aneurysms. Manual inspection of outputs by experts showed that our model only tends to miss aneurysms located in unusual locations. Code and model weights are available online.
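The IoU term in the localization loss extends directly from 2D boxes to axis-aligned cuboids. A sketch, assuming boxes encoded as (x1, y1, z1, x2, y2, z2); the encoding is an assumption for illustration:

```python
import numpy as np

def cuboid_iou(box_a: np.ndarray, box_b: np.ndarray) -> float:
    """IoU of two axis-aligned cuboids encoded as (x1, y1, z1, x2, y2, z2)."""
    lo = np.maximum(box_a[:3], box_b[:3])            # intersection lower corner
    hi = np.minimum(box_a[3:], box_b[3:])            # intersection upper corner
    inter = np.prod(np.clip(hi - lo, 0.0, None))     # zero if boxes are disjoint
    vol_a = np.prod(box_a[3:] - box_a[:3])
    vol_b = np.prod(box_b[3:] - box_b[:3])
    return float(inter / (vol_a + vol_b - inter))
```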

{"title":"Vessel-aware aneurysm detection using multi-scale deformable 3D attention.","authors":"Alberto M Ceballos-Arroyo, Hieu T Nguyen, Fangrui Zhu, Shrikanth M Yadav, Jisoo Kim, Lei Qin, Geoffrey Young, Huaizu Jiang","doi":"10.1007/978-3-031-72086-4_71","DOIUrl":"https://doi.org/10.1007/978-3-031-72086-4_71","url":null,"abstract":"<p><p>Manual detection of intracranial aneurysms (IAs) in computed tomography (CT) scans is a complex, time-consuming task even for expert clinicians, and automating the process is no less challenging. Critical difficulties associated with detecting aneurysms include their small (yet varied) size compared to scans and a high potential for false positive (FP) predictions. To address these issues, we propose a 3D, multi-scale neural architecture that detects aneurysms via a deformable attention mechanism that operates on vessel distance maps derived from vessel segmentations and 3D features extracted from the layers of a convolutional network. Likewise, we reformulate aneurysm segmentation as bounding cuboid prediction using binary cross entropy and three localization losses (location, size, IoU). Given three validation sets comprised of 152/138/38 CT scans and containing 126/101/58 aneurysms, we achieved a Sensitivity of 91.3%/97.0%/74.1% @ FP rates 0.53/0.56/0.87, with Sensitivity around 80% on small aneurysms. Manual inspection of outputs by experts showed our model only tends to miss aneurysms located in unusual locations. Code and model weights are available online.</p>","PeriodicalId":94280,"journal":{"name":"Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention","volume":"15005 ","pages":"754-765"},"PeriodicalIF":0.0,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11986933/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144013943","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
FastSAM3D: An Efficient Segment Anything Model for 3D Volumetric Medical Images.
Yiqing Shen, Jingxing Li, Xinyuan Shao, Blanca Inigo Romillo, Ankush Jindal, David Dreizin, Mathias Unberath

Segment anything models (SAMs) are gaining attention for their zero-shot generalization capability in segmenting objects of unseen classes and in unseen domains when properly prompted. Interactivity is a key strength of SAMs, allowing users to iteratively provide prompts that specify objects of interest and refine the outputs. However, to realize interactive use of SAMs for 3D medical imaging tasks, rapid inference times are necessary. High memory requirements and long processing delays remain constraints that hinder the adoption of SAMs for this purpose. Specifically, while 2D SAMs applied to 3D volumes contend with repetitive computation to process all slices independently, 3D SAMs suffer from an exponential increase in model parameters and FLOPS. To address these challenges, we present FastSAM3D, which accelerates SAM inference to 8 milliseconds per 128 × 128 × 128 3D volumetric image on an NVIDIA A100 GPU. This speedup is accomplished through 1) a novel layer-wise progressive distillation scheme that enables knowledge transfer from a complex 12-layer ViT-B to a lightweight 6-layer ViT-Tiny variant encoder without training from scratch; and 2) a novel 3D sparse flash attention that replaces vanilla attention operators, substantially reducing memory needs and improving parallelization. Experiments on three diverse datasets reveal that FastSAM3D achieves a remarkable speedup of 527.38× over 2D SAMs and 8.75× over 3D SAMs on the same volumes without significant performance decline. Thus, FastSAM3D opens the door to low-cost, truly interactive SAM-based 3D medical image segmentation with commonly used GPU hardware. Code is available at https://github.com/arcadelab/FastSAM3D.
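A rough sketch of the layer-wise alignment behind such distillation: each of the student's 6 layers is matched to every second layer of the 12-layer teacher. The uniform layer mapping and MSE objective are assumptions for illustration; the paper's progressive schedule is not reproduced here.

```python
import torch
import torch.nn.functional as F

def layerwise_distill_loss(student_states, teacher_states):
    """MSE between each student layer and every second teacher layer.

    student_states: 6 hidden-state tensors from a ViT-Tiny-style encoder.
    teacher_states: 12 hidden-state tensors from a ViT-B teacher.
    Assumes matching hidden sizes (e.g., via a projection head).
    """
    assert len(teacher_states) == 2 * len(student_states)
    loss = torch.zeros(())
    for i, s in enumerate(student_states):
        t = teacher_states[2 * i + 1].detach()   # teacher provides no gradients
        loss = loss + F.mse_loss(s, t)
    return loss / len(student_states)
```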

{"title":"FastSAM3D: An Efficient Segment Anything Model for 3D Volumetric Medical Images.","authors":"Yiqing Shen, Jingxing Li, Xinyuan Shao, Blanca Inigo Romillo, Ankush Jindal, David Dreizin, Mathias Unberath","doi":"10.1007/978-3-031-72390-2_51","DOIUrl":"10.1007/978-3-031-72390-2_51","url":null,"abstract":"<p><p>Segment anything models (SAMs) are gaining attention for their zero-shot generalization capability in segmenting objects of unseen classes and in unseen domains when properly prompted. Interactivity is a key strength of SAMs, allowing users to iteratively provide prompts that specify objects of interest to refine outputs. However, to realize the interactive use of SAMs for 3D medical imaging tasks, rapid inference times are necessary. High memory requirements and long processing delays remain constraints that hinder the adoption of SAMs for this purpose. Specifically, while 2D SAMs applied to 3D volumes contend with repetitive computation to process all slices independently, 3D SAMs suffer from an exponential increase in model parameters and FLOPS. To address these challenges, we present FastSAM3D which accelerates SAM inference to 8 milliseconds per 128 × 128 × 128 3D volumetric image on an NVIDIA A100 GPU. This speedup is accomplished through 1) a novel layer-wise progressive distillation scheme that enables knowledge transfer from a complex 12-layer ViT-B to a lightweight 6-layer ViT-Tiny variant encoder without training from scratch; and 2) a novel 3D sparse flash attention to replace vanilla attention operators, substantially reducing memory needs and improving parallelization. Experiments on three diverse datasets reveal that FastSAM3D achieves a remarkable speedup of 527.38× compared to 2D SAMs and 8.75× compared to 3D SAMs on the same volumes without significant performance decline. Thus, FastSAM3D opens the door for low-cost truly interactive SAM-based 3D medical imaging segmentation with commonly used GPU hardware. Code is available at https://github.com/arcadelab/FastSAM3D.</p>","PeriodicalId":94280,"journal":{"name":"Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention","volume":"15012 ","pages":"542-552"},"PeriodicalIF":0.0,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12377522/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144984624","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0