
Latest articles in Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention

Towards Accurate Microstructure Estimation via 3D Hybrid Graph Transformer.
Junqing Yang, Haotian Jiang, Tewodros Tassew, Peng Sun, Jiquan Ma, Yong Xia, Pew-Thian Yap, Geng Chen

Deep learning has drawn increasing attention in microstructure estimation with undersampled diffusion MRI (dMRI) data. A representative method is the hybrid graph transformer (HGT), which achieves promising performance by integrating q-space graph learning and x-space transformer learning into a unified framework. However, this method overlooks 3D spatial information as it relies on training with 2D slices. To address this limitation, we propose the 3D hybrid graph transformer (3D-HGT), an advanced microstructure estimation model capable of making full use of both 3D spatial information and angular information. To tackle the large computational burden associated with 3D x-space learning, we propose an efficient q-space learning model based on simplified graph neural networks. Furthermore, we propose a 3D x-space learning module based on the transformer. Extensive experiments on data from the Human Connectome Project show that our 3D-HGT outperforms state-of-the-art methods, including HGT, in both quantitative and qualitative evaluations.
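Simplified graph neural networks of the kind used for q-space learning drop per-layer nonlinearities: features are propagated a few hops over the gradient-direction graph with a normalized adjacency, then passed through a single linear map. A minimal NumPy sketch of such a simplified graph convolution (the toy ring adjacency, feature sizes, and variable names are illustrative, not the paper's architecture):

```python
import numpy as np

def normalized_adjacency(A):
    """Symmetrically normalize an adjacency matrix with self-loops:
    A_hat = D^{-1/2} (A + I) D^{-1/2}."""
    A = A + np.eye(A.shape[0])
    d = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A @ D_inv_sqrt

def simplified_gcn(A, X, W, k=2):
    """Simplified graph convolution (SGC-style): propagate features k hops
    with the normalized adjacency, then apply one linear map."""
    A_hat = normalized_adjacency(A)
    H = X
    for _ in range(k):
        H = A_hat @ H
    return H @ W

# Toy q-space graph: 4 gradient directions with ring connectivity.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
X = np.random.randn(4, 8)   # per-direction dMRI signal features
W = np.random.randn(8, 3)   # maps to 3 microstructure-related outputs
out = simplified_gcn(A, X, W)
print(out.shape)  # (4, 3)
```

Precomputing the k-hop propagation once makes this much cheaper than stacking full GNN layers, which is the efficiency argument for the q-space branch.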

Citations: 0
Dynamic Functional Connectome Harmonics.
Hoyt Patrick Taylor, Pew-Thian Yap

Functional connectivity (FC) "gradients" enable investigation of connection topography in relation to cognitive hierarchy, and yield the primary axes along which FC is organized. In this work, we employ a variant of the "gradient" approach wherein we solve for the normal modes of FC, yielding functional connectome harmonics. Until now, research in this vein has considered only static FC, neglecting the possibility that the principal axes of FC may depend on the timescale at which they are computed. Recent work suggests that momentary activation patterns, or brain states, mediate the dominant components of functional connectivity, suggesting that the principal axes may be invariant to changes in timescale. In light of this, we compute functional connectome harmonics using time windows of varying lengths and demonstrate that they are stable across timescales. Our connectome harmonics correspond to meaningful brain states. The activation strength of the brain states, as well as their inter-relationships, are found to be reproducible for individuals. Further, we utilize our time-varying functional connectome harmonics to formulate a simple and elegant method for computing cortical flexibility at vertex resolution and demonstrate qualitative similarity between flexibility maps from our method and those from a standard method in the literature.
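Solving for the normal modes of FC amounts to an eigendecomposition of a graph Laplacian built from the FC matrix, with the low-frequency eigenvectors serving as the harmonics. A minimal sketch under that assumption (the clipping of negative correlations and the toy time series are illustrative choices, not the paper's exact preprocessing):

```python
import numpy as np

def connectome_harmonics(FC, n_modes=3):
    """Compute functional connectome harmonics as the low-frequency
    eigenvectors of the graph Laplacian built from an FC matrix."""
    W = np.clip(FC, 0, None)          # keep positive connectivity weights
    np.fill_diagonal(W, 0)            # no self-connections
    L = np.diag(W.sum(axis=1)) - W    # combinatorial graph Laplacian
    evals, evecs = np.linalg.eigh(L)  # eigenvalues in ascending order
    return evals[:n_modes], evecs[:, :n_modes]

rng = np.random.default_rng(0)
ts = rng.standard_normal((200, 6))    # 200 time points, 6 regions (toy data)
FC = np.corrcoef(ts, rowvar=False)    # FC matrix for one time window
evals, harmonics = connectome_harmonics(FC)
```

Repeating this over sliding windows of different lengths and comparing the resulting eigenvectors is the natural way to probe the timescale stability the abstract describes.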

Citations: 0
SurfFlow: A Flow-Based Approach for Rapid and Accurate Cortical Surface Reconstruction from Infant Brain MRI.
Xiaoyang Chen, Junjie Zhao, Siyuan Liu, Sahar Ahmad, Pew-Thian Yap

The infant brain undergoes rapid changes in volume, shape, and structural organization during the first postnatal year. Accurate cortical surface reconstruction (CSR) is essential for understanding rapid changes in cortical morphometry during early brain development. However, existing CSR methods, designed for adult brain MRI, fall short in reconstructing cortical surfaces from infant MRI, owing to poor tissue contrast, partial volume effects, and rapid changes in cortical folding patterns. In light of these challenges, we introduce an infant-centric CSR method. Our method, SurfFlow, utilizes three seamlessly connected deformation blocks to sequentially deform an initial template mesh to target cortical surfaces. Remarkably, our method can rapidly reconstruct a high-resolution cortical surface mesh with 360k vertices in approximately one second. Performance evaluation based on an MRI dataset of infants 0 to 12 months of age indicates that SurfFlow significantly reduces geometric errors and substantially improves mesh regularity compared with state-of-the-art deep learning approaches.
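The idea of chaining deformation blocks can be illustrated with a toy composition in which each block moves the template vertices partway toward the target surface; in SurfFlow the displacement fields are predicted by networks, so everything below is a schematic stand-in, not the model itself:

```python
import numpy as np

def deformation_block(vertices, target, step=0.5):
    """One schematic deformation block: move each vertex a fraction of the
    way toward its (here, index-matched) target position. In the real model
    the displacement field is predicted by a network from image features."""
    return vertices + step * (target - vertices)

rng = np.random.default_rng(1)
template = rng.standard_normal((100, 3))           # initial template vertices
target = template + rng.standard_normal((100, 3))  # toy target surface

surface = template
for _ in range(3):                                 # three chained blocks
    surface = deformation_block(surface, target)

# With step=0.5, three blocks shrink the residual by (1 - 0.5)^3 = 1/8.
```

The point of the composition is that each block only needs to model a modest deformation, which keeps the per-block mapping simple while the chain covers large template-to-surface changes.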

Citations: 0
Image2SSM: Reimagining Statistical Shape Models from Images with Radial Basis Functions.
Hong Xu, Shireen Y Elhabian

Statistical shape modeling (SSM) is an essential tool for analyzing variations in anatomical morphology. In a typical SSM pipeline, 3D anatomical images, after segmentation and rigid registration, are represented using lower-dimensional shape features, on which statistical analysis can be performed. Various methods for constructing compact shape representations have been proposed, but they involve laborious and costly steps. We propose Image2SSM, a novel deep-learning-based approach for SSM that leverages image-segmentation pairs to learn a radial-basis-function (RBF)-based representation of shapes directly from images. This RBF-based shape representation offers a rich self-supervised signal for the network to estimate a continuous, yet compact representation of the underlying surface that can adapt to complex geometries in a data-driven manner. Image2SSM can characterize populations of biological structures of interest by constructing statistical landmark-based shape models of ensembles of anatomical shapes while requiring minimal parameter tuning and no user assistance. Once trained, Image2SSM can be used to infer low-dimensional shape representations from new unsegmented images, paving the way toward scalable approaches for SSM, especially when dealing with large cohorts. Experiments on synthetic and real datasets show the efficacy of the proposed method compared to the state-of-the-art correspondence-based method for SSM.
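An RBF-based shape representation expresses a surface-related scalar field as a weighted sum of kernels centered at control points, with the weights obtained by solving a linear interpolation system. A minimal Gaussian-RBF sketch (the kernel choice, bandwidth, and signed-distance toy data are assumptions, not the paper's exact formulation):

```python
import numpy as np

def rbf_fit(centers, values, eps=1.0):
    """Fit Gaussian RBF weights so that f(x) = sum_i w_i * exp(-(eps*|x-c_i|)^2)
    interpolates the given values at the control points."""
    d = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
    Phi = np.exp(-(eps * d) ** 2)          # kernel (Gram) matrix
    return np.linalg.solve(Phi, values)

def rbf_eval(x, centers, weights, eps=1.0):
    """Evaluate the fitted RBF field at query points x."""
    d = np.linalg.norm(x[:, None, :] - centers[None, :, :], axis=-1)
    return np.exp(-(eps * d) ** 2) @ weights

rng = np.random.default_rng(2)
centers = rng.standard_normal((20, 3))   # surface control points (toy)
sdf = rng.standard_normal(20)            # e.g. signed-distance samples (toy)
w = rbf_fit(centers, sdf)
recon = rbf_eval(centers, centers, w)    # reproduces sdf at the centers
```

Because the fitted field is continuous, it can be queried anywhere between control points, which is what makes the representation a useful self-supervised target for a network predicting surfaces from images.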

Citations: 0
LSOR: Longitudinally-Consistent Self-Organized Representation Learning.
Jiahong Ouyang, Qingyu Zhao, Ehsan Adeli, Wei Peng, Greg Zaharchuk, Kilian M Pohl

Interpretability is a key issue when applying deep learning models to longitudinal brain MRIs. One way to address this issue is by visualizing the high-dimensional latent spaces generated by deep learning via self-organizing maps (SOM). A SOM separates the latent space into clusters and then maps the cluster centers to a discrete (typically 2D) grid, preserving the high-dimensional relationships between clusters. However, learning a SOM in a high-dimensional latent space tends to be unstable, especially in a self-supervised setting. Furthermore, the learned SOM grid does not necessarily capture clinically interesting information, such as brain age. To resolve these issues, we propose the first self-supervised SOM approach that derives a high-dimensional, interpretable representation stratified by brain age solely based on longitudinal brain MRIs (i.e., without demographic or cognitive information). Called Longitudinally-consistent Self-Organized Representation learning (LSOR), the method is stable during training as it relies on soft clustering (vs. the hard cluster assignments used by existing SOMs). Furthermore, our approach generates a latent space stratified according to brain age by aligning trajectories inferred from longitudinal MRIs to the reference vector associated with the corresponding SOM cluster. When applied to longitudinal MRIs of the Alzheimer's Disease Neuroimaging Initiative (ADNI, N=632), LSOR generates an interpretable latent space and achieves comparable or higher accuracy than state-of-the-art representations with respect to the downstream tasks of classification (static vs. progressive mild cognitive impairment) and regression (determining the ADAS-Cog score of all subjects). The code is available at https://github.com/ouyangjiahong/longitudinal-som-single-modality.
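The soft clustering that stabilizes training can be written as a softmax over negative squared distances between latent codes and SOM prototypes, with prototypes nudged toward their soft cluster means. This is a generic sketch of the idea, not LSOR's actual training objective:

```python
import numpy as np

def soft_assign(z, prototypes, tau=1.0):
    """Soft cluster assignment: softmax over negative squared distances
    between latent vectors z and SOM prototypes (vs. a hard argmin)."""
    d2 = ((z[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)
    logits = -d2 / tau
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    p = np.exp(logits)
    return p / p.sum(axis=1, keepdims=True)

def soft_update(z, prototypes, tau=1.0, lr=0.1):
    """Move each prototype toward the soft mean of the latents it claims."""
    p = soft_assign(z, prototypes, tau)           # (N, K) responsibilities
    target = (p.T @ z) / p.sum(axis=0)[:, None]   # soft cluster means
    return prototypes + lr * (target - prototypes)

rng = np.random.default_rng(3)
z = rng.standard_normal((50, 4))      # latent codes from an encoder (toy)
protos = rng.standard_normal((6, 4))  # 6 SOM grid prototypes (toy)
p = soft_assign(z, protos)
protos_new = soft_update(z, protos)
```

Because every latent contributes a graded amount to every prototype, gradients are smooth in the assignment, which is the stability advantage over hard assignments.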

Citations: 0
Pelphix: Surgical Phase Recognition from X-ray Images in Percutaneous Pelvic Fixation.
Benjamin D Killeen, Han Zhang, Jan Mangulabnan, Mehran Armand, Russell H Taylor, Greg Osgood, Mathias Unberath

Surgical phase recognition (SPR) is a crucial element in the digital transformation of the modern operating theater. While SPR based on video sources is well-established, the incorporation of interventional X-ray sequences has not yet been explored. This paper presents Pelphix, a first approach to SPR for X-ray-guided percutaneous pelvic fracture fixation, which models the procedure at four levels of granularity - corridor, activity, view, and frame value - simulating the pelvic fracture fixation workflow as a Markov process to provide fully annotated training data. Using added supervision from detection of bony corridors, tools, and anatomy, we learn image representations that are fed into a transformer model to regress surgical phases at the four granularity levels. Our approach demonstrates the feasibility of X-ray-based SPR, achieving an average accuracy of 99.2% on simulated sequences and 71.7% on cadaver data across all granularity levels, with up to 84% accuracy for the target corridor in real data. This work constitutes the first step toward SPR for the X-ray domain, establishing an approach to categorizing phases in X-ray-guided surgery, simulating realistic image sequences to enable machine learning model development, and demonstrating that this approach is feasible for the analysis of real procedures. As X-ray-based SPR continues to mature, it will benefit procedures in orthopedic surgery, angiography, and interventional radiology by equipping intelligent surgical systems with situational awareness in the operating room.
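Simulating a workflow as a Markov process reduces to sampling phase sequences from a transition matrix; each sampled state comes with its label for free, which is how simulation yields fully annotated training data. A minimal sketch with hypothetical phase names and transition probabilities (Pelphix's actual state space spans corridor, activity, view, and frame value):

```python
import numpy as np

def sample_phase_sequence(T, phases, start=0, length=8, rng=None):
    """Sample a surgical phase sequence from a Markov chain with
    transition matrix T (rows sum to 1)."""
    rng = rng or np.random.default_rng()
    seq, state = [phases[start]], start
    for _ in range(length - 1):
        state = rng.choice(len(phases), p=T[state])
        seq.append(phases[state])
    return seq

# Toy 3-phase workflow; names and probabilities are made up for illustration.
phases = ["position_wire", "insert_wire", "verify_view"]
T = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.1, 0.3, 0.6]])
seq = sample_phase_sequence(T, phases, length=10, rng=np.random.default_rng(4))
```

Pairing each sampled state with a rendered X-ray frame would then give (image, phase) training pairs without manual annotation.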

Citations: 0
CTFlow: Mitigating Effects of Computed Tomography Acquisition and Reconstruction with Normalizing Flows.
Leihao Wei, Anil Yadav, William Hsu

Mitigating the effects of image appearance due to variations in computed tomography (CT) acquisition and reconstruction parameters is a challenging inverse problem. We present CTFlow, a normalizing-flows-based method for harmonizing CT scans acquired and reconstructed using different doses and kernels to a target scan. Unlike existing state-of-the-art image harmonization approaches that only generate a single output, flow-based methods learn the explicit conditional density and output the entire spectrum of plausible reconstructions, reflecting the underlying uncertainty of the problem. We demonstrate how normalizing flows reduce variability in image quality and in the performance of a machine learning algorithm for lung nodule detection. We evaluate the performance of CTFlow by 1) comparing it with other techniques on a denoising task using the AAPM-Mayo Clinical Low-Dose CT Grand Challenge dataset, and 2) demonstrating consistency in nodule detection performance across 186 real-world low-dose CT chest scans acquired at our institution. CTFlow performs better in the denoising task on both peak signal-to-noise ratio and perceptual quality metrics. Moreover, CTFlow produces more consistent predictions across all dose and kernel conditions than generative adversarial network (GAN)-based image harmonization on a lung nodule detection task. The code is available at https://github.com/hsu-lab/ctflow.
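The building block of many normalizing flows is an invertible coupling layer: half of the input conditions a scale and shift applied to the other half, so the map is exactly invertible with a triangular Jacobian. A minimal RealNVP-style affine coupling sketch in NumPy (CTFlow's actual conditional architecture is not reproduced here; weights and shapes are toy values):

```python
import numpy as np

class AffineCoupling:
    """Minimal affine coupling layer: the first half of the input produces a
    scale/shift for the second half, giving an exactly invertible map."""
    def __init__(self, dim, rng):
        half = dim // 2
        self.W_s = 0.1 * rng.standard_normal((half, dim - half))
        self.W_t = 0.1 * rng.standard_normal((half, dim - half))

    def forward(self, x):
        x1, x2 = x[:, : x.shape[1] // 2], x[:, x.shape[1] // 2 :]
        s, t = np.tanh(x1 @ self.W_s), x1 @ self.W_t
        return np.concatenate([x1, x2 * np.exp(s) + t], axis=1)

    def inverse(self, y):
        y1, y2 = y[:, : y.shape[1] // 2], y[:, y.shape[1] // 2 :]
        s, t = np.tanh(y1 @ self.W_s), y1 @ self.W_t
        return np.concatenate([y1, (y2 - t) * np.exp(-s)], axis=1)

rng = np.random.default_rng(5)
layer = AffineCoupling(4, rng)
x = rng.standard_normal((8, 4))
y = layer.forward(x)
x_rec = layer.inverse(y)   # exact reconstruction of x
```

Exact invertibility is what lets a flow evaluate the explicit conditional density and sample the whole spectrum of plausible reconstructions, rather than a single output.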

Citations: 0
Implicit Anatomical Rendering for Medical Image Segmentation with Stochastic Experts.
Chenyu You, Weicheng Dai, Yifei Min, Lawrence Staib, James S Duncan

Integrating high-level semantically correlated contents and low-level anatomical features is of central importance in medical image segmentation. Towards this end, recent deep learning-based medical segmentation methods have shown great promise in better modeling such information. However, convolution operators for medical segmentation typically operate on regular grids, which inherently blur the high-frequency regions, i.e., boundary regions. In this work, we propose MORSE, a generic implicit neural rendering framework designed at an anatomical level to assist learning in medical image segmentation. Our method is motivated by the fact that implicit neural representations have been shown to be more effective in fitting complex signals and solving computer graphics problems than discrete grid-based representations. The core of our approach is to formulate medical image segmentation as a rendering problem in an end-to-end manner. Specifically, we continuously align the coarse segmentation prediction with the ambiguous coordinate-based point representations and aggregate these features to adaptively refine the boundary region. To optimize multi-scale pixel-level features in parallel, we leverage the idea of Mixture-of-Experts (MoE) to design and train our MORSE with a stochastic gating mechanism. Our experiments demonstrate that MORSE can work well with different medical segmentation backbones, consistently achieving competitive performance improvements in both 2D and 3D supervised medical segmentation methods. We also theoretically analyze the superiority of MORSE.
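Stochastic gating over a set of experts can be sketched as sampling one expert per input from a softmax gate, instead of always routing to the argmax. This generic MoE sketch does not reproduce MORSE's rendering-specific experts; all names and shapes are illustrative:

```python
import numpy as np

def stochastic_gate(x, gate_W, experts, rng, tau=1.0):
    """Stochastic Mixture-of-Experts gating: for each input, sample one
    expert from the softmax gate distribution and apply it."""
    logits = x @ gate_W / tau
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    probs = np.exp(logits)
    probs /= probs.sum(axis=1, keepdims=True)     # gate distribution (N, K)
    outputs = []
    for xi, pi in zip(x, probs):
        k = rng.choice(len(experts), p=pi)        # sampled expert index
        outputs.append(experts[k](xi))
    return np.stack(outputs), probs

rng = np.random.default_rng(6)
# Three toy linear experts mapping 5-d features to 2-d outputs.
experts = [lambda v, W=W: v @ W for W in rng.standard_normal((3, 5, 2))]
gate_W = rng.standard_normal((5, 3))
x = rng.standard_normal((10, 5))
out, probs = stochastic_gate(x, gate_W, experts, rng)
```

Sampling (rather than hard argmax routing) keeps all experts receiving gradient signal during training; at inference one would typically switch to the expected or most probable expert.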

{"title":"Implicit Anatomical Rendering for Medical Image Segmentation with Stochastic Experts.","authors":"Chenyu You, Weicheng Dai, Yifei Min, Lawrence Staib, James S Duncan","doi":"10.1007/978-3-031-43898-1_54","DOIUrl":"10.1007/978-3-031-43898-1_54","url":null,"abstract":"<p><p>Integrating high-level semantically correlated contents and low-level anatomical features is of central importance in medical image segmentation. Towards this end, recent deep learning-based medical segmentation methods have shown great promise in better modeling such information. However, convolution operators for medical segmentation typically operate on regular grids, which inherently blur the high-frequency regions, <i>i.e</i>., boundary regions. In this work, we propose MORSE, a generic implicit neural rendering framework designed at an anatomical level to assist learning in medical image segmentation. Our method is motivated by the fact that implicit neural representation has been shown to be more effective in fitting complex signals and solving computer graphics problems than discrete grid-based representation. The core of our approach is to formulate medical image segmentation as a rendering problem in an end-to-end manner. Specifically, we continuously align the coarse segmentation prediction with the ambiguous coordinate-based point representations and aggregate these features to adaptively refine the boundary region. To parallelly optimize multi-scale pixel-level features, we leverage the idea from Mixture-of-Expert (MoE) to design and train our MORSE with a stochastic gating mechanism. Our experiments demonstrate that MORSE can work well with different medical segmentation backbones, consistently achieving competitive performance improvements in both 2D and 3D supervised medical segmentation methods. We also theoretically analyze the superiority of MORSE.</p>","PeriodicalId":94280,"journal":{"name":"Medical image computing and computer-assisted intervention : MICCAI ... 
International Conference on Medical Image Computing and Computer-Assisted Intervention","volume":"14222 ","pages":"561-571"},"PeriodicalIF":0.0,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11151725/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141262863","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
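MORSE's stochastic gating over experts can be illustrated with a toy Mixture-of-Experts head operating on per-point features. This is a hedged sketch, not the paper's implementation: the linear experts, softmax gate, and sample-one-expert training mode are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    z = x - x.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

class StochasticMoE:
    """Point-wise mixture of linear experts with an optionally stochastic gate."""

    def __init__(self, dim_in, dim_out, n_experts, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(scale=0.1, size=(n_experts, dim_in, dim_out))  # experts
        self.G = rng.normal(scale=0.1, size=(dim_in, n_experts))           # gate
        self.rng = rng

    def __call__(self, x, stochastic=False):
        # x: (n_points, dim_in) features, e.g. sampled near ambiguous boundaries.
        gate = softmax(x @ self.G)                        # (n_points, n_experts)
        if stochastic:
            # Training-time stochastic gating: sample one expert per point.
            choice = np.array([self.rng.choice(gate.shape[1], p=p) for p in gate])
            gate = np.eye(gate.shape[1])[choice]          # one-hot gate
        expert_out = np.einsum('nd,edo->neo', x, self.W)  # every expert's output
        return np.einsum('ne,neo->no', gate, expert_out)  # gate-weighted mix
```

At inference the soft gate averages experts; during training the sampled one-hot gate forces each expert to specialize, which is the role the stochastic gating mechanism plays in the abstract.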
Can point cloud networks learn statistical shape models of anatomies?
Jadie Adams, Shireen Elhabian

Statistical Shape Modeling (SSM) is a valuable tool for investigating and quantifying anatomical variations within populations of anatomies. However, traditional correspondence-based SSM generation methods have a prohibitively expensive inference process and require complete geometric proxies (e.g., high-resolution binary volumes or surface meshes) as input shapes to construct the SSM. Unordered 3D point cloud representations of shapes are more easily acquired from various medical imaging practices (e.g., thresholded images and surface scanning). Point cloud deep networks have recently achieved remarkable success in learning permutation-invariant features for different point cloud tasks (e.g., completion, semantic segmentation, classification). However, their application to learning SSM from point clouds is to date unexplored. In this work, we demonstrate that existing point cloud encoder-decoder-based completion networks offer untapped potential for SSM, capturing population-level statistical representations of shapes while reducing the inference burden and relaxing the input requirement. We discuss the limitations of these techniques in the SSM application and suggest future improvements. Our work paves the way for further exploration of point cloud deep learning for SSM, a promising avenue for advancing the shape analysis literature and broadening SSM to diverse use cases.

{"title":"Can point cloud networks learn statistical shape models of anatomies?","authors":"Jadie Adams, Shireen Elhabian","doi":"10.1007/978-3-031-43907-0_47","DOIUrl":"10.1007/978-3-031-43907-0_47","url":null,"abstract":"<p><p>Statistical Shape Modeling (SSM) is a valuable tool for investigating and quantifying anatomical variations within populations of anatomies. However, traditional correspondence-based SSM generation methods have a prohibitive inference process and require complete geometric proxies (e.g., high-resolution binary volumes or surface meshes) as input shapes to construct the SSM. Unordered 3D point cloud representations of shapes are more easily acquired from various medical imaging practices (e.g., thresholded images and surface scanning). Point cloud deep networks have recently achieved remarkable success in learning permutation-invariant features for different point cloud tasks (e.g., completion, semantic segmentation, classification). However, their application to learning SSM from point clouds is to-date unexplored. In this work, we demonstrate that existing point cloud encoder-decoder-based completion networks can provide an untapped potential for SSM, capturing population-level statistical representations of shapes while reducing the inference burden and relaxing the input requirement. We discuss the limitations of these techniques to the SSM application and suggest future improvements. Our work paves the way for further exploration of point cloud deep learning for SSM, a promising avenue for advancing shape analysis literature and broadening SSM to diverse use cases.</p>","PeriodicalId":94280,"journal":{"name":"Medical image computing and computer-assisted intervention : MICCAI ... 
International Conference on Medical Image Computing and Computer-Assisted Intervention","volume":"14220 ","pages":"486-496"},"PeriodicalIF":0.0,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11534086/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142577292","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
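The permutation-invariant features this abstract relies on are typically obtained PointNet-style: a shared per-point MLP followed by a symmetric pooling operator, so the global code is unchanged by reordering the input points. A minimal sketch under that assumption (the weight matrices `W1`, `W2` are placeholders, not from any particular network in the paper):

```python
import numpy as np

def pointnet_encode(points, W1, W2):
    # Shared per-point MLP (same weights applied to every point) followed by
    # max pooling, a symmetric function: the global feature vector is
    # invariant to the ordering of the input point cloud.
    h = np.maximum(points @ W1, 0.0)   # per-point features, ReLU
    h = np.maximum(h @ W2, 0.0)        # second shared layer
    return h.max(axis=0)               # permutation-invariant global code
```

In a completion network, a decoder would map this global code back to a dense point set; the paper's observation is that such codes can double as population-level shape descriptors for SSM.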
Fully Bayesian VIB-DeepSSM.
Jadie Adams, Shireen Y Elhabian

Statistical shape modeling (SSM) enables population-based quantitative analysis of anatomical shapes, informing clinical diagnosis. Deep learning approaches predict correspondence-based SSM directly from unsegmented 3D images but require calibrated uncertainty quantification, motivating Bayesian formulations. Variational information bottleneck DeepSSM (VIB-DeepSSM) is an effective, principled framework for predicting probabilistic shapes of anatomy from images with aleatoric uncertainty quantification. However, VIB is only half-Bayesian and lacks epistemic uncertainty inference. We derive a fully Bayesian VIB formulation and demonstrate the efficacy of two scalable implementation approaches: concrete dropout and batch ensemble. Additionally, we introduce a novel combination of the two that further enhances uncertainty calibration via multimodal marginalization. Experiments on synthetic shapes and left atrium data demonstrate that the fully Bayesian VIB network predicts SSM from images with improved uncertainty reasoning without sacrificing accuracy.

{"title":"Fully Bayesian VIB-DeepSSM.","authors":"Jadie Adams, Shireen Y Elhabian","doi":"10.1007/978-3-031-43898-1_34","DOIUrl":"10.1007/978-3-031-43898-1_34","url":null,"abstract":"<p><p>Statistical shape modeling (SSM) enables population-based quantitative analysis of anatomical shapes, informing clinical diagnosis. Deep learning approaches predict correspondence-based SSM directly from unsegmented 3D images but require calibrated uncertainty quantification, motivating Bayesian formulations. Variational information bottleneck DeepSSM (VIB-DeepSSM) is an effective, principled framework for predicting probabilistic shapes of anatomy from images with aleatoric uncertainty quantification. However, VIB is only half-Bayesian and lacks epistemic uncertainty inference. We derive a fully Bayesian VIB formulation and demonstrate the efficacy of two scalable implementation approaches: concrete dropout and batch ensemble. Additionally, we introduce a novel combination of the two that further enhances uncertainty calibration via multimodal marginalization. Experiments on synthetic shapes and left atrium data demonstrate that the fully Bayesian VIB network predicts SSM from images with improved uncertainty reasoning without sacrificing accuracy.</p>","PeriodicalId":94280,"journal":{"name":"Medical image computing and computer-assisted intervention : MICCAI ... 
International Conference on Medical Image Computing and Computer-Assisted Intervention","volume":"14222 ","pages":"346-356"},"PeriodicalIF":0.0,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11536909/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142585366","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
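The "multimodal marginalization" in this abstract can be illustrated by collapsing an ensemble's per-member predictive distributions into mixture moments: the average predicted variance is the aleatoric part, and the spread of the member means is the epistemic part. This sketch assumes equally weighted Gaussian members, which the paper's actual formulation need not share:

```python
import numpy as np

def marginalize_ensemble(means, variances):
    """Moments of a uniform Gaussian mixture over ensemble members.

    means, variances: (n_members, n_targets) per-member predictive moments.
    Returns the mixture mean and total variance, where
    total variance = mean of variances (aleatoric) + variance of means (epistemic).
    """
    mean = means.mean(axis=0)
    aleatoric = variances.mean(axis=0)   # average within-member uncertainty
    epistemic = means.var(axis=0)        # disagreement between members
    return mean, aleatoric + epistemic
```

When all members agree (e.g., identical concrete-dropout samples or batch-ensemble heads), the epistemic term vanishes and only the predicted aleatoric variance remains, which is the calibration behaviour the abstract targets.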