
Information processing in medical imaging : proceedings of the ... conference - Latest Publications

TetCNN: Convolutional Neural Networks on Tetrahedral Meshes
Pub Date : 2023-02-08 DOI: 10.48550/arXiv.2302.03830
Mohammad Farazi, Zhangsihao Yang, Wenjie Zhu, Peijie Qiu, Yalin Wang
Convolutional neural networks (CNNs) have been broadly studied on images, videos, graphs, and triangular meshes. However, they have seldom been studied on tetrahedral meshes. Given the merits of using volumetric meshes in applications like brain image analysis, we introduce a novel interpretable graph CNN framework for the tetrahedral mesh structure. Inspired by ChebyNet, our model exploits the volumetric Laplace-Beltrami operator (LBO) to define filters, rather than the commonly used graph Laplacian, which lacks the Riemannian metric information of 3D manifolds. For pooling adaptation, we introduce new objective functions for localized minimum cuts in the Graclus algorithm based on the LBO. We employ a piecewise-constant approximation scheme that uses the clustering assignment matrix to estimate the LBO on sampled meshes after each pooling. Finally, adapting the Gradient-weighted Class Activation Mapping algorithm to tetrahedral meshes, we use the obtained heatmaps to visualize discovered regions of interest as biomarkers. We demonstrate the effectiveness of our model on cortical tetrahedral meshes from patients with Alzheimer's disease, as there is scientific evidence of a correlation between cortical thickness and neurodegenerative disease progression. Our results show the superiority of our LBO-based convolution layer and adapted pooling over the conventionally used unitary cortical thickness, graph Laplacian, and point cloud representation.
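For readers unfamiliar with ChebyNet-style spectral filtering, the following is a minimal sketch of a Chebyshev-polynomial convolution layer of the kind this abstract builds on, assuming a generic rescaled Laplacian in place of the volumetric LBO estimated on the tetrahedral mesh; the class name `ChebTetConv`, the dense-matrix interface, and the toy usage are illustrative, not the authors' code.

```python
import torch
import torch.nn as nn

class ChebTetConv(nn.Module):
    """ChebyNet-style spectral convolution over a mesh operator.

    In TetCNN the operator would be a volumetric Laplace-Beltrami operator on the
    tetrahedral mesh; here a generic rescaled Laplacian (spectrum assumed in
    [-1, 1]) stands in, which is an assumption of this sketch."""

    def __init__(self, in_channels, out_channels, K):
        super().__init__()
        self.K = K  # Chebyshev polynomial order (filter support size)
        self.weight = nn.Parameter(0.01 * torch.randn(K, in_channels, out_channels))

    def forward(self, x, lap):
        # x: (num_vertices, in_channels); lap: (num_vertices, num_vertices)
        t_prev, t_curr = x, lap @ x                  # T_0(L) x and T_1(L) x
        out = t_prev @ self.weight[0]
        if self.K > 1:
            out = out + t_curr @ self.weight[1]
        for k in range(2, self.K):
            t_next = 2 * (lap @ t_curr) - t_prev     # Chebyshev recurrence
            out = out + t_next @ self.weight[k]
            t_prev, t_curr = t_curr, t_next
        return out

# Toy usage on random vertex features with a stand-in operator.
layer = ChebTetConv(in_channels=3, out_channels=8, K=3)
lap = -0.5 * torch.eye(100)                          # placeholder for a rescaled mesh LBO
features = torch.rand(100, 3)
filtered = layer(features, lap)                      # (100, 8) filtered vertex features
```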
{"title":"TetCNN: Convolutional Neural Networks on Tetrahedral Meshes","authors":"Mohammad Farazi, Zhangsihao Yang, Wenjie Zhu, Peijie Qiu, Yalin Wang","doi":"10.48550/arXiv.2302.03830","DOIUrl":"https://doi.org/10.48550/arXiv.2302.03830","url":null,"abstract":"Convolutional neural networks (CNN) have been broadly studied on images, videos, graphs, and triangular meshes. However, it has seldom been studied on tetrahedral meshes. Given the merits of using volumetric meshes in applications like brain image analysis, we introduce a novel interpretable graph CNN framework for the tetrahedral mesh structure. Inspired by ChebyNet, our model exploits the volumetric Laplace-Beltrami Operator (LBO) to define filters over commonly used graph Laplacian which lacks the Riemannian metric information of 3D manifolds. For pooling adaptation, we introduce new objective functions for localized minimum cuts in the Graclus algorithm based on the LBO. We employ a piece-wise constant approximation scheme that uses the clustering assignment matrix to estimate the LBO on sampled meshes after each pooling. Finally, adapting the Gradient-weighted Class Activation Mapping algorithm for tetrahedral meshes, we use the obtained heatmaps to visualize discovered regions-of-interest as biomarkers. We demonstrate the effectiveness of our model on cortical tetrahedral meshes from patients with Alzheimer's disease, as there is scientific evidence showing the correlation of cortical thickness to neurodegenerative disease progression. Our results show the superiority of our LBO-based convolution layer and adapted pooling over the conventionally used unitary cortical thickness, graph Laplacian, and point cloud representation.","PeriodicalId":73379,"journal":{"name":"Information processing in medical imaging : proceedings of the ... conference","volume":"19 1","pages":"303-315"},"PeriodicalIF":0.0,"publicationDate":"2023-02-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83781040","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Med-NCA: Robust and Lightweight Segmentation with Neural Cellular Automata
Pub Date : 2023-02-07 DOI: 10.48550/arXiv.2302.03473
John Kalkhof, Camila González, A. Mukhopadhyay
Access to the proper infrastructure is critical when performing medical image segmentation with deep learning. This requirement makes it difficult to run state-of-the-art segmentation models in resource-constrained scenarios such as primary care facilities in rural areas and during crises. The recently emerging field of Neural Cellular Automata (NCA) has shown that locally interacting one-cell models can achieve competitive results in tasks such as image generation or segmentation of low-resolution inputs. However, they are constrained by high VRAM requirements and the difficulty of reaching convergence for high-resolution images. To counteract these limitations, we propose Med-NCA, an end-to-end NCA training pipeline for high-resolution image segmentation. Our method follows a two-step process: global knowledge is first communicated between cells across the downscaled image, and patch-based segmentation is then performed. Our proposed Med-NCA outperforms the classic UNet by 2% and 3% Dice for hippocampus and prostate segmentation, respectively, while also being 500 times smaller. We also show that Med-NCA is by design invariant with respect to image scale, shape, and translation, experiencing only slight performance degradation even under strong shifts, and is robust against MRI acquisition artefacts. Med-NCA enables high-resolution medical image segmentation even on a Raspberry Pi B+, arguably the smallest device able to run PyTorch, which can be powered by a standard power bank.
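As a rough illustration of the cellular-automaton update that Med-NCA builds on, the sketch below implements one generic NCA step (3x3 local perception plus a stochastic per-cell update) and runs it first on a downscaled state and then at full resolution, loosely mirroring the two-step process described in the abstract; the channel counts, fire rate, update rule, and upsampling scheme are assumptions for illustration, not the Med-NCA architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NCAStep(nn.Module):
    """One generic neural-cellular-automaton update: each cell perceives its
    3x3 neighbourhood and a random subset of cells updates its state."""

    def __init__(self, channels=16, hidden=64, fire_rate=0.5):
        super().__init__()
        self.perceive = nn.Conv2d(channels, hidden, kernel_size=3, padding=1)
        self.update = nn.Conv2d(hidden, channels, kernel_size=1)
        self.fire_rate = fire_rate

    def forward(self, state):
        delta = self.update(F.relu(self.perceive(state)))
        # Stochastic firing: only some cells change per step (asynchronous update).
        mask = (torch.rand_like(state[:, :1]) < self.fire_rate).float()
        return state + delta * mask

# Two-scale usage loosely mirroring the two-step process: iterate on a
# downscaled state to spread global context, then refine at full resolution.
step = NCAStep()
state = torch.rand(1, 16, 64, 64)                    # image embedded in the state channels
coarse = F.interpolate(state, scale_factor=0.25)
for _ in range(10):
    coarse = step(coarse)
state = state + F.interpolate(coarse, size=state.shape[-2:])
for _ in range(10):
    state = step(state)
```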
{"title":"Med-NCA: Robust and Lightweight Segmentation with Neural Cellular Automata","authors":"John Kalkhof, Camila Gonz'alez, A. Mukhopadhyay","doi":"10.48550/arXiv.2302.03473","DOIUrl":"https://doi.org/10.48550/arXiv.2302.03473","url":null,"abstract":"Access to the proper infrastructure is critical when performing medical image segmentation with Deep Learning. This requirement makes it difficult to run state-of-the-art segmentation models in resource-constrained scenarios like primary care facilities in rural areas and during crises. The recently emerging field of Neural Cellular Automata (NCA) has shown that locally interacting one-cell models can achieve competitive results in tasks such as image generation or segmentations in low-resolution inputs. However, they are constrained by high VRAM requirements and the difficulty of reaching convergence for high-resolution images. To counteract these limitations we propose Med-NCA, an end-to-end NCA training pipeline for high-resolution image segmentation. Our method follows a two-step process. Global knowledge is first communicated between cells across the downscaled image. Following that, patch-based segmentation is performed. Our proposed Med-NCA outperforms the classic UNet by 2% and 3% Dice for hippocampus and prostate segmentation, respectively, while also being 500 times smaller. We also show that Med-NCA is by design invariant with respect to image scale, shape and translation, experiencing only slight performance degradation even with strong shifts; and is robust against MRI acquisition artefacts. Med-NCA enables high-resolution medical image segmentation even on a Raspberry Pi B+, arguably the smallest device able to run PyTorch and that can be powered by a standard power bank.","PeriodicalId":73379,"journal":{"name":"Information processing in medical imaging : proceedings of the ... conference","volume":"4 1","pages":"705-716"},"PeriodicalIF":0.0,"publicationDate":"2023-02-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86978022","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
OTRE: Where Optimal Transport Guided Unpaired Image-to-Image Translation Meets Regularization by Enhancing
Pub Date : 2023-02-06 DOI: 10.48550/arXiv.2302.03003
Wenjie Zhu, Peijie Qiu, O. Dumitrascu, Jacob Jacob, Mohammad Farazi, Zhangsihao Yang, Keshav Nandakumar, Yalin Wang
Non-mydriatic retinal color fundus photography (CFP) is widely available because it does not require pupillary dilation; however, it is prone to poor quality due to operator, systemic, or patient-related causes. Optimal retinal image quality is mandated for accurate medical diagnoses and automated analyses. Herein, we leveraged Optimal Transport (OT) theory to propose an unpaired image-to-image translation scheme for mapping low-quality retinal CFPs to high-quality counterparts. Furthermore, to improve the flexibility, robustness, and applicability of our image enhancement pipeline in clinical practice, we generalized a state-of-the-art model-based image reconstruction method, regularization by denoising, by plugging in priors learned by our OT-guided image-to-image translation network. We named this regularization by enhancing (RE). We validated the integrated framework, OTRE, on three publicly available retinal image datasets by assessing the quality after enhancement and its performance on various downstream tasks, including diabetic retinopathy grading, vessel segmentation, and diabetic lesion segmentation. The experimental results demonstrated the superiority of our proposed framework over several state-of-the-art unsupervised competitors and a state-of-the-art supervised method.
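The "regularization by enhancing" idea generalizes RED (regularization by denoising) by letting the learned enhancer play the role of the denoiser. Below is a minimal sketch of a generic gradient-descent form of that scheme; the function name `re_reconstruct`, the linear forward/adjoint operator interface, the step size, and the toy usage are assumptions for illustration, not the OTRE implementation.

```python
import torch

def re_reconstruct(y, enhancer, forward_op, adjoint_op, lam=0.1, step=0.5, iters=50):
    """Gradient-descent form of a RED-style scheme in which a learned enhancer
    replaces the denoiser ("regularization by enhancing"). The operator
    interface, step size, and iteration count are illustrative assumptions."""
    x = adjoint_op(y).clone()
    for _ in range(iters):
        data_grad = adjoint_op(forward_op(x) - y)      # gradient of 0.5 * ||A x - y||^2
        prior_grad = lam * (x - enhancer(x))           # penalizes deviation from the enhancer's fixed points
        x = x - step * (data_grad + prior_grad)
    return x

# Toy usage with an identity forward model and a no-op "enhancer".
y = torch.rand(1, 3, 64, 64)
x_hat = re_reconstruct(y, enhancer=lambda x: x, forward_op=lambda x: x, adjoint_op=lambda y: y)
```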
{"title":"OTRE: Where Optimal Transport Guided Unpaired Image-to-Image Translation Meets Regularization by Enhancing","authors":"Wenjie Zhu, Peijie Qiu, O. Dumitrascu, Jacob Jacob, Mohammad Farazi, Zhangsihao Yang, Keshav Nandakumar, Yalin Wang","doi":"10.48550/arXiv.2302.03003","DOIUrl":"https://doi.org/10.48550/arXiv.2302.03003","url":null,"abstract":"Non-mydriatic retinal color fundus photography (CFP) is widely available due to the advantage of not requiring pupillary dilation, however, is prone to poor quality due to operators, systemic imperfections, or patient-related causes. Optimal retinal image quality is mandated for accurate medical diagnoses and automated analyses. Herein, we leveraged the Optimal Transport (OT) theory to propose an unpaired image-to-image translation scheme for mapping low-quality retinal CFPs to high-quality counterparts. Furthermore, to improve the flexibility, robustness, and applicability of our image enhancement pipeline in the clinical practice, we generalized a state-of-the-art model-based image reconstruction method, regularization by denoising, by plugging in priors learned by our OT-guided image-to-image translation network. We named it as regularization by enhancing (RE). We validated the integrated framework, OTRE, on three publicly available retinal image datasets by assessing the quality after enhancement and their performance on various downstream tasks, including diabetic retinopathy grading, vessel segmentation, and diabetic lesion segmentation. The experimental results demonstrated the superiority of our proposed framework over some state-of-the-art unsupervised competitors and a state-of-the-art supervised method.","PeriodicalId":73379,"journal":{"name":"Information processing in medical imaging : proceedings of the ... conference","volume":"13939 1","pages":"415-427"},"PeriodicalIF":0.0,"publicationDate":"2023-02-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45135680","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
An Unsupervised Framework for Joint MRI Super Resolution and Gibbs Artifact Removal
Pub Date : 2023-02-06 DOI: 10.48550/arXiv.2302.02849
Yikang Liu, Eric Z. Chen, Xiao Chen, Terrence Chen, Shanhui Sun
The k-space data generated in magnetic resonance imaging (MRI) is only a finite sampling of the underlying signal. Therefore, MRI images often suffer from low spatial resolution and Gibbs ringing artifacts. Previous studies tackled these two problems separately: super-resolution methods tend to enhance Gibbs artifacts, whereas Gibbs ringing removal methods tend to blur the images. A further challenge is that high-resolution ground truth is hard to obtain in clinical MRI. In this paper, we propose an unsupervised learning framework for both MRI super resolution and Gibbs artifact removal that does not require high-resolution ground truth. Furthermore, we propose regularization methods to improve the model's generalizability to out-of-distribution MRI images. We evaluated our proposed methods against other state-of-the-art methods on eight MRI datasets with various contrasts and anatomical structures. Our method not only achieves the best super-resolution performance but also significantly reduces Gibbs artifacts. It also demonstrates good generalizability across different datasets, which is beneficial for clinical applications where training data are usually scarce and biased.
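To make the finite-sampling problem concrete, the small NumPy simulation below truncates k-space and reconstructs a sharp-edged phantom, producing both the resolution loss and the Gibbs ringing described above; the phantom, the hard rectangular truncation window, and the keep fraction are illustrative choices, not part of the paper's pipeline.

```python
import numpy as np

def truncate_kspace(image, keep_fraction=0.5):
    """Keep only the central fraction of k-space: the reconstruction loses
    resolution and picks up Gibbs ringing at sharp edges."""
    k = np.fft.fftshift(np.fft.fft2(image))
    ny, nx = k.shape
    mask = np.zeros_like(k)
    cy, cx = ny // 2, nx // 2
    hy, hx = int(ny * keep_fraction) // 2, int(nx * keep_fraction) // 2
    mask[cy - hy:cy + hy, cx - hx:cx + hx] = 1.0
    return np.abs(np.fft.ifft2(np.fft.ifftshift(k * mask)))

# Sharp-edged phantom: ringing appears as oscillations next to the square's edges.
phantom = np.zeros((128, 128))
phantom[32:96, 32:96] = 1.0
low_res = truncate_kspace(phantom, keep_fraction=0.25)
```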
{"title":"An Unsupervised Framework for Joint MRI Super Resolution and Gibbs Artifact Removal","authors":"Yikang Liu, Eric Z. Chen, Xiao Chen, Terrence Chen, Shanhui Sun","doi":"10.48550/arXiv.2302.02849","DOIUrl":"https://doi.org/10.48550/arXiv.2302.02849","url":null,"abstract":"The k-space data generated from magnetic resonance imaging (MRI) is only a finite sampling of underlying signals. Therefore, MRI images often suffer from low spatial resolution and Gibbs ringing artifacts. Previous studies tackled these two problems separately, where super resolution methods tend to enhance Gibbs artifacts, whereas Gibbs ringing removal methods tend to blur the images. It is also a challenge that high resolution ground truth is hard to obtain in clinical MRI. In this paper, we propose an unsupervised learning framework for both MRI super resolution and Gibbs artifacts removal without using high resolution ground truth. Furthermore, we propose regularization methods to improve the model's generalizability across out-of-distribution MRI images. We evaluated our proposed methods with other state-of-the-art methods on eight MRI datasets with various contrasts and anatomical structures. Our method not only achieves the best SR performance but also significantly reduces the Gibbs artifacts. Our method also demonstrates good generalizability across different datasets, which is beneficial to clinical applications where training data are usually scarce and biased.","PeriodicalId":73379,"journal":{"name":"Information processing in medical imaging : proceedings of the ... conference","volume":"1 1","pages":"403-414"},"PeriodicalIF":0.0,"publicationDate":"2023-02-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89236240","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Harmonizing Flows: Unsupervised MR harmonization based on normalizing flows
Pub Date : 2023-01-27 DOI: 10.48550/arXiv.2301.11551
Farzad Beizaee, Christian Desrosiers, G. Lodygensky, J. Dolz
In this paper, we propose an unsupervised framework based on normalizing flows that harmonizes MR images to mimic the distribution of the source domain. The proposed framework consists of three steps. First, a shallow harmonizer network is trained to recover images of the source domain from their augmented versions. A normalizing flow network is then trained to learn the distribution of the source domain. Finally, at test time, the harmonizer network is modified so that the output images match the source domain's distribution learned by the normalizing flow model. Our unsupervised, source-free, and task-independent approach is evaluated on cross-domain brain MRI segmentation using data from four different sites. The results demonstrate its superior performance compared to existing methods.
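A minimal sketch of the test-time step described above, assuming the flow exposes a log_prob method: the source-domain flow is frozen and the harmonizer is updated so that its outputs become likely under that flow. The toy affine "flow", the Adam loop, the learning rate, and the linear harmonizer exist only so the example runs end to end; the paper's trained flow and harmonizer network would take their place.

```python
import torch
import torch.nn as nn

class ToyFlow(nn.Module):
    """Stand-in for a trained normalizing flow: an elementwise affine map to a
    standard normal with an exact log-determinant. A real model (e.g. coupling
    layers) would replace this; only the log_prob interface matters here."""

    def __init__(self, dim):
        super().__init__()
        self.shift = nn.Parameter(torch.zeros(dim))
        self.log_scale = nn.Parameter(torch.zeros(dim))

    def log_prob(self, x):
        z = (x - self.shift) * torch.exp(-self.log_scale)
        base = torch.distributions.Normal(0.0, 1.0).log_prob(z).sum(dim=-1)
        return base - self.log_scale.sum()

def adapt_harmonizer(harmonizer, flow, batches, lr=1e-4, steps=50):
    """Test-time adaptation: freeze the source-domain flow and update the
    harmonizer so its outputs become likely under that flow."""
    for p in flow.parameters():
        p.requires_grad_(False)
    opt = torch.optim.Adam(harmonizer.parameters(), lr=lr)
    for _ in range(steps):
        for x in batches:
            loss = -flow.log_prob(harmonizer(x)).mean()   # negative log-likelihood under the source flow
            opt.zero_grad()
            loss.backward()
            opt.step()
    return harmonizer

# Toy usage on flattened 8x8 "images" with a linear harmonizer.
flow, harmonizer = ToyFlow(dim=64), nn.Linear(64, 64)
batches = [torch.rand(8, 64) for _ in range(4)]
adapt_harmonizer(harmonizer, flow, batches)
```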
{"title":"Harmonizing Flows: Unsupervised MR harmonization based on normalizing flows","authors":"Farzad Beizaee, Christian Desrosiers, G. Lodygensky, J. Dolz","doi":"10.48550/arXiv.2301.11551","DOIUrl":"https://doi.org/10.48550/arXiv.2301.11551","url":null,"abstract":"In this paper, we propose an unsupervised framework based on normalizing flows that harmonizes MR images to mimic the distribution of the source domain. The proposed framework consists of three steps. First, a shallow harmonizer network is trained to recover images of the source domain from their augmented versions. A normalizing flow network is then trained to learn the distribution of the source domain. Finally, at test time, a harmonizer network is modified so that the output images match the source domain's distribution learned by the normalizing flow model. Our unsupervised, source-free and task-independent approach is evaluated on cross-domain brain MRI segmentation using data from four different sites. Results demonstrate its superior performance compared to existing methods.","PeriodicalId":73379,"journal":{"name":"Information processing in medical imaging : proceedings of the ... conference","volume":"33 1","pages":"347-359"},"PeriodicalIF":0.0,"publicationDate":"2023-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89433899","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
On Fairness of Medical Image Classification with Multiple Sensitive Attributes via Learning Orthogonal Representations
Pub Date : 2023-01-04 DOI: 10.48550/arXiv.2301.01481
Wenlong Deng, Yuan Zhong, Qianming Dou, Xiaoxiao Li
Mitigating the discrimination of machine learning models has gained increasing attention in medical image analysis. However, few works focus on fair treatment of patients with multiple sensitive demographic attributes, which is a crucial yet challenging problem for real-world clinical applications. In this paper, we propose a novel method for fair representation learning with respect to multiple sensitive attributes. We pursue independence between the target and multi-sensitive representations by achieving orthogonality in the representation space. Concretely, we enforce column-space orthogonality by keeping target information in the complement of a low-rank sensitive space. Furthermore, in the row space, we encourage the feature dimensions of the target and sensitive representations to be orthogonal. The effectiveness of the proposed method is demonstrated with extensive experiments on the CheXpert dataset. To the best of our knowledge, this is the first work to mitigate unfairness with respect to multiple sensitive attributes in the field of medical imaging.
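The two orthogonality ideas in the abstract, keeping target features in the complement of a low-rank sensitive subspace (column space) and decorrelating target and sensitive feature dimensions (row space), can be sketched with a projection step and a cross-covariance penalty as below; both functions, the rank of the toy sensitive subspace, and the random features are generic surrogates, not the paper's exact objectives.

```python
import torch

def cross_covariance_penalty(z_target, z_sensitive):
    """Row-space surrogate: penalize correlation between target and sensitive
    feature dimensions via the squared Frobenius norm of their cross-covariance."""
    zt = z_target - z_target.mean(dim=0, keepdim=True)
    zs = z_sensitive - z_sensitive.mean(dim=0, keepdim=True)
    cross = zt.t() @ zs / zt.shape[0]
    return (cross ** 2).sum()

def project_out(z_target, sensitive_basis):
    """Column-space surrogate: remove the component of the target features that
    lies in a low-rank sensitive subspace spanned by an orthonormal basis U."""
    return z_target - (z_target @ sensitive_basis) @ sensitive_basis.t()

# Toy usage with random features and a rank-2 sensitive subspace.
z_target, z_sensitive = torch.randn(32, 16), torch.randn(32, 4)
U, _ = torch.linalg.qr(torch.randn(16, 2))
loss = cross_covariance_penalty(project_out(z_target, U), z_sensitive)
```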
{"title":"On Fairness of Medical Image Classification with Multiple Sensitive Attributes via Learning Orthogonal Representations","authors":"Wenlong Deng, Yuan Zhong, Qianming Dou, Xiaoxiao Li","doi":"10.48550/arXiv.2301.01481","DOIUrl":"https://doi.org/10.48550/arXiv.2301.01481","url":null,"abstract":"Mitigating the discrimination of machine learning models has gained increasing attention in medical image analysis. However, rare works focus on fair treatments for patients with multiple sensitive demographic ones, which is a crucial yet challenging problem for real-world clinical applications. In this paper, we propose a novel method for fair representation learning with respect to multi-sensitive attributes. We pursue the independence between target and multi-sensitive representations by achieving orthogonality in the representation space. Concretely, we enforce the column space orthogonality by keeping target information on the complement of a low-rank sensitive space. Furthermore, in the row space, we encourage feature dimensions between target and sensitive representations to be orthogonal. The effectiveness of the proposed method is demonstrated with extensive experiments on the CheXpert dataset. To our best knowledge, this is the first work to mitigate unfairness with respect to multiple sensitive attributes in the field of medical imaging.","PeriodicalId":73379,"journal":{"name":"Information processing in medical imaging : proceedings of the ... conference","volume":"2014 1","pages":"158-169"},"PeriodicalIF":0.0,"publicationDate":"2023-01-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86517512","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
MetaViT: Metabolism-Aware Vision Transformer for Differential Diagnosis of Parkinsonism with 18F-FDG PET
Pub Date : 2023-01-01 DOI: 10.1007/978-3-031-34048-2_11
Lin Zhao, Hexin Dong, P. Wu, Jiaying Lu, Le Lu, Jingren Zhou, Tianming Liu, Li Zhang, Ling Zhang, Yuxing Tang, C. Zuo
{"title":"MetaViT: Metabolism-Aware Vision Transformer for Differential Diagnosis of Parkinsonism with 18F-FDG PET","authors":"Lin Zhao, Hexin Dong, P. Wu, Jiaying Lu, Le Lu, Jingren Zhou, Tianming Liu, Li Zhang, Ling Zhang, Yuxing Tang, C. Zuo","doi":"10.1007/978-3-031-34048-2_11","DOIUrl":"https://doi.org/10.1007/978-3-031-34048-2_11","url":null,"abstract":"","PeriodicalId":73379,"journal":{"name":"Information processing in medical imaging : proceedings of the ... conference","volume":"70 1","pages":"132-144"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83827522","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Multi-task Multi-instance Learning for Jointly Diagnosis and Prognosis of Early-Stage Breast Invasive Carcinoma from Whole-Slide Pathological Images
Pub Date : 2023-01-01 DOI: 10.1007/978-3-031-34048-2_12
Jianxin Liu, Rongjun Ge, Peng Wan, Qi Zhu, Daoqiang Zhang, Wei Shao
{"title":"Multi-task Multi-instance Learning for Jointly Diagnosis and Prognosis of Early-Stage Breast Invasive Carcinoma from Whole-Slide Pathological Images","authors":"Jianxin Liu, Rongjun Ge, Peng Wan, Qi Zhu, Daoqiang Zhang, Wei Shao","doi":"10.1007/978-3-031-34048-2_12","DOIUrl":"https://doi.org/10.1007/978-3-031-34048-2_12","url":null,"abstract":"","PeriodicalId":73379,"journal":{"name":"Information processing in medical imaging : proceedings of the ... conference","volume":"328 1","pages":"145-157"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86780283","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Source-Free Domain Adaptation for Medical Image Segmentation via Selectively Updated Mean Teacher
Pub Date : 2023-01-01 DOI: 10.1007/978-3-031-34048-2_18
Ziqi Wen, Xinru Zhang, Chuyang Ye
{"title":"Source-Free Domain Adaptation for Medical Image Segmentation via Selectively Updated Mean Teacher","authors":"Ziqi Wen, Xinru Zhang, Chuyang Ye","doi":"10.1007/978-3-031-34048-2_18","DOIUrl":"https://doi.org/10.1007/978-3-031-34048-2_18","url":null,"abstract":"","PeriodicalId":73379,"journal":{"name":"Information processing in medical imaging : proceedings of the ... conference","volume":"1 1","pages":"225-236"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89929870","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Super-Resolution Reconstruction of Fetal Brain MRI with Prior Anatomical Knowledge
Pub Date : 2023-01-01 DOI: 10.1007/978-3-031-34048-2_33
Shijie Huang, Geng Chen, Kaicong Sun, Zhiming Cui, Xukun Zhang, P. Xue, Xuan Zhang, He-Xiao Zhang, Dinggang Shen
{"title":"Super-Resolution Reconstruction of Fetal Brain MRI with Prior Anatomical Knowledge","authors":"Shijie Huang, Geng Chen, Kaicong Sun, Zhiming Cui, Xukun Zhang, P. Xue, Xuan Zhang, He-Xiao Zhang, Dinggang Shen","doi":"10.1007/978-3-031-34048-2_33","DOIUrl":"https://doi.org/10.1007/978-3-031-34048-2_33","url":null,"abstract":"","PeriodicalId":73379,"journal":{"name":"Information processing in medical imaging : proceedings of the ... conference","volume":"64 1","pages":"428-441"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80344664","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0