
Information processing in medical imaging : proceedings of the ... conference (latest publications)

OTRE: Where Optimal Transport Guided Unpaired Image-to-Image Translation Meets Regularization by Enhancing.
Pub Date : 2023-06-01 DOI: 10.1007/978-3-031-34048-2_32
Wenhui Zhu, Peijie Qiu, Oana M Dumitrascu, Jacob M Sobczak, Mohammad Farazi, Zhangsihao Yang, Keshav Nandakumar, Yalin Wang

Non-mydriatic retinal color fundus photography (CFP) is widely available because it does not require pupillary dilation; however, it is prone to poor quality due to operator error, systemic imperfections, or patient-related causes. Optimal retinal image quality is required for accurate medical diagnoses and automated analyses. Herein, we leveraged Optimal Transport (OT) theory to propose an unpaired image-to-image translation scheme for mapping low-quality retinal CFPs to high-quality counterparts. Furthermore, to improve the flexibility, robustness, and applicability of our image enhancement pipeline in clinical practice, we generalized a state-of-the-art model-based image reconstruction method, regularization by denoising, by plugging in priors learned by our OT-guided image-to-image translation network. We name this regularization by enhancing (RE). We validated the integrated framework, OTRE, on three publicly available retinal image datasets by assessing the quality after enhancement and performance on various downstream tasks, including diabetic retinopathy grading, vessel segmentation, and diabetic lesion segmentation. The experimental results demonstrated the superiority of our proposed framework over several state-of-the-art unsupervised competitors and a state-of-the-art supervised method.
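The regularization-by-enhancing idea can be sketched as a plug-and-play iteration in the spirit of regularization by denoising, where the learned enhancer supplies the prior term. A minimal numerical sketch, with a toy stand-in for the enhancer network (`toy_enhancer`, the step size, and the quadratic data-fidelity term are all illustrative assumptions, not the paper's exact formulation):

```python
import numpy as np

def re_update(x, y, enhance, step=0.1, lam=0.5, iters=50):
    """Plug-and-play iteration in the spirit of regularization by denoising:
    a data-fidelity gradient plus a prior term (x - E(x)) from an enhancer E."""
    for _ in range(iters):
        grad_fidelity = x - y            # gradient of 0.5 * ||x - y||^2
        grad_prior = x - enhance(x)      # RED-style prior gradient
        x = x - step * (grad_fidelity + lam * grad_prior)
    return x

def toy_enhancer(x):
    """Toy enhancer: shrink values toward their mean (stands in for the network)."""
    return 0.5 * (x + x.mean())

y = np.array([0.0, 1.0, 2.0, 3.0])       # toy "low-quality" signal
x_hat = re_update(y.copy(), y, toy_enhancer)
```

The prior term pulls the estimate toward the enhancer's output while the fidelity term keeps it close to the observation; in this toy setting the result is shrunk toward the signal mean.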

Citations: 0
Token Sparsification for Faster Medical Image Segmentation.
Pub Date : 2023-06-01 Epub Date: 2023-06-08 DOI: 10.1007/978-3-031-34048-2_57
Lei Zhou, Huidong Liu, Joseph Bae, Junjun He, Dimitris Samaras, Prateek Prasanna

Can we use sparse tokens for dense prediction, e.g., segmentation? Although token sparsification has been applied to Vision Transformers (ViT) to accelerate classification, it is still unknown how to perform segmentation from sparse tokens. To this end, we reformulate segmentation as a sparse encoding → token completion → dense decoding (SCD) pipeline. We first empirically show that naïvely applying existing approaches from classification token pruning and masked image modeling (MIM) leads to failure and inefficient training, caused by inappropriate sampling algorithms and the low quality of the restored dense features. In this paper, we propose Soft-topK Token Pruning (STP) and Multi-layer Token Assembly (MTA) to address these problems. In sparse encoding, STP predicts token importance scores with a lightweight sub-network and samples the topK tokens. The intractable topK gradients are approximated through a continuous perturbed score distribution. In token completion, MTA restores a full token sequence by assembling both sparse output tokens and pruned multi-layer intermediate ones. The final dense decoding stage is compatible with existing segmentation decoders, e.g., UNETR. Experiments show that SCD pipelines equipped with STP and MTA are much faster than baselines without token pruning in both training (up to 120% higher throughput) and inference (up to 60.6% higher throughput) while maintaining segmentation quality. Code is available here: https://github.com/cvlab-stonybrook/TokenSparse-for-MedSeg.
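The STP sampling step can be sketched as scoring tokens and keeping the top-K, with score perturbation standing in for the continuous relaxation used to approximate the top-K gradients. A minimal sketch (the score values, `sigma`, and shapes are illustrative assumptions; the paper's learned scoring sub-network and perturbed-optimizer details differ):

```python
import numpy as np

rng = np.random.default_rng(0)

def topk_prune(tokens, scores, k, sigma=0.0):
    """Keep the k highest-scoring tokens. With sigma > 0 (training), the noisy
    scores induce a distribution over top-k sets, which is the basis of the
    perturbed-optimizer trick for approximating top-k gradients."""
    noisy = scores + sigma * rng.standard_normal(scores.shape)
    keep = np.sort(np.argsort(-noisy)[:k])   # indices of the kept tokens, in order
    return tokens[keep], keep

tokens = np.arange(6 * 4, dtype=float).reshape(6, 4)   # 6 tokens, dim 4
scores = np.array([0.1, 0.9, 0.3, 0.8, 0.2, 0.7])
kept, idx = topk_prune(tokens, scores, k=3)            # deterministic at inference
```

At inference (`sigma=0`) this reduces to plain deterministic top-k selection; token completion would later re-insert placeholders at the pruned positions.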

Citations: 0
Heterogeneous Graph Convolutional Neural Network via Hodge-Laplacian for Brain Functional Data.
Pub Date : 2023-06-01 Epub Date: 2023-06-08 DOI: 10.1007/978-3-031-34048-2_22
Jinghan Huang, Moo K Chung, Anqi Qiu

This study proposes a novel heterogeneous graph convolutional neural network (HGCNN) to handle complex brain fMRI data at regional and across-region levels. We introduce a generic formulation of spectral filters on heterogeneous graphs via the k-th Hodge-Laplacian (HL) operator. In particular, we propose Laguerre polynomial approximations of HL spectral filters and prove that their spatial localization on graphs is related to the polynomial order. Furthermore, based on the bijection property of boundary operators on simplex graphs, we introduce a generic topological graph pooling (TGPool) method that can be used on simplices of any dimension. This study designs HL-node, HL-edge, and HL-HGCNN neural networks to learn signal representations at the graph node level, the edge level, and both, respectively. Our experiments employ fMRI from the Adolescent Brain Cognitive Development study (ABCD; n=7693) to predict general intelligence. Our results demonstrate the advantage of the HL-edge network over the HL-node network when functional brain connectivity is considered as features. The HL-HGCNN outperforms state-of-the-art graph neural network (GNN) approaches such as GAT, BrainGNN, dGCN, BrainNetCNN, and Hypergraph NN. The functional connectivity features learned from the HL-HGCNN are meaningful for interpreting neural circuits related to general intelligence.
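A polynomial spectral filter g(L)x ≈ Σ_k c_k P_k(L)x can be evaluated with the classical Laguerre three-term recurrence, so only repeated sparse multiplications by the Laplacian are needed and the filter's spatial support grows with the polynomial order. A minimal sketch (the toy path-graph Laplacian and coefficients are illustrative assumptions; the paper applies this construction to Hodge-Laplacians of arbitrary order):

```python
import numpy as np

def laguerre_filter(L, x, coeffs):
    """Evaluate a Laguerre-polynomial spectral filter sum_k c_k P_k(L) x using
    the recurrence P_0 = 1, P_1(t) = 1 - t,
    (k+1) P_{k+1}(t) = (2k+1 - t) P_k(t) - k P_{k-1}(t)."""
    p_prev = x                          # P_0(L) x
    p_curr = x - L @ x                  # P_1(L) x
    out = coeffs[0] * p_prev + (coeffs[1] * p_curr if len(coeffs) > 1 else 0)
    for k in range(1, len(coeffs) - 1):
        p_next = ((2 * k + 1) * p_curr - L @ p_curr - k * p_prev) / (k + 1)
        out = out + coeffs[k + 1] * p_next
        p_prev, p_curr = p_curr, p_next
    return out

# Toy graph Laplacian of a 3-node path graph (rows sum to zero).
L = np.array([[1., -1., 0.], [-1., 2., -1.], [0., -1., 1.]])
y = laguerre_filter(L, np.array([1.0, 0.0, 0.0]), coeffs=[0.5, 0.3, 0.2])
```

A quick sanity check: a constant signal lies in the Laplacian's null space, so every P_k(L) leaves it unchanged and the filter simply scales it by the coefficient sum.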

Citations: 0
Mixup-Privacy: A simple yet effective approach for privacy-preserving segmentation
Pub Date : 2023-05-23 DOI: 10.48550/arXiv.2305.13756
B. Kim, J. Dolz, Pierre-Marc Jodoin, Christian Desrosiers
Privacy protection in medical data is a legitimate obstacle for centralized machine learning applications. Here, we propose a client-server image segmentation system that allows for the analysis of multi-centric medical images while preserving patient privacy. In this approach, the client protects the to-be-segmented patient image by mixing it with a reference image. As shown in our work, it is challenging to recover the exact original content from the image mixture, making the data unworkable and unrecognizable for an unauthorized person. This proxy image is sent to a server for processing. The server then returns the mixture of segmentation maps, which the client can revert to the correct target segmentation. Our system has two components: 1) a segmentation network on the server side, which processes the image mixture, and 2) a segmentation unmixing network, which recovers the correct segmentation map from the segmentation mixture. Furthermore, the whole system is trained end-to-end. The proposed method is validated on the task of MRI brain segmentation using images from two different datasets. Results show that the segmentation accuracy of our method is comparable to that of a system trained on raw images, and it outperforms other privacy-preserving methods with little computational overhead.
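The client-side mixing and the final revert step can be sketched as a convex combination. Here a linear unmixing stands in for the paper's learned segmentation unmixing network, and the "server" is a toy segmenter chosen so that the linear revert is exact (all of this is illustrative, not the paper's trained pipeline):

```python
import numpy as np

def mix(image, reference, alpha=0.6):
    """Client-side protection: send a convex mix of patient and reference images."""
    return alpha * image + (1 - alpha) * reference

def revert(mixed_seg, reference_seg, alpha=0.6):
    """Illustrative linear unmixing of the returned segmentation mixture.
    (In the paper this step is a learned unmixing network.)"""
    return (mixed_seg - (1 - alpha) * reference_seg) / alpha

image = np.array([[0.2, 0.8], [0.6, 0.4]])
reference = np.array([[0.5, 0.5], [0.5, 0.5]])
proxy = mix(image, reference)                       # what the server sees
# Pretend the server's segmenter acts linearly on the mixture for this toy case:
server_out = mix(image > 0.5, reference > 0.4, alpha=0.6).astype(float)
recovered = revert(server_out, (reference > 0.4).astype(float))
```

A real segmentation network is nonlinear, which is exactly why the paper trains an unmixing network end-to-end instead of relying on this closed-form revert.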
Citations: 0
Rethinking Boundary Detection in Deep Learning Models for Medical Image Segmentation
Pub Date : 2023-05-01 DOI: 10.48550/arXiv.2305.00678
Yi-Mou Lin, Dong-Ming Zhang, Xiaori Fang, Yufan Chen, Kwang-Ting Cheng, Hao Chen
Medical image segmentation is a fundamental task in medical image analysis. In this paper, a novel network architecture, referred to as Convolution, Transformer, and Operator (CTO), is proposed. CTO employs a combination of Convolutional Neural Networks (CNNs), a Vision Transformer (ViT), and an explicit boundary detection operator to achieve high recognition accuracy while maintaining an optimal balance between accuracy and efficiency. The proposed CTO follows the standard encoder-decoder segmentation paradigm, where the encoder network incorporates a popular CNN backbone for capturing local semantic information and a lightweight ViT assistant for integrating long-range dependencies. To enhance the learning capacity on boundaries, a boundary-guided decoder network is proposed that uses a boundary mask obtained from a dedicated boundary detection operator as explicit supervision to guide the decoding learning process. The performance of the proposed method is evaluated on six challenging medical image segmentation datasets, demonstrating that CTO achieves state-of-the-art accuracy with competitive model complexity.
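The explicit boundary supervision can be sketched by deriving a boundary mask from a ground-truth segmentation mask with a simple finite-difference operator (this particular operator is an illustrative assumption; the paper uses its own dedicated boundary detection operator inside the network):

```python
import numpy as np

def boundary_mask(mask):
    """Derive a boundary map from a binary segmentation mask via first-order
    finite differences along both axes; nonzero gradient marks a boundary pixel."""
    m = mask.astype(float)
    gy = np.abs(np.diff(m, axis=0, prepend=m[:1]))      # vertical transitions
    gx = np.abs(np.diff(m, axis=1, prepend=m[:, :1]))   # horizontal transitions
    return ((gx + gy) > 0).astype(np.uint8)

mask = np.zeros((5, 5), dtype=np.uint8)
mask[1:4, 1:4] = 1                       # a 3x3 square object
edges = boundary_mask(mask)              # 1 only where the mask changes value
```

The resulting boundary map can then serve as an auxiliary target for a boundary-guided decoder, alongside the usual region segmentation loss.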
Citations: 4
Live image-based neurosurgical guidance and roadmap generation using unsupervised embedding
Pub Date : 2023-03-31 DOI: 10.48550/arXiv.2303.18019
Gary Sarwin, A. Carretta, V. Staartjes, M. Zoli, D. Mazzatenta, L. Regli, C. Serra, E. Konukoglu
Advanced minimally invasive neurosurgery navigation relies mainly on Magnetic Resonance Imaging (MRI) guidance. MRI guidance, however, only provides pre-operative information in the majority of cases. Once the surgery begins, the value of this guidance diminishes to some extent because of the anatomical changes due to surgery. Guidance with live image feedback coming directly from the surgical device, e.g., an endoscope, can complement MRI-based navigation, or serve as an alternative if MRI guidance is not feasible. With this motivation, we present a method for live image-only guidance leveraging a large data set of annotated neurosurgical videos. First, we report the performance of a deep learning-based object detection method, YOLO, on detecting anatomical structures in neurosurgical images. Second, we present a method for generating neurosurgical roadmaps using unsupervised embedding, without assuming exact anatomical matches between patients, the presence of an extensive anatomical atlas, or the need for simultaneous localization and mapping. A generated roadmap encodes the common anatomical paths taken in surgeries in the training set. At inference, the roadmap can be used to map a surgeon's current location using live image feedback on the path, providing guidance by predicting which structures should appear going forward or backward, much like a mapping application. Even though the embedding is not supervised by position information, we show that it is correlated with location inside the brain and on the surgical path. We trained and evaluated the proposed method with a data set of 166 transsphenoidal adenomectomy procedures.
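Localization against a generated roadmap can be sketched as a nearest-neighbor query in embedding space; the ordered node list then suggests which structures lie forward or backward along the path. A toy sketch (the 2-D embeddings and the roadmap values are fabricated for illustration; the paper's embedding is learned without position supervision):

```python
import numpy as np

def localize(frame_embedding, roadmap):
    """Map a live frame to the nearest roadmap node in embedding space.
    Neighboring node indices then indicate what lies ahead or behind."""
    d = np.linalg.norm(roadmap - frame_embedding, axis=1)
    return int(np.argmin(d))

# Toy roadmap: embeddings of ordered waypoints along a surgical path (assumed).
roadmap = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.5], [3.0, 1.0]])
pos = localize(np.array([1.9, 0.4]), roadmap)   # index of the nearest node
```

With `pos` in hand, `roadmap[pos + 1]` would correspond to the next expected anatomy going forward, in the spirit of the mapping-application analogy.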
Citations: 0
Pixel-Level Explanation of Multiple Instance Learning Models in Biomedical Single Cell Images
Pub Date : 2023-03-15 DOI: 10.48550/arXiv.2303.08632
A. Sadafi, Oleksandra Adonkina, Ashkan Khakzar, P. Lienemann, Rudolf Matthias Hehr, D. Rueckert, N. Navab, C. Marr
Explainability is a key requirement for computer-aided diagnosis systems in clinical decision-making. Multiple instance learning with attention pooling provides instance-level explainability; however, for many clinical applications a deeper, pixel-level explanation is desirable, but has so far been missing. In this work, we investigate the use of four attribution methods to explain multiple instance learning models: GradCAM, Layer-Wise Relevance Propagation (LRP), Information Bottleneck Attribution (IBA), and InputIBA. With this collection of methods, we can derive pixel-level explanations for the task of diagnosing blood cancer from patients' blood smears. We study two datasets of acute myeloid leukemia with over 100,000 single-cell images and observe how each attribution method performs on the multiple instance learning architecture, focusing on different properties of white blood cells. Additionally, we compare attribution maps with the annotations of a medical expert to see how the model's decision-making differs from the human standard. Our study addresses the challenge of implementing pixel-level explainability in multiple instance learning models and provides insights for clinicians to better understand and trust decisions from computer-aided diagnosis systems.
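The instance-level explainability that the paper starts from comes from attention pooling: each single-cell image receives a softmax-normalized weight, and the bag embedding is the weighted sum of instance features. A minimal sketch of that pooling step (random toy features and parameters; the paper's four attribution methods then go deeper than these instance weights, down to pixels):

```python
import numpy as np

def attention_pool(instances, V, w):
    """Attention pooling for multiple instance learning:
    a_i ∝ exp(w · tanh(V h_i)); the bag embedding is sum_i a_i h_i."""
    logits = np.tanh(instances @ V.T) @ w
    a = np.exp(logits - logits.max())       # stable softmax over instances
    a = a / a.sum()
    return a @ instances, a

rng = np.random.default_rng(1)
instances = rng.standard_normal((5, 8))     # 5 single-cell feature vectors, dim 8
V = rng.standard_normal((4, 8))             # attention projection (toy values)
w = rng.standard_normal(4)
bag, attn = attention_pool(instances, V, w)
```

The weights `attn` already say which cells mattered for the bag-level prediction; the attribution methods studied in the paper explain *why* within each cell image.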
Citations: 2
HALOS: Hallucination-free Organ Segmentation after Organ Resection Surgery
Pub Date : 2023-03-14 DOI: 10.48550/arXiv.2303.07717
Anne-Marie Rickmann, Murong Xu, Thomas Wolf, Oksana P. Kovalenko, C. Wachinger
The wide range of research in deep learning-based medical image segmentation has pushed the boundaries in a multitude of applications. A clinically relevant problem that has received less attention is the handling of scans with irregular anatomy, e.g., after organ resection. State-of-the-art segmentation models often produce organ hallucinations, i.e., false-positive predictions of organs, which cannot be alleviated by oversampling or post-processing. Motivated by the increasing need to develop robust deep learning models, we propose HALOS for abdominal organ segmentation in MR images that handles cases after organ resection surgery. To this end, we combine missing-organ classification and multi-organ segmentation tasks into a multi-task model, yielding a classification-assisted segmentation pipeline. The segmentation network learns to incorporate knowledge about organ existence via feature fusion modules. Extensive experiments on a small labeled test set and large-scale UK Biobank data demonstrate the effectiveness of our approach in terms of higher segmentation Dice scores and a near-to-zero false-positive prediction rate.
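The classification-assisted idea can be caricatured as gating each organ's segmentation output by the predicted probability that the organ is present. A deliberately simplified sketch (HALOS fuses classifier features into the segmentation network rather than hard-gating outputs; the hard gate and all values here are illustrative only):

```python
import numpy as np

def gated_segmentation(seg_logits, organ_presence_prob, threshold=0.5):
    """Suppress an organ's segmentation map when the missing-organ classifier
    says the organ is absent. One presence probability per organ channel."""
    gate = (organ_presence_prob >= threshold).astype(float)
    return seg_logits * gate[:, None, None]       # broadcast gate over H x W

seg = np.ones((3, 2, 2))                  # 3 organ channels, toy 2x2 maps
presence = np.array([0.9, 0.2, 0.7])      # organ 1 was resected (prob below 0.5)
out = gated_segmentation(seg, presence)   # channel 1 is zeroed out
```

Even this crude gate illustrates why the classifier helps: a pure segmenter has no mechanism to output "this organ does not exist," which is exactly the hallucination failure mode described above.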
{"title":"HALOS: Hallucination-free Organ Segmentation after Organ Resection Surgery","authors":"Anne-Marie Rickmann, Murong Xu, Thomas Wolf, Oksana P. Kovalenko, C. Wachinger","doi":"10.48550/arXiv.2303.07717","DOIUrl":"https://doi.org/10.48550/arXiv.2303.07717","url":null,"abstract":"The wide range of research in deep learning-based medical image segmentation pushed the boundaries in a multitude of applications. A clinically relevant problem that received less attention is the handling of scans with irregular anatomy, e.g., after organ resection. State-of-the-art segmentation models often lead to organ hallucinations, i.e., false-positive predictions of organs, which cannot be alleviated by oversampling or post-processing. Motivated by the increasing need to develop robust deep learning models, we propose HALOS for abdominal organ segmentation in MR images that handles cases after organ resection surgery. To this end, we combine missing organ classification and multi-organ segmentation tasks into a multi-task model, yielding a classification-assisted segmentation pipeline. The segmentation network learns to incorporate knowledge about organ existence via feature fusion modules. Extensive experiments on a small labeled test set and large-scale UK Biobank data demonstrate the effectiveness of our approach in terms of higher segmentation Dice scores and near-to-zero false positive prediction rate.","PeriodicalId":73379,"journal":{"name":"Information processing in medical imaging : proceedings of the ... 
conference","volume":"12 1","pages":"667-678"},"PeriodicalIF":0.0,"publicationDate":"2023-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76323908","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
NeurEPDiff: Neural Operators to Predict Geodesics in Deformation Spaces
Pub Date : 2023-03-13 DOI: 10.48550/arXiv.2303.07115
Nian Wu, Miaomiao Zhang
This paper presents NeurEPDiff, a novel network to rapidly predict the geodesics in deformation spaces generated by the well-known Euler-Poincaré differential equation (EPDiff). To achieve this, we develop a neural operator that for the first time learns the evolving trajectory of geodesic deformations parameterized in the tangent space of diffeomorphisms (a.k.a. velocity fields). In contrast to previous methods that purely fit the training images, our proposed NeurEPDiff learns a nonlinear mapping function between the time-dependent velocity fields. A composition of integral operators and smooth activation functions is formulated in each layer of NeurEPDiff to effectively approximate such mappings. The fact that NeurEPDiff is able to rapidly provide the numerical solution of EPDiff (given any initial condition) results in a significantly reduced computational cost of geodesic shooting of diffeomorphisms in a high-dimensional image space. Additionally, the discretization/resolution-invariant properties of NeurEPDiff make its performance generalizable to multiple image resolutions after being trained offline. We demonstrate the effectiveness of NeurEPDiff in registering two image datasets: 2D synthetic data and 3D brain magnetic resonance imaging (MRI). The registration accuracy and computational efficiency are compared with state-of-the-art diffeomorphic registration algorithms with geodesic shooting.
{"title":"NeurEPDiff: Neural Operators to Predict Geodesics in Deformation Spaces","authors":"Nian Wu, Miaomiao Zhang","doi":"10.48550/arXiv.2303.07115","DOIUrl":"https://doi.org/10.48550/arXiv.2303.07115","url":null,"abstract":"This paper presents NeurEPDiff, a novel network to fast predict the geodesics in deformation spaces generated by a well known Euler-Poincar'e differential equation (EPDiff). To achieve this, we develop a neural operator that for the first time learns the evolving trajectory of geodesic deformations parameterized in the tangent space of diffeomorphisms(a.k.a velocity fields). In contrast to previous methods that purely fit the training images, our proposed NeurEPDiff learns a nonlinear mapping function between the time-dependent velocity fields. A composition of integral operators and smooth activation functions is formulated in each layer of NeurEPDiff to effectively approximate such mappings. The fact that NeurEPDiff is able to rapidly provide the numerical solution of EPDiff (given any initial condition) results in a significantly reduced computational cost of geodesic shooting of diffeomorphisms in a high-dimensional image space. Additionally, the properties of discretiztion/resolution-invariant of NeurEPDiff make its performance generalizable to multiple image resolutions after being trained offline. We demonstrate the effectiveness of NeurEPDiff in registering two image datasets: 2D synthetic data and 3D brain resonance imaging (MRI). The registration accuracy and computational efficiency are compared with the state-of-the-art diffeomophic registration algorithms with geodesic shooting.","PeriodicalId":73379,"journal":{"name":"Information processing in medical imaging : proceedings of the ... 
conference","volume":"24 1","pages":"588-600"},"PeriodicalIF":0.0,"publicationDate":"2023-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79122410","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
A Surface-normal Based Neural Framework for Colonoscopy Reconstruction
Pub Date : 2023-03-13 DOI: 10.48550/arXiv.2303.07264
Shuxian Wang, Yubo Zhang, Sarah K. McGill, J. Rosenman, Jan-Michael Frahm, Soumyadip Sengupta, S. Pizer
Reconstructing a 3D surface from colonoscopy video is challenging due to illumination and reflectivity variation in the video frame that can cause defective shape predictions. Aiming to overcome this challenge, we utilize the characteristics of surface normal vectors and develop a two-step neural framework that significantly improves the colonoscopy reconstruction quality. The normal-based depth initialization network trained with self-supervised normal consistency loss provides depth map initialization to the normal-depth refinement module, which utilizes the relationship between illumination and surface normals to refine the frame-wise normal and depth predictions recursively. Our framework's depth accuracy performance on phantom colonoscopy data demonstrates the value of exploiting the surface normals in colonoscopy reconstruction, especially on en face views. Due to its low depth error, the prediction result from our framework will require limited post-processing to be clinically applicable for real-time colonoscopy reconstruction.
{"title":"A Surface-normal Based Neural Framework for Colonoscopy Reconstruction","authors":"Shuxian Wang, Yubo Zhang, Sarah K. McGill, J. Rosenman, Jan-Michael Frahm, Soumyadip Sengupta, S. Pizer","doi":"10.48550/arXiv.2303.07264","DOIUrl":"https://doi.org/10.48550/arXiv.2303.07264","url":null,"abstract":"Reconstructing a 3D surface from colonoscopy video is challenging due to illumination and reflectivity variation in the video frame that can cause defective shape predictions. Aiming to overcome this challenge, we utilize the characteristics of surface normal vectors and develop a two-step neural framework that significantly improves the colonoscopy reconstruction quality. The normal-based depth initialization network trained with self-supervised normal consistency loss provides depth map initialization to the normal-depth refinement module, which utilizes the relationship between illumination and surface normals to refine the frame-wise normal and depth predictions recursively. Our framework's depth accuracy performance on phantom colonoscopy data demonstrates the value of exploiting the surface normals in colonoscopy reconstruction, especially on en face views. Due to its low depth error, the prediction result from our framework will require limited post-processing to be clinically applicable for real-time colonoscopy reconstruction.","PeriodicalId":73379,"journal":{"name":"Information processing in medical imaging : proceedings of the ... conference","volume":"30 1","pages":"797-809"},"PeriodicalIF":0.0,"publicationDate":"2023-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91340772","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1