
arXiv - EE - Image and Video Processing: Latest Publications

Compact Implicit Neural Representations for Plane Wave Images
Pub Date : 2024-09-17 DOI: arxiv-2409.11370
Mathilde Monvoisin, Yuxin Zhang, Diana Mateus
Ultrafast Plane-Wave (PW) imaging often produces artifacts and shadows that vary with insonification angles. We propose a novel approach using Implicit Neural Representations (INRs) to compactly encode multi-planar sequences while preserving crucial orientation-dependent information. To our knowledge, this is the first application of INRs for PW angular interpolation. Our method employs a Multi-Layer Perceptron (MLP)-based model with a concise physics-enhanced rendering technique. Quantitative evaluations using SSIM, PSNR, and standard ultrasound metrics, along with qualitative visual assessments, confirm the effectiveness of our approach. Additionally, our method demonstrates significant storage efficiency, with model weights requiring 530 KB compared to 8 MB for directly storing the 75 PW images, achieving a notable compression ratio of approximately 15:1.
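No implementation details beyond the MLP backbone are given in the listing, so the following is a minimal sketch of the general idea: a small MLP that encodes the whole multi-angle sequence in its weights and is queried per pixel and per angle. The layer sizes, input layout, and absence of a positional encoding are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class PlaneWaveINR(nn.Module):
    """Implicit representation mapping (x, z, angle) -> pixel intensity.

    Hidden width, depth, and the lack of positional encoding are
    assumptions for illustration, not the architecture from the paper.
    """
    def __init__(self, hidden: int = 128, depth: int = 4):
        super().__init__()
        layers, in_dim = [], 3  # lateral position, depth, insonification angle
        for _ in range(depth):
            layers += [nn.Linear(in_dim, hidden), nn.ReLU()]
            in_dim = hidden
        layers.append(nn.Linear(hidden, 1))
        self.net = nn.Sequential(*layers)

    def forward(self, coords: torch.Tensor) -> torch.Tensor:
        # coords: (N, 3) batch of normalized (x, z, angle) queries
        return self.net(coords)

model = PlaneWaveINR()
n_params = sum(p.numel() for p in model.parameters())
# Stored as float32, the weights occupy n_params * 4 bytes -- the same
# arithmetic behind the 530 KB vs. 8 MB (roughly 15:1) comparison above.
print(f"{n_params * 4 / 1024:.0f} KB of float32 weights")
```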
Citations: 0
Edge-based Denoising Image Compression
Pub Date : 2024-09-17 DOI: arxiv-2409.10978
Ryugo Morita, Hitoshi Nishimura, Ko Watanabe, Andreas Dengel, Jinjia Zhou
In recent years, deep learning-based image compression, particularly through generative models, has emerged as a pivotal area of research. Despite significant advancements, challenges such as diminished sharpness and quality in reconstructed images, learning inefficiencies due to mode collapse, and data loss during transmission persist. To address these issues, we propose a novel compression model that incorporates a denoising step with diffusion models, significantly enhancing image reconstruction fidelity by leveraging sub-information (e.g., edge and depth) from the latent space. Empirical experiments demonstrate that our model achieves superior or comparable results in terms of image quality and compression efficiency when measured against existing models. Notably, our model excels in scenarios of partial image loss or excessive noise by introducing an edge estimation network that preserves the integrity of reconstructed images, offering a robust solution to the current limitations of image compression.
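As a reading aid for the pipeline described above, here is one plausible wiring of an edge estimation network feeding the decoder. Every module name and layer size is hypothetical, invented for this sketch rather than taken from the paper.

```python
import torch
import torch.nn as nn

class EdgeEstimator(nn.Module):
    """Hypothetical edge head: predicts an edge map from the latent."""
    def __init__(self, ch: int = 64):
        super().__init__()
        self.head = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 1, 1), nn.Sigmoid())

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.head(z)

class EdgeConditionedDecoder(nn.Module):
    """Decoder that concatenates the estimated edge map with the
    (possibly noisy or partially lost) latent before reconstructing."""
    def __init__(self, ch: int = 64):
        super().__init__()
        self.edge = EdgeEstimator(ch)
        self.decode = nn.Sequential(
            nn.Conv2d(ch + 1, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 3, 3, padding=1))

    def forward(self, z: torch.Tensor):
        e = self.edge(z)  # edge sub-information recovered from the latent
        return self.decode(torch.cat([z, e], dim=1)), e

decoder = EdgeConditionedDecoder()
z = torch.randn(1, 64, 32, 32)  # toy latent from a hypothetical encoder
recon, edges = decoder(z)
print(recon.shape, edges.shape)  # (1, 3, 32, 32) and (1, 1, 32, 32)
```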
Citations: 0
Noise-aware Dynamic Image Denoising and Positron Range Correction for Rubidium-82 Cardiac PET Imaging via Self-supervision
Pub Date : 2024-09-17 DOI: arxiv-2409.11543
Huidong Xie, Liang Guo, Alexandre Velo, Zhao Liu, Qiong Liu, Xueqi Guo, Bo Zhou, Xiongchao Chen, Yu-Jung Tsai, Tianshun Miao, Menghua Xia, Yi-Hwa Liu, Ian S. Armstrong, Ge Wang, Richard E. Carson, Albert J. Sinusas, Chi Liu
Rb-82 is a radioactive isotope widely used for cardiac PET imaging. Despite the numerous benefits of 82-Rb, several factors limit its image quality and quantitative accuracy. First, the short half-life of 82-Rb results in noisy dynamic frames. A low signal-to-noise ratio would result in inaccurate and biased image quantification. Noisy dynamic frames also lead to highly noisy parametric images. The noise levels also vary substantially across dynamic frames due to radiotracer decay and the short half-life. Existing denoising methods are not applicable to this task due to the lack of paired training inputs/labels and their inability to generalize across varying noise levels. Second, 82-Rb emits high-energy positrons. Compared with other tracers such as 18-F, 82-Rb positrons travel a longer distance before annihilation, which negatively affects image spatial resolution. The goal of this study is to propose a self-supervised method for simultaneous (1) noise-aware dynamic image denoising and (2) positron range correction for 82-Rb cardiac PET imaging. Tested on a series of PET scans from a cohort of normal volunteers, the proposed method produced images with superior visual quality. To demonstrate the improvement in image quantification, we compared image-derived input functions (IDIFs) with arterial input functions (AIFs) from continuous arterial blood samples. The IDIF derived from the proposed method led to lower AUC differences, decreasing from 11.09% to 7.58% on average compared to the original dynamic frames. The proposed method also improved the quantification of myocardial blood flow (MBF), as validated against 15-O-water scans, with mean MBF differences decreasing from 0.43 to 0.09 compared to the original dynamic frames. We also conducted a generalizability experiment on 37 patient scans obtained from a different country using a different scanner.
Citations: 0
Enhanced segmentation of femoral bone metastasis in CT scans of patients using synthetic data generation with 3D diffusion models
Pub Date : 2024-09-17 DOI: arxiv-2409.11011
Emile Saillard, Aurélie Levillain, David Mitton, Jean-Baptiste Pialat, Cyrille Confavreux, Hélène Follet, Thomas Grenier
Purpose: Bone metastases have a major impact on the quality of life of patients, and they are diverse in terms of size and location, making their segmentation complex. Manual segmentation is time-consuming, and expert segmentations are subject to operator variability, which makes obtaining accurate and reproducible segmentations of bone metastases on CT scans a challenging yet important task. Materials and Methods: Deep learning methods tackle segmentation tasks efficiently but require large datasets along with expert manual segmentations to generalize to new images. We propose an automated data synthesis pipeline using 3D Denoising Diffusion Probabilistic Models (DDPM) to enhance the segmentation of femoral metastases from CT-scan volumes of patients. We used 29 existing lesions along with 26 healthy femurs to create new realistic synthetic metastatic images, and trained a DDPM to improve the diversity and realism of the simulated volumes. We also investigated operator variability in manual segmentation. Results: We created 5675 new volumes, then trained 3D U-Net segmentation models on real and synthetic data to compare segmentation performance, and we evaluated the performance of the models as a function of the amount of synthetic data used in training. Conclusion: Our results showed that segmentation models trained with synthetic data outperformed those trained on real volumes only, and that those models perform especially well when accounting for operator variability.
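The pipeline builds on standard Denoising Diffusion Probabilistic Models. As background, the core DDPM training step (Ho et al., 2020) looks like the sketch below, where `eps_model` stands in for whatever 3D U-Net predicts the injected noise; the schedule values are the common defaults, not settings reported by the authors.

```python
import torch
import torch.nn.functional as F

# Standard DDPM forward-noising schedule (common defaults).
T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alphas_cum = torch.cumprod(1.0 - betas, dim=0)

def ddpm_loss(eps_model, x0: torch.Tensor) -> torch.Tensor:
    """One training step: corrupt clean volumes x0 to a random timestep
    t and regress the injected Gaussian noise."""
    t = torch.randint(0, T, (x0.shape[0],))
    eps = torch.randn_like(x0)
    a = alphas_cum[t].view(-1, *([1] * (x0.dim() - 1)))
    x_t = a.sqrt() * x0 + (1.0 - a).sqrt() * eps
    return F.mse_loss(eps_model(x_t, t), eps)

toy_model = lambda x, t: torch.zeros_like(x)  # placeholder noise predictor
print(ddpm_loss(toy_model, torch.randn(2, 1, 8, 8, 8)))  # toy 3D volumes
```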
Citations: 0
NCT-CRC-HE: Not All Histopathological Datasets Are Equally Useful
Pub Date : 2024-09-17 DOI: arxiv-2409.11546
Andrey Ignatov, Grigory Malivenko
Numerous deep learning-based solutions have been proposed for histopathological image analysis over the past years. While they usually demonstrate exceptionally high accuracy, one key question is whether their precision might be affected by low-level image properties not related to histopathology but caused by microscopy image handling and pre-processing. In this paper, we analyze the popular NCT-CRC-HE-100K colorectal cancer dataset used in numerous prior works and show that both this dataset and the obtained results may be affected by data-specific biases. The most prominent revealed dataset issues are inappropriate color normalization, severe JPEG artifacts inconsistent between different classes, and completely corrupted tissue samples resulting from incorrect image dynamic range handling. We show that even the simplest model using only 3 features per image (red, green and blue color intensities) can demonstrate over 50% accuracy on this 9-class dataset, while using a color histogram that does not explicitly capture cell morphology features yields over 82% accuracy. Moreover, we show that a basic EfficientNet-B0 ImageNet-pretrained model can achieve over 97.7% accuracy on this dataset, outperforming all previously proposed solutions developed for this task, including dedicated foundation histopathological models and large cell morphology-aware neural networks. The NCT-CRC-HE dataset is publicly available and can be freely used to replicate the presented results. The codes and pre-trained models used in this paper are available at https://github.com/gmalivenko/NCT-CRC-HE-experiments
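The 3-feature claim above is easy to picture in code. A minimal sketch with synthetic stand-in data: in practice each row would hold the mean red, green, and blue intensity of one tile, and per the paper such a model already exceeds 50% accuracy on the real 9-class data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Stand-in data: real rows would be the mean R, G, B intensity of each
# NCT-CRC-HE tile, with its 9-class tissue label.
rng = np.random.default_rng(0)
X = rng.uniform(0, 255, size=(1000, 3))  # 3 features: mean R, G, B
y = rng.integers(0, 9, size=1000)        # 9 tissue classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
# Chance level here on random data; >50% on the real dataset per the
# paper, which is the point: color statistics alone leak class information.
print(f"accuracy: {clf.score(X_te, y_te):.2f}")
```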
Citations: 0
FGR-Net: Interpretable fundus image gradeability classification based on deep reconstruction learning
Pub Date : 2024-09-16 DOI: arxiv-2409.10246
Saif Khalid, Hatem A. Rashwan, Saddam Abdulwahab, Mohamed Abdel-Nasser, Facundo Manuel Quiroga, Domenec Puig
The performance of diagnostic Computer-Aided Design (CAD) systems for retinal diseases depends on the quality of the retinal images being screened. Thus, many studies have been developed to evaluate and assess the quality of such retinal images. However, most of them did not investigate the relationship between the accuracy of the developed models and the quality of the visualization of interpretability methods for distinguishing between gradable and non-gradable retinal images. Consequently, this paper presents a novel framework called FGR-Net to automatically assess and interpret underlying fundus image quality by merging an autoencoder network with a classifier network. The FGR-Net model also provides an interpretable quality assessment through visualizations. In particular, FGR-Net uses a deep autoencoder to reconstruct the input image in order to extract the visual characteristics of the input fundus images based on self-supervised learning. The features extracted by the autoencoder are then fed into a deep classifier network to distinguish between gradable and ungradable fundus images. FGR-Net is evaluated with different interpretability methods, which indicates that the autoencoder is a key factor in forcing the classifier to focus on the relevant structures of the fundus images, such as the fovea, optic disk, and prominent blood vessels. Additionally, the interpretability methods can provide visual feedback for ophthalmologists to understand how our model evaluates the quality of fundus images. The experimental results showed the superiority of FGR-Net over state-of-the-art quality assessment methods, with an accuracy of 89% and an F1-score of 87%.
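The autoencoder-plus-classifier pattern described above can be sketched as follows; the layer sizes and two-class head are illustrative assumptions, not the FGR-Net design. Training would combine a reconstruction loss on the first output with a cross-entropy loss on the second.

```python
import torch
import torch.nn as nn

class GradabilityNet(nn.Module):
    """Sketch: a convolutional autoencoder whose latent also feeds a
    gradable/ungradable classifier, in the spirit of FGR-Net."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 2, stride=2))
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 2))

    def forward(self, x: torch.Tensor):
        z = self.encoder(x)  # shared latent for both heads
        return self.decoder(z), self.classifier(z)

net = GradabilityNet()
recon, logits = net(torch.randn(1, 3, 64, 64))  # toy fundus image
print(recon.shape, logits.shape)  # reconstruction + 2-class logits
```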
Citations: 0
Depth from Coupled Optical Differentiation
Pub Date : 2024-09-16 DOI: arxiv-2409.10725
Junjie Luo, Yuxuan Liu, Emma Alexander, Qi Guo
We propose depth from coupled optical differentiation, a low-computation passive-lighting 3D sensing mechanism. It is based on our discovery that per-pixel object distance can be rigorously determined by a coupled pair of optical derivatives of a defocused image using a simple, closed-form relationship. Unlike previous depth-from-defocus (DfD) methods that leverage spatial derivatives of the image to estimate scene depths, the proposed mechanism's use of only optical derivatives makes it significantly more robust to noise. Furthermore, unlike many previous DfD algorithms with requirements on the aperture code, this relationship is proved to be universal to a broad range of aperture codes. We build the first 3D sensor based on depth from coupled optical differentiation. Its optical assembly includes a deformable lens and a motorized iris, which enables dynamic adjustments to the optical power and aperture radius. The sensor captures two pairs of images: one pair with a differential change of optical power and the other with a differential change of aperture scale. From the four images, a depth and confidence map can be generated with only 36 floating point operations per output pixel (FLOPOP), more than ten times lower than the previous lowest passive-lighting depth-sensing solution to our knowledge. Additionally, the depth map generated by the proposed sensor demonstrates more than twice the working range of previous DfD methods while using significantly lower computation.
Citations: 0
Data-Centric Strategies for Overcoming PET/CT Heterogeneity: Insights from the AutoPET III Lesion Segmentation Challenge
Pub Date : 2024-09-16 DOI: arxiv-2409.10120
Balint Kovacs, Shuhan Xiao, Maximilian Rokuss, Constantin Ulrich, Fabian Isensee, Klaus H. Maier-Hein
The third autoPET challenge introduced a new data-centric task this year, shifting the focus from model development to improving metastatic lesion segmentation on PET/CT images through data quality and handling strategies. In response, we developed targeted methods to enhance segmentation performance tailored to the characteristics of PET/CT imaging. Our approach encompasses two key elements. First, to address potential alignment errors between CT and PET modalities as well as the prevalence of punctate lesions, we modified the baseline data augmentation scheme and extended it with misalignment augmentation. This adaptation aims to improve segmentation accuracy, particularly for tiny metastatic lesions. Second, to tackle the variability in image dimensions significantly affecting the prediction time, we implemented a dynamic ensembling and test-time augmentation (TTA) strategy. This method optimizes the use of ensembling and TTA within a 5-minute prediction time limit, effectively leveraging the generalization potential for both small and large images. Both of our solutions are designed to be robust across different tracers and institutional settings, offering a general, yet imaging-specific approach to the multi-tracer and multi-institutional challenges of the competition. We made the challenge repository with our modifications publicly available at https://github.com/MIC-DKFZ/miccai2024_autopet3_datacentric.
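The dynamic ensembling/TTA budgeting can be pictured with a small scheduler like the one below. The cost model and the member/flip counts are invented for illustration; only the 5-minute budget comes from the text above.

```python
import itertools

def plan_inference(n_voxels: int, budget_s: float = 300.0,
                   cost_per_voxel_s: float = 2e-7):
    """Hypothetical scheduler in the spirit of the dynamic strategy
    above: fit as many ensemble-member x mirror-TTA passes as the
    volume size allows into the 5-minute budget, preferring to keep
    ensemble members over TTA flips."""
    one_pass_s = n_voxels * cost_per_voxel_s
    max_passes = max(1, int(budget_s / one_pass_s))
    members, flips = 5, 8  # illustrative maxima, not challenge settings
    for m, f in itertools.product(range(members, 0, -1),
                                  range(flips, 0, -1)):
        if m * f <= max_passes:
            return m, f
    return 1, 1

print(plan_inference(400 * 400 * 400))  # large volume -> fewer passes
print(plan_inference(128 * 128 * 128))  # small volume -> full ensemble + TTA
```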
Citations: 0
Cross-modality image synthesis from TOF-MRA to CTA using diffusion-based models
Pub Date : 2024-09-16 DOI: arxiv-2409.10089
Alexander Koch, Orhun Utku Aydin, Adam Hilbert, Jana Rieger, Satoru Tanioka, Fujimaro Ishida, Dietmar Frey
Cerebrovascular disease often requires multiple imaging modalities for accurate diagnosis, treatment, and monitoring. Computed Tomography Angiography (CTA) and Time-of-Flight Magnetic Resonance Angiography (TOF-MRA) are two common non-invasive angiography techniques, each with distinct strengths in accessibility, safety, and diagnostic accuracy. While CTA is more widely used in acute stroke due to its faster acquisition times and higher diagnostic accuracy, TOF-MRA is preferred for its safety, as it avoids radiation exposure and contrast agent-related health risks. Despite the predominant role of CTA in clinical workflows, there is a scarcity of open-source CTA data, limiting the research and development of AI models for tasks such as large vessel occlusion detection and aneurysm segmentation. This study explores diffusion-based image-to-image translation models to generate synthetic CTA images from TOF-MRA input. We demonstrate the modality conversion from TOF-MRA to CTA and show that diffusion models outperform a traditional U-Net-based approach. Our work compares different state-of-the-art diffusion architectures and samplers, offering recommendations for optimal model performance in this cross-modality translation task.
Citations: 0
Self-Supervised Elimination of Non-Independent Noise in Hyperspectral Imaging
Pub Date : 2024-09-16 DOI: arxiv-2409.09910
Guangrui Ding, Chang Liu, Jiaze Yin, Xinyan Teng, Yuying Tan, Hongjian He, Haonan Lin, Lei Tian, Ji-Xin Cheng
Hyperspectral imaging has been widely used for spectral and spatial identification of target molecules, yet it is often contaminated by sophisticated noise. Current denoising methods generally rely on independent and identically distributed noise statistics, showing degraded performance for non-independent noise removal. Here, we demonstrate Self-supervised PErmutation Noise2noise Denoising (SPEND), a deep learning denoising architecture tailor-made for removing non-independent noise from a single hyperspectral image stack. We utilize hyperspectral stimulated Raman scattering and mid-infrared photothermal microscopy as the testbeds, where the noise is spatially correlated and spectrally varied. Based on single hyperspectral images, SPEND permutes odd and even spectral frames to generate two stacks with identical noise properties, and uses the pairs for efficient self-supervised noise-to-noise training. SPEND achieved an 8-fold signal-to-noise improvement without having access to ground truth data. SPEND enabled accurate mapping of low-concentration biomolecules in both fingerprint and silent regions, demonstrating its robustness in sophisticated cellular environments.
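The odd/even permutation at the heart of SPEND is simple to sketch. A minimal version, assuming the stack is ordered along the spectral axis and that the two substacks are paired frame-by-frame (the pairing detail is this sketch's assumption):

```python
import numpy as np

def odd_even_pairs(stack: np.ndarray):
    """Split a hyperspectral stack of shape (frames, H, W) into even-
    and odd-frame substacks; pairing them gives two views with the
    same noise properties for noise-to-noise training."""
    even, odd = stack[0::2], stack[1::2]
    n = min(len(even), len(odd))
    return even[:n], odd[:n]

stack = np.random.rand(100, 64, 64)  # toy hyperspectral cube
inputs, targets = odd_even_pairs(stack)
print(inputs.shape, targets.shape)  # (50, 64, 64) twice
# A denoiser trained to map inputs -> targets (and vice versa) never
# sees clean ground truth, as in Noise2Noise.
```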
Citations: 0