
arXiv - EE - Image and Video Processing: Latest Publications

PSFHS Challenge Report: Pubic Symphysis and Fetal Head Segmentation from Intrapartum Ultrasound Images
Pub Date : 2024-09-17 DOI: arxiv-2409.10980
Jieyun Bai, Zihao Zhou, Zhanhong Ou, Gregor Koehler, Raphael Stock, Klaus Maier-Hein, Marawan Elbatel, Robert Martí, Xiaomeng Li, Yaoyang Qiu, Panjie Gou, Gongping Chen, Lei Zhao, Jianxun Zhang, Yu Dai, Fangyijie Wang, Guénolé Silvestre, Kathleen Curran, Hongkun Sun, Jing Xu, Pengzhou Cai, Lu Jiang, Libin Lan, Dong Ni, Mei Zhong, Gaowen Chen, Víctor M. Campello, Yaosheng Lu, Karim Lekadir
Segmentation of the fetal and maternal structures, particularly intrapartum ultrasound imaging as advocated by the International Society of Ultrasound in Obstetrics and Gynecology (ISUOG) for monitoring labor progression, is a crucial first step for quantitative diagnosis and clinical decision-making. This requires specialized analysis by obstetrics professionals, in a task that i) is highly time- and cost-consuming and ii) often yields inconsistent results. The utility of automatic segmentation algorithms for biometry has been proven, though existing results remain suboptimal. To push forward advancements in this area, the Grand Challenge on Pubic Symphysis-Fetal Head Segmentation (PSFHS) was held alongside the 26th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2023). This challenge aimed to enhance the development of automatic segmentation algorithms at an international scale, providing the largest dataset to date with 5,101 intrapartum ultrasound images collected from two ultrasound machines across three hospitals from two institutions. The scientific community's enthusiastic participation led to the selection of the top 8 out of 179 entries from 193 registrants in the initial phase to proceed to the competition's second stage. These algorithms have elevated the state-of-the-art in automatic PSFHS from intrapartum ultrasound images. A thorough analysis of the results pinpointed ongoing challenges in the field and outlined recommendations for future work. The top solutions and the complete dataset remain publicly available, fostering further advancements in automatic segmentation and biometry for intrapartum ultrasound imaging.
Citations: 0
TTT-Unet: Enhancing U-Net with Test-Time Training Layers for biomedical image segmentation
Pub Date : 2024-09-17 DOI: arxiv-2409.11299
Rong Zhou, Zhengqing Yuan, Zhiling Yan, Weixiang Sun, Kai Zhang, Yiwei Li, Yanfang Ye, Xiang Li, Lifang He, Lichao Sun
Biomedical image segmentation is crucial for accurately diagnosing and analyzing various diseases. However, Convolutional Neural Networks (CNNs) and Transformers, the most commonly used architectures for this task, struggle to effectively capture long-range dependencies due to the inherent locality of CNNs and the computational complexity of Transformers. To address this limitation, we introduce TTT-Unet, a novel framework that integrates Test-Time Training (TTT) layers into the traditional U-Net architecture for biomedical image segmentation. TTT-Unet dynamically adjusts model parameters during the testing time, enhancing the model's ability to capture both local and long-range features. We evaluate TTT-Unet on multiple medical imaging datasets, including 3D abdominal organ segmentation in CT and MR images, instrument segmentation in endoscopy images, and cell segmentation in microscopy images. The results demonstrate that TTT-Unet consistently outperforms state-of-the-art CNN-based and Transformer-based segmentation models across all tasks. The code is available at https://github.com/rongzhou7/TTT-Unet.
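The abstract does not detail the TTT layer itself, so the following is only a minimal, hedged sketch of the general idea of adapting a block's parameters at test time inside a U-Net-style encoder; the recon_head auxiliary, the reconstruction loss, and the step count are illustrative assumptions rather than the paper's design.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TTTBlock(nn.Module):
    """One encoder block whose parameters can be adapted on a test sample."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)
        # auxiliary head used only for the self-supervised test-time loss
        self.recon_head = nn.Conv2d(channels, channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return F.relu(self.conv(x))

    def test_time_update(self, x: torch.Tensor, steps: int = 3, lr: float = 1e-4):
        """Run a few gradient steps on a single test input before predicting."""
        opt = torch.optim.SGD(self.parameters(), lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            feats = self.forward(x)
            # self-supervised objective: reconstruct the block's own input
            loss = F.mse_loss(self.recon_head(feats), x)
            loss.backward()
            opt.step()

# usage: adapt on the test sample's features, then run the segmentation pass
block = TTTBlock(channels=16)
x = torch.randn(1, 16, 64, 64)   # an encoder feature map (illustrative shape)
block.test_time_update(x)        # parameters adjusted at test time
adapted = block(x)
```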
Citations: 0
Temporal As a Plugin: Unsupervised Video Denoising with Pre-Trained Image Denoisers
Pub Date : 2024-09-17 DOI: arxiv-2409.11256
Zixuan Fu, Lanqing Guo, Chong Wang, Yufei Wang, Zhihao Li, Bihan Wen
Recent advancements in deep learning have shown impressive results in image and video denoising, leveraging extensive pairs of noisy and noise-free data for supervision. However, the challenge of acquiring paired videos for dynamic scenes hampers the practical deployment of deep video denoising techniques. In contrast, this obstacle is less pronounced in image denoising, where paired data is more readily available. Thus, a well-trained image denoiser could serve as a reliable spatial prior for video denoising. In this paper, we propose a novel unsupervised video denoising framework, named "Temporal As a Plugin" (TAP), which integrates tunable temporal modules into a pre-trained image denoiser. By incorporating temporal modules, our method can harness temporal information across noisy frames, complementing its power of spatial denoising. Furthermore, we introduce a progressive fine-tuning strategy that refines each temporal module using the generated pseudo clean video frames, progressively enhancing the network's denoising performance. Compared to other unsupervised video denoising methods, our framework demonstrates superior performance on both sRGB and raw video denoising datasets.
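As a rough illustration of the plugin idea (frozen image denoiser as spatial prior, small tunable temporal module, pseudo-clean targets), here is a hedged sketch; the fusion architecture, window size, and loss are assumptions, not the paper's exact recipe.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TemporalFusion(nn.Module):
    """Fuses a short window of frames; the only trainable part in this sketch."""
    def __init__(self, frames: int = 3, channels: int = 3):
        super().__init__()
        self.fuse = nn.Conv3d(channels, channels, kernel_size=(frames, 3, 3),
                              padding=(0, 1, 1))

    def forward(self, window: torch.Tensor) -> torch.Tensor:
        # window: (B, C, T, H, W) with T == frames -> fused frame (B, C, H, W)
        return self.fuse(window).squeeze(2)

def finetune_step(image_denoiser, temporal, noisy_window, center_idx, opt):
    with torch.no_grad():  # the frozen spatial prior supplies the target
        pseudo_clean = image_denoiser(noisy_window[:, :, center_idx])
    denoised = image_denoiser(temporal(noisy_window))
    loss = F.l1_loss(denoised, pseudo_clean)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# usage with any frozen single-image denoiser `net`:
#   for p in net.parameters(): p.requires_grad_(False)
#   temporal = TemporalFusion()
#   opt = torch.optim.Adam(temporal.parameters(), lr=1e-4)
#   finetune_step(net, temporal, noisy_window, center_idx=1, opt=opt)
```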
Citations: 0
Lite-FBCN: Lightweight Fast Bilinear Convolutional Network for Brain Disease Classification from MRI Image
Pub Date : 2024-09-17 DOI: arxiv-2409.10952
Dewinda Julianensi Rumala, Reza Fuad Rachmadi, Anggraini Dwi Sensusiati, I Ketut Eddy Purnama
Achieving high accuracy with computational efficiency in brain disease classification from Magnetic Resonance Imaging (MRI) scans is challenging, particularly when both coarse and fine-grained distinctions are crucial. Current deep learning methods often struggle to balance accuracy with computational demands. We propose Lite-FBCN, a novel Lightweight Fast Bilinear Convolutional Network designed to address this issue. Unlike traditional dual-network bilinear models, Lite-FBCN utilizes a single-network architecture, significantly reducing computational load. Lite-FBCN leverages lightweight, pre-trained CNNs fine-tuned to extract relevant features and incorporates a channel reducer layer before bilinear pooling, minimizing feature map dimensionality and resulting in a compact bilinear vector. Extensive evaluations on cross-validation and hold-out data demonstrate that Lite-FBCN not only surpasses baseline CNNs but also outperforms existing bilinear models. Lite-FBCN with MobileNetV1 attains 98.10% accuracy in cross-validation and 69.37% on hold-out data (a 3% improvement over the baseline). UMAP visualizations further confirm its effectiveness in distinguishing closely related brain disease classes. Moreover, its optimal trade-off between performance and computational efficiency positions Lite-FBCN as a promising solution for enhancing diagnostic capabilities in resource-constrained and/or real-time clinical environments.
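The core single-network bilinear pooling with a channel reducer can be sketched as follows; the backbone, reduced channel count, normalization, and classifier head are placeholder assumptions rather than the paper's exact configuration.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LiteBilinearHead(nn.Module):
    """Channel reducer + self-bilinear pooling over a single backbone's features."""
    def __init__(self, in_channels: int, reduced: int = 64, num_classes: int = 4):
        super().__init__()
        self.reducer = nn.Conv2d(in_channels, reduced, kernel_size=1)  # channel reducer
        self.fc = nn.Linear(reduced * reduced, num_classes)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        b, _, h, w = feats.shape
        f = self.reducer(feats).flatten(2)                    # (B, reduced, H*W)
        # bilinear (outer-product) pooling of the single feature map with itself
        bilinear = torch.bmm(f, f.transpose(1, 2)) / (h * w)  # (B, reduced, reduced)
        v = bilinear.flatten(1)
        # signed square-root and L2 normalisation, standard for bilinear vectors
        v = torch.sign(v) * torch.sqrt(torch.abs(v) + 1e-8)
        v = F.normalize(v)
        return self.fc(v)

# usage: feats from, e.g., the last conv stage of a lightweight backbone
head = LiteBilinearHead(in_channels=1024)
logits = head(torch.randn(2, 1024, 7, 7))
```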
Citations: 0
CUNSB-RFIE: Context-aware Unpaired Neural Schrödinger Bridge in Retinal Fundus Image Enhancement
Pub Date : 2024-09-17 DOI: arxiv-2409.10966
Xuanzhao Dong, Vamsi Krishna Vasa, Wenhui Zhu, Peijie Qiu, Xiwen Chen, Yi Su, Yujian Xiong, Zhangsihao Yang, Yanxi Chen, Yalin Wang
Retinal fundus photography is significant in diagnosing and monitoring retinal diseases. However, systemic imperfections and operator/patient-related factors can hinder the acquisition of high-quality retinal images. Previous efforts in retinal image enhancement primarily relied on GANs, which are limited by the trade-off between training stability and output diversity. In contrast, the Schrödinger Bridge (SB) offers a more stable solution by utilizing Optimal Transport (OT) theory to model a stochastic differential equation (SDE) between two arbitrary distributions. This allows SB to effectively transform low-quality retinal images into their high-quality counterparts. In this work, we leverage the SB framework to propose an image-to-image translation pipeline for retinal image enhancement. Additionally, previous methods often fail to capture fine structural details, such as blood vessels. To address this, we enhance our pipeline by introducing Dynamic Snake Convolution, whose tortuous receptive field can better preserve tubular structures. We name the resulting retinal fundus image enhancement framework the Context-aware Unpaired Neural Schrödinger Bridge (CUNSB-RFIE). To the best of our knowledge, this is the first endeavor to use the SB approach for retinal image enhancement. Experimental results on a large-scale dataset demonstrate the advantage of the proposed method compared to several state-of-the-art supervised and unsupervised methods in terms of image quality and performance on downstream tasks. The code is available at https://github.com/Retinal-Research/CUNSB-RFIE.
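For intuition only, the sketch below shows generic Euler-Maruyama integration of an SDE with a learned drift, which is how an SB-style model could push a low-quality image toward the high-quality distribution at inference; the drift network, diffusion coefficient, and step count are assumptions, and the SB training procedure itself is not shown.
```python
import torch

@torch.no_grad()
def euler_maruyama(drift_net, x0: torch.Tensor, steps: int = 50, g: float = 0.1):
    """Integrate dx = f_theta(x, t) dt + g dW from t=0 to t=1."""
    x, dt = x0.clone(), 1.0 / steps
    for i in range(steps):
        t = torch.full((x.shape[0],), i * dt, device=x.device)
        x = x + drift_net(x, t) * dt + g * (dt ** 0.5) * torch.randn_like(x)
    return x

# usage: `drift_net(x, t)` is any network returning a tensor shaped like x;
# x0 is a batch of low-quality fundus images scaled to [0, 1].
```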
Citations: 0
Towards Effective User Attribution for Latent Diffusion Models via Watermark-Informed Blending
Pub Date : 2024-09-17 DOI: arxiv-2409.10958
Yongyang Pan, Xiaohong Liu, Siqi Luo, Yi Xin, Xiao Guo, Xiaoming Liu, Xiongkuo Min, Guangtao Zhai
Rapid advancements in multimodal large language models have enabled the creation of hyper-realistic images from textual descriptions. However, these advancements also raise significant concerns about unauthorized use, which hinders their broader distribution. Traditional watermarking methods often require complex integration or degrade image quality. To address these challenges, we introduce a novel framework Towards Effective user Attribution for latent diffusion models via Watermark-Informed Blending (TEAWIB). TEAWIB incorporates a unique ready-to-use configuration approach that allows seamless integration of user-specific watermarks into generative models. This approach ensures that each user can directly apply a pre-configured set of parameters to the model without altering the original model parameters or compromising image quality. Additionally, noise and augmentation operations are embedded at the pixel level to further secure and stabilize watermarked images. Extensive experiments validate the effectiveness of TEAWIB, showcasing the state-of-the-art performance in perceptual quality and attribution accuracy.
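One way to read "apply a pre-configured set of parameters without altering the original model parameters" is per-user weight offsets blended on top of frozen base weights at load time; the sketch below illustrates that reading only and is not the paper's actual watermark-informed blending mechanism.
```python
import copy
import torch

def apply_user_config(base_model: torch.nn.Module, user_offsets: dict) -> torch.nn.Module:
    """Return a per-user copy of the model; the base model stays untouched."""
    model = copy.deepcopy(base_model)
    state = model.state_dict()
    for name, delta in user_offsets.items():
        state[name] = state[name] + delta   # blend the user-specific watermark offset
    model.load_state_dict(state)
    return model

# usage: user_offsets maps parameter names (e.g. selected conv weights) to
# small pre-computed tensors carrying that user's watermark signal.
```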
Citations: 0
SkinMamba: A Precision Skin Lesion Segmentation Architecture with Cross-Scale Global State Modeling and Frequency Boundary Guidance
Pub Date : 2024-09-17 DOI: arxiv-2409.10890
Shun Zou, Mingya Zhang, Bingjian Fan, Zhengyi Zhou, Xiuguo Zou
Skin lesion segmentation is a crucial method for identifying early skin cancer. In recent years, both convolutional neural network (CNN) and Transformer-based methods have been widely applied. Moreover, combining CNN and Transformer effectively integrates global and local relationships, but remains limited by the quadratic complexity of Transformer. To address this, we propose a hybrid architecture based on Mamba and CNN, called SkinMamba. It maintains linear complexity while offering powerful long-range dependency modeling and local feature extraction capabilities. Specifically, we introduce the Scale Residual State Space Block (SRSSB), which captures global contextual relationships and cross-scale information exchange at a macro level, enabling expert communication in a global state. This effectively addresses challenges in skin lesion segmentation related to varying lesion sizes and inconspicuous target areas. Additionally, to mitigate boundary blurring and information loss during model downsampling, we introduce the Frequency Boundary Guided Module (FBGM), providing sufficient boundary priors to guide precise boundary segmentation, while also using the retained information to assist the decoder in the decoding process. Finally, we conducted comparative and ablation experiments on two public lesion segmentation datasets (ISIC2017 and ISIC2018), and the results demonstrate the strong competitiveness of SkinMamba in skin lesion segmentation tasks. The code is available at https://github.com/zs1314/SkinMamba.
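A frequency-domain boundary prior in the spirit of the FBGM can be approximated with a simple FFT high-pass filter, as sketched below; the cutoff and the way the prior is injected into the decoder are assumptions, and the actual module design is not reproduced here.
```python
import torch

def high_frequency_prior(x: torch.Tensor, cutoff: int = 8) -> torch.Tensor:
    """x: (B, C, H, W) image or feature map; returns its high-frequency part."""
    spec = torch.fft.fftshift(torch.fft.fft2(x), dim=(-2, -1))
    _, _, h, w = x.shape
    cy, cx = h // 2, w // 2
    mask = torch.ones(h, w, device=x.device)                     # pass high frequencies
    mask[cy - cutoff:cy + cutoff, cx - cutoff:cx + cutoff] = 0   # block the low ones
    filtered = spec * mask
    return torch.fft.ifft2(torch.fft.ifftshift(filtered, dim=(-2, -1))).real

# usage: concatenate the prior to decoder features as extra boundary-guidance channels
# prior = high_frequency_prior(images)   # emphasises lesion edges
```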
Citations: 0
Retinal Vessel Segmentation with Deep Graph and Capsule Reasoning
Pub Date : 2024-09-17 DOI: arxiv-2409.11508
Xinxu Wei, Xi Lin, Haiyun Liu, Shixuan Zhao, Yongjie Li
Effective retinal vessel segmentation requires a sophisticated integration of global contextual awareness and local vessel continuity. To address this challenge, we propose the Graph Capsule Convolution Network (GCC-UNet), which merges capsule convolutions with CNNs to capture both local and global features. The Graph Capsule Convolution operator is specifically designed to enhance the representation of global context, while the Selective Graph Attention Fusion module ensures seamless integration of local and global information. To further improve vessel continuity, we introduce the Bottleneck Graph Attention module, which incorporates Channel-wise and Spatial Graph Attention mechanisms. The Multi-Scale Graph Fusion module adeptly combines features from various scales. Our approach has been rigorously validated through experiments on widely used public datasets, with ablation studies confirming the efficacy of each component. Comparative results highlight GCC-UNet's superior performance over existing methods, setting a new benchmark in retinal vessel segmentation. Notably, this work represents the first integration of vanilla, graph, and capsule convolutional techniques in the domain of medical image segmentation.
Citations: 0
Gradient-free Post-hoc Explainability Using Distillation Aided Learnable Approach
Pub Date : 2024-09-17 DOI: arxiv-2409.11123
Debarpan Bhattacharya, Amir H. Poorjam, Deepak Mittal, Sriram Ganapathy
The recent advancements in artificial intelligence (AI), with the release of several large models having only query access, make a strong case for explainability of deep models in a post-hoc, gradient-free manner. In this paper, we propose a framework, named distillation aided explainability (DAX), that attempts to generate a saliency-based explanation in a model-agnostic, gradient-free application. The DAX approach poses the problem of explanation in a learnable setting with a mask generation network and a distillation network. The mask generation network learns to generate the multiplier mask that finds the salient regions of the input, while the student distillation network aims to approximate the local behavior of the black-box model. We propose a joint optimization of the two networks in the DAX framework using the locally perturbed input samples, with the targets derived from input-output access to the black-box model. We extensively evaluate DAX across different modalities (image and audio), in a classification setting, using a diverse set of evaluations (intersection over union with ground truth, deletion-based, and subjective human-evaluation-based measures) and benchmark it with respect to 9 different methods. In these evaluations, the DAX significantly outperforms the existing approaches on all modalities and evaluation metrics.
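A hedged sketch of the described setup, in which a mask generator and a student are jointly optimized from black-box queries on locally perturbed inputs, is given below; the architectures, perturbation scheme, and loss weights are illustrative assumptions.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskGenerator(nn.Module):
    def __init__(self, channels: int = 3):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(channels, 8, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(8, 1, 3, padding=1), nn.Sigmoid())

    def forward(self, x):          # saliency mask in [0, 1]
        return self.net(x)

def dax_step(black_box, masker, student, x, opt, n_perturb=8, sigma=0.1):
    perturbed = x.repeat(n_perturb, 1, 1, 1)
    perturbed = perturbed + sigma * torch.randn_like(perturbed)  # local perturbations
    with torch.no_grad():          # query-only access to the black-box model
        targets = black_box(perturbed)
    mask = masker(x).repeat(n_perturb, 1, 1, 1)
    student_out = student(perturbed * mask)       # student sees only salient regions
    loss = F.kl_div(F.log_softmax(student_out, dim=1),
                    F.softmax(targets, dim=1), reduction="batchmean")
    loss = loss + 1e-3 * mask.mean()              # sparsity prior on the mask
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# usage:
#   opt = torch.optim.Adam(list(masker.parameters()) + list(student.parameters()), lr=1e-4)
```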
Citations: 0
HoloTile RGB: Ultra-fast, Speckle-Free RGB Computer Generated Holography
Pub Date : 2024-09-17 DOI: arxiv-2409.11049
Andreas Erik Gejl Madsen, Jesper Glückstad
We demonstrate the first use of the HoloTile Computer Generated Holography (CGH) modality on multicolor targets. Taking advantage of the sub-hologram tiling and Point Spread Function (PSF) shaping of HoloTile allows for the reconstruction of high-fidelity, pseudo-digital RGB images, with well-defined output pixels, without the need for temporal averaging. For each wavelength, the target channels are scaled appropriately, using the same output pixel size. We employ a Stochastic Gradient Descent (SGD) hologram generation algorithm for each wavelength, and display them sequentially on a HoloEye GAEA 2.1 Spatial Light Modulator (SLM) in Color Field Sequential (CFS) phase modulation mode. As such, we get full 8-bit phase modulation at 60Hz for each wavelength. The reconstructions are projected onto a camera sensor where each RGB image is captured at once.
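A minimal sketch of per-wavelength SGD phase-hologram optimization is shown below; the plain-FFT propagation model, loss, and optimizer settings are simplifying assumptions, and HoloTile's sub-hologram tiling and PSF shaping are not reproduced.
```python
import math
import torch

def sgd_hologram(target: torch.Tensor, iters: int = 200, lr: float = 0.1):
    """Optimise an SLM phase pattern so the far-field intensity matches `target`."""
    phase = (2 * math.pi * torch.rand_like(target)).requires_grad_(True)
    opt = torch.optim.Adam([phase], lr=lr)
    for _ in range(iters):
        field = torch.exp(1j * phase)                      # unit-amplitude SLM field
        recon = torch.fft.fftshift(torch.fft.fft2(field))  # simple far-field model
        intensity = recon.abs() ** 2
        loss = torch.nn.functional.mse_loss(intensity / intensity.mean(),
                                            target / target.mean())
        opt.zero_grad(); loss.backward(); opt.step()
    return phase.detach()

# usage: one phase pattern per colour channel, displayed sequentially on the SLM
# phase_r, phase_g, phase_b = (sgd_hologram(c) for c in target_rgb)
```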
Citations: 0