
Journal of Imaging: Latest Publications

Comparison of Visual and Quantra Software Mammographic Density Assessment According to BI-RADS® in 2D and 3D Images.
IF 2.7 | Q3 IMAGING SCIENCE & PHOTOGRAPHIC TECHNOLOGY | Pub Date: 2024-09-23 | DOI: 10.3390/jimaging10090238
Francesca Morciano, Cristina Marcazzan, Rossella Rella, Oscar Tommasini, Marco Conti, Paolo Belli, Andrea Spagnolo, Andrea Quaglia, Stefano Tambalo, Andreea Georgiana Trisca, Claudia Rossati, Francesca Fornasa, Giovanna Romanucci

Mammographic density (MD) assessment is subject to inter- and intra-observer variability. An automated method, such as Quantra software, could be a useful tool for an objective and reproducible MD assessment. Our purpose was to evaluate the performance of Quantra software in assessing MD, according to BI-RADS® Atlas Fifth Edition recommendations, verifying the degree of agreement with the gold standard, given by the consensus of two breast radiologists. A total of 5009 screening examinations were evaluated by two radiologists and analysed by Quantra software to assess MD. The agreement between the three assigned values was expressed as intraclass correlation coefficients (ICCs). The agreement between the software and the two readers (R1 and R2) was moderate with ICC values of 0.725 and 0.713, respectively. A better agreement was demonstrated between the software's assessment and the average score of the values assigned by the two radiologists, with an index of 0.793, which reflects a good correlation. Quantra software appears a promising tool in supporting radiologists in the MD assessment and could be part of a personalised screening protocol soon. However, some fine-tuning is needed to improve its accuracy, reduce its tendency to overestimate, and ensure it excludes high-density structures from its assessment.
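
The study summarises reader-software agreement with intraclass correlation coefficients. As a rough illustration of how such an agreement index is computed, the Python sketch below implements the ICC(2,1) form (two-way random effects, absolute agreement, single rater) with NumPy on a small set of synthetic BI-RADS-style scores; the abstract does not state which ICC variant was used, so that choice and the toy data are assumptions.

```python
import numpy as np

def icc_2_1(ratings: np.ndarray) -> float:
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    ratings has shape (n_exams, k_raters)."""
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)   # per-exam means
    col_means = ratings.mean(axis=0)   # per-rater means
    ss_rows = k * ((row_means - grand) ** 2).sum()
    ss_cols = n * ((col_means - grand) ** 2).sum()
    ss_total = ((ratings - grand) ** 2).sum()
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

# Toy data: 8 exams scored 1-4 (BI-RADS a-d) by reader R1, reader R2, and the software.
scores = np.array([
    [2, 2, 3],
    [1, 1, 1],
    [3, 3, 3],
    [4, 3, 4],
    [2, 2, 2],
    [3, 2, 3],
    [1, 1, 2],
    [4, 4, 4],
], dtype=float)
print(f"ICC(2,1) across the three assessors: {icc_2_1(scores):.3f}")
```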

Citations: 0
Efficient End-to-End Convolutional Architecture for Point-of-Gaze Estimation.
IF 2.7 | Q3 IMAGING SCIENCE & PHOTOGRAPHIC TECHNOLOGY | Pub Date: 2024-09-23 | DOI: 10.3390/jimaging10090237
Casian Miron, George Ciubotariu, Alexandru Păsărică, Radu Timofte

Point-of-gaze estimation is part of a larger set of tasks aimed at improving user experience, providing business insights, or facilitating interactions with different devices. There has been a growing interest in this task, particularly due to the need for upgrades in e-meeting platforms during the pandemic when on-site activities were no longer possible for educational institutions, corporations, and other organizations. Current research advancements are focusing on more complex methodologies for data collection and task implementation, creating a gap that we intend to address with our contributions. Thus, we introduce a methodology for data acquisition that shows promise due to its nonrestrictive and straightforward nature, notably increasing the yield of collected data without compromising diversity or quality. Additionally, we present a novel and efficient convolutional neural network specifically tailored for calibration-free point-of-gaze estimation that outperforms current state-of-the-art methods on the MPIIFaceGaze dataset by a substantial margin, and sets a strong baseline on our own data.
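
The abstract describes an end-to-end convolutional regressor for calibration-free point-of-gaze estimation but does not give the architecture, so the following is only a minimal PyTorch sketch of the general idea: strided convolutions pool a face crop into a feature vector and a linear head regresses a 2D gaze point. Layer sizes, input resolution, and the L1 training loss are assumptions for illustration, not the authors' network.

```python
import torch
import torch.nn as nn

class TinyGazeNet(nn.Module):
    """Toy end-to-end regressor: face crop in, 2D point of gaze out."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 2)   # (x, y) gaze point in normalised screen coordinates

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = TinyGazeNet()
faces = torch.randn(4, 3, 96, 96)     # a batch of face crops
targets = torch.rand(4, 2)            # ground-truth gaze points in [0, 1]
loss = nn.functional.l1_loss(model(faces), targets)
loss.backward()
print(f"L1 regression loss on random data: {loss.item():.3f}")
```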

Citations: 0
Method for Augmenting Side-Scan Sonar Seafloor Sediment Image Dataset Based on BCEL1-CBAM-INGAN.
IF 2.7 | Q3 IMAGING SCIENCE & PHOTOGRAPHIC TECHNOLOGY | Pub Date: 2024-09-20 | DOI: 10.3390/jimaging10090233
Haixing Xia, Yang Cui, Shaohua Jin, Gang Bian, Wei Zhang, Chengyang Peng

In this paper, a method for augmenting samples of side-scan sonar seafloor sediment images based on CBAM-BCEL1-INGAN is proposed, aiming to address the difficulties in acquiring and labeling datasets, as well as the insufficient diversity and quantity of data samples. Firstly, a Convolutional Block Attention Module (CBAM) is integrated into the residual blocks of the INGAN generator to enhance the learning of specific attributes and improve the quality of the generated images. Secondly, a BCEL1 loss function (combining binary cross-entropy and L1 loss functions) is introduced into the discriminator, enabling it to focus on both global image consistency and finer distinctions for better generation results. Finally, augmented samples are input into an AlexNet classifier to verify their authenticity. Experimental results demonstrate the excellent performance of the method in generating images of coarse sand, gravel, and bedrock, as evidenced by significant improvements in the Frechet Inception Distance (FID) and Inception Score (IS). The introduction of the CBAM and BCEL1 loss function notably enhances the quality and details of the generated images. Moreover, classification experiments using the AlexNet classifier show an increase in the recognition rate from 90.5% using only INGAN-generated images of bedrock to 97.3% using images augmented using our method, marking a 6.8% improvement. Additionally, the classification accuracy of bedrock-type matrices is improved by 5.2% when images enhanced using the method presented in this paper are added to the training set, which is 2.7% higher than that of the simple method amplification. This validates the effectiveness of our method in the task of generating seafloor sediment images, partially alleviating the scarcity of side-scan sonar seafloor sediment image data.
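
The BCEL1 loss is described as a combination of binary cross-entropy and L1 terms applied at the discriminator. The PyTorch sketch below shows one way such a combined criterion can be written; the relative weighting of the two terms and the exact tensors it is applied to are assumptions, since the abstract does not specify them.

```python
import torch
import torch.nn as nn

class BCEL1Loss(nn.Module):
    """Combined criterion: binary cross-entropy on the discriminator's real/fake
    logits plus an L1 term between generated and reference patches."""
    def __init__(self, l1_weight: float = 10.0):
        super().__init__()
        self.bce = nn.BCEWithLogitsLoss()
        self.l1 = nn.L1Loss()
        self.l1_weight = l1_weight

    def forward(self, disc_logits, real_fake_labels, generated, reference):
        return self.bce(disc_logits, real_fake_labels) + self.l1_weight * self.l1(generated, reference)

criterion = BCEL1Loss(l1_weight=10.0)
logits = torch.randn(8, 1)             # discriminator outputs for 8 patches
labels = torch.ones(8, 1)              # "real" targets for this toy batch
fake = torch.rand(8, 1, 64, 64)        # generated sonar patches
real = torch.rand(8, 1, 64, 64)        # reference sonar patches
print(f"BCEL1 value: {criterion(logits, labels, fake, real).item():.3f}")
```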

Citations: 0
A Multi-Task Model for Pulmonary Nodule Segmentation and Classification.
IF 2.7 | Q3 IMAGING SCIENCE & PHOTOGRAPHIC TECHNOLOGY | Pub Date: 2024-09-20 | DOI: 10.3390/jimaging10090234
Tiequn Tang, Rongfu Zhang

In the computer-aided diagnosis of lung cancer, the automatic segmentation of pulmonary nodules and the classification of benign and malignant tumors are two fundamental tasks. However, deep learning models often overlook the potential benefits of task correlations in improving their respective performances, as they are typically designed for a single task only. Therefore, we propose a multi-task network (MT-Net) that integrates shared backbone architecture and a prediction distillation structure for the simultaneous segmentation and classification of pulmonary nodules. The model comprises a coarse segmentation subnetwork (Coarse Seg-net), a cooperative classification subnetwork (Class-net), and a cooperative segmentation subnetwork (Fine Seg-net). Coarse Seg-net and Fine Seg-net share identical structure, where Coarse Seg-net provides prior location information for the subsequent Fine Seg-net and Class-net, thereby boosting pulmonary nodule segmentation and classification performance. We quantitatively and qualitatively analyzed the performance of the model by using the public dataset LIDC-IDRI. Our results show that the model achieves a Dice similarity coefficient (DI) index of 83.2% for pulmonary nodule segmentation, as well as an accuracy (ACC) of 91.9% for benign and malignant pulmonary nodule classification, which is competitive with other state-of-the-art methods. The experimental results demonstrate that the performance of pulmonary nodule segmentation and classification can be improved by a unified model that leverages the potential correlation between tasks.
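
Segmentation quality is reported as a Dice similarity coefficient (DI). For readers unfamiliar with the metric, the short NumPy sketch below computes the Dice overlap between a predicted and a ground-truth binary nodule mask on toy data; it illustrates the metric only, not the MT-Net model itself.

```python
import numpy as np

def dice_coefficient(pred_mask: np.ndarray, gt_mask: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    return (2.0 * intersection + eps) / (pred.sum() + gt.sum() + eps)

# Toy 8x8 masks standing in for a predicted and a ground-truth nodule segmentation.
gt = np.zeros((8, 8), dtype=np.uint8)
gt[2:6, 2:6] = 1
pred = np.zeros((8, 8), dtype=np.uint8)
pred[3:7, 2:6] = 1
print(f"Dice: {dice_coefficient(pred, gt):.3f}")   # partial overlap of the two squares
```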

Citations: 0
Convolutional Neural Network-Machine Learning Model: Hybrid Model for Meningioma Tumour and Healthy Brain Classification.
IF 2.7 | Q3 IMAGING SCIENCE & PHOTOGRAPHIC TECHNOLOGY | Pub Date: 2024-09-20 | DOI: 10.3390/jimaging10090235
Simona Moldovanu, Gigi Tăbăcaru, Marian Barbu

This paper presents a hybrid study of convolutional neural networks (CNNs), machine learning (ML), and transfer learning (TL) in the context of brain magnetic resonance imaging (MRI). The anatomy of the brain is very complex; inside the skull, a brain tumour can form in any part. With MRI technology, cross-sectional images are generated, and radiologists can detect the abnormalities. When the size of the tumour is very small, it is undetectable to the human visual system, necessitating alternative analysis using AI tools. As is widely known, CNNs explore the structure of an image and provide features on the SoftMax fully connected (SFC) layer, and the classification of the items that belong to the input classes is established. Two comparison studies for the classification of meningioma tumours and healthy brains are presented in this paper: (i) classifying MRI images using an original CNN and two pre-trained CNNs, DenseNet169 and EfficientNetV2B0; (ii) determining which CNN and ML combination yields the most accurate classification when SoftMax is replaced with three ML models; in this context, Random Forest (RF), K-Nearest Neighbors (KNN), and Support Vector Machine (SVM) were proposed. In a binary classification of tumours and healthy brains, the EfficientNetB0-SVM combination shows an accuracy of 99.5% on the test dataset. A generalisation of the results was performed, and overfitting was prevented by using the bagging ensemble method.
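
The second comparison replaces the SoftMax classification layer with classical ML models trained on CNN features. The scikit-learn sketch below illustrates that pattern with an RBF-kernel SVM fitted on stand-in feature vectors (synthetic here, in place of embeddings exported from a pre-trained network such as EfficientNet); the feature dimensionality, class sizes, and hyperparameters are assumptions, not the study's settings.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-ins for feature vectors taken from the last pooling layer of a
# pre-trained CNN (e.g. a 1280-dimensional EfficientNet embedding per MRI slice).
rng = np.random.default_rng(0)
n_per_class, n_features = 200, 1280
tumour = rng.normal(loc=0.5, size=(n_per_class, n_features))
healthy = rng.normal(loc=-0.5, size=(n_per_class, n_features))
X = np.vstack([tumour, healthy])
y = np.array([1] * n_per_class + [0] * n_per_class)   # 1 = meningioma, 0 = healthy

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))   # SVM replaces the SoftMax head
clf.fit(X_train, y_train)
print(f"Test accuracy: {clf.score(X_test, y_test):.3f}")
```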

Citations: 0
Historical Blurry Video-Based Face Recognition.
IF 2.7 | Q3 IMAGING SCIENCE & PHOTOGRAPHIC TECHNOLOGY | Pub Date: 2024-09-20 | DOI: 10.3390/jimaging10090236
Lujun Zhai, Suxia Cui, Yonghui Wang, Song Wang, Jun Zhou, Greg Wilsbacher

Face recognition is a widely used computer vision technology that plays an increasingly important role in user authentication systems, security systems, and consumer electronics. The models for most current applications are based on high-definition digital cameras. In this paper, we focus on digital images derived from historical motion picture films. Historical motion picture films often have poorer resolution than modern digital imagery, making face detection a more challenging task. To approach this problem, we first propose a trunk-branch concatenated multi-task cascaded convolutional neural network (TB-MTCNN), which efficiently extracts facial features from blurry historical films by combining the trunk with branch networks and employing various sizes of kernels to enrich the multi-scale receptive field. Next, we build a deep neural network-integrated object-tracking algorithm to compensate for failed recognition over one or more video frames. The framework combines simple online and real-time tracking with deep data association (Deep SORT), and TB-MTCNN with the residual neural network (ResNet) model. Finally, a state-of-the-art image restoration method is employed to reduce the effect of noise and blurriness. The experimental results show that our proposed joint face recognition and tracking network can significantly reduce missed recognition in historical motion picture film frames.
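
TB-MTCNN is described as combining a trunk with branch networks that use different kernel sizes to widen the multi-scale receptive field. The PyTorch snippet below is only a toy sketch of that trunk-branch idea (a shared trunk, parallel 3x3/5x5/7x7 branches, channel concatenation, and a 1x1 fusion); it is not the authors' architecture, and all channel counts are made up.

```python
import torch
import torch.nn as nn

class TrunkBranchBlock(nn.Module):
    """Shared trunk followed by parallel branches with different kernel sizes;
    branch outputs are concatenated to widen the multi-scale receptive field."""
    def __init__(self, in_ch: int = 3, ch: int = 16):
        super().__init__()
        self.trunk = nn.Sequential(nn.Conv2d(in_ch, ch, 3, padding=1), nn.ReLU(inplace=True))
        self.branches = nn.ModuleList(
            [nn.Conv2d(ch, ch, k, padding=k // 2) for k in (3, 5, 7)]
        )
        self.fuse = nn.Conv2d(ch * 3, ch, 1)   # 1x1 fusion of the concatenated branches

    def forward(self, x):
        t = self.trunk(x)
        multi_scale = torch.cat([b(t) for b in self.branches], dim=1)
        return self.fuse(multi_scale)

block = TrunkBranchBlock()
frames = torch.randn(2, 3, 128, 128)   # toy stand-ins for blurry film frames
print(block(frames).shape)             # torch.Size([2, 16, 128, 128])
```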

Citations: 0
Enhancing Deep Learning Model Explainability in Brain Tumor Datasets Using Post-Heuristic Approaches.
IF 2.7 | Q3 IMAGING SCIENCE & PHOTOGRAPHIC TECHNOLOGY | Pub Date: 2024-09-18 | DOI: 10.3390/jimaging10090232
Konstantinos Pasvantis, Eftychios Protopapadakis

The application of deep learning models in medical diagnosis has showcased considerable efficacy in recent years. Nevertheless, a notable limitation involves the inherent lack of explainability during decision-making processes. This study addresses such a constraint by enhancing the interpretability robustness. The primary focus is directed towards refining the explanations generated by the LIME Library and LIME image explainer. This is achieved through post-processing mechanisms based on scenario-specific rules. Multiple experiments have been conducted using publicly accessible datasets related to brain tumor detection. Our proposed post-heuristic approach demonstrates significant advancements, yielding more robust and concrete results in the context of medical diagnosis.
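
The method refines LIME image explanations with rule-based post-processing. The sketch below shows the general workflow using the lime and SciPy packages: explain a prediction from a stand-in classifier, extract the positive-evidence mask, then apply one illustrative scenario rule (dropping connected blobs below a minimum area). The classifier, the rule, and its threshold are assumptions for illustration, not the rules used in the paper.

```python
import numpy as np
from lime import lime_image
from scipy import ndimage

def classifier_fn(batch: np.ndarray) -> np.ndarray:
    """Stand-in for a trained tumour/healthy classifier; returns [p(healthy), p(tumour)].
    The 'probability' just tracks mean intensity so the example runs end to end."""
    score = batch.mean(axis=(1, 2, 3))
    p_tumour = 1.0 / (1.0 + np.exp(-(score - 0.5) * 10.0))
    return np.stack([1.0 - p_tumour, p_tumour], axis=1)

image = np.random.rand(96, 96, 3)      # toy stand-in for an MRI slice in RGB layout
explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(image, classifier_fn, top_labels=1, num_samples=200)
_, mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True, num_features=5, hide_rest=False)

# One illustrative post-heuristic rule: keep only explanation blobs whose area
# exceeds a threshold, discarding scattered, low-coherence evidence.
labeled, n_blobs = ndimage.label(mask)
min_area = 50
keep = [i for i in range(1, n_blobs + 1) if (labeled == i).sum() >= min_area]
refined = np.isin(labeled, keep)
print(f"kept {int(refined.sum())} of {int(mask.sum())} highlighted pixels")
```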

Citations: 0
Three-Dimensional Reconstruction of Indoor Scenes Based on Implicit Neural Representation.
IF 2.7 | Q3 IMAGING SCIENCE & PHOTOGRAPHIC TECHNOLOGY | Pub Date: 2024-09-16 | DOI: 10.3390/jimaging10090231
Zhaoji Lin, Yutao Huang, Li Yao

Reconstructing 3D indoor scenes from 2D images has always been an important task in computer vision and graphics applications. For indoor scenes, traditional 3D reconstruction methods have problems such as missing surface details, poor reconstruction of large plane textures and uneven illumination areas, and many wrongly reconstructed floating debris noises in the reconstructed models. This paper proposes a 3D reconstruction method for indoor scenes that combines neural radiance fields (NeRFs) and signed distance function (SDF) implicit expressions. The volume density of the NeRF is used to provide geometric information for the SDF field, and the learning of geometric shapes and surfaces is strengthened by adding an adaptive normal prior optimization learning process. It not only preserves the high-quality geometric information of the NeRF, but also uses the SDF to generate an explicit mesh with a smooth surface, significantly improving the reconstruction quality of large plane textures and uneven illumination areas in indoor scenes. At the same time, a new regularization term is designed to constrain the weight distribution, making it an ideal unimodal compact distribution, thereby alleviating the problem of uneven density distribution and achieving the effect of floating debris removal in the final model. Experiments show that the 3D reconstruction results of this paper on ScanNet, Hypersim, and Replica datasets outperform the state-of-the-art methods.
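
A key ingredient in coupling an SDF field to NeRF-style volume rendering is a mapping from signed distance to volume density. The PyTorch sketch below uses the common Laplace-CDF transform (as popularised by VolSDF-type methods) purely to illustrate how SDF values can drive the density used in rendering; it is not claimed to be the exact coupling or regularisation used in this paper, and the beta/alpha values are arbitrary.

```python
import torch

def sdf_to_density(sdf: torch.Tensor, beta: float = 0.1, alpha: float = 10.0) -> torch.Tensor:
    """Laplace-CDF mapping from signed distance to volume density:
    density approaches alpha inside the surface (sdf < 0) and decays to zero outside."""
    return alpha * torch.where(
        sdf <= 0,
        1.0 - 0.5 * torch.exp(sdf / beta),
        0.5 * torch.exp(-sdf / beta),
    )

# Densities along a camera ray that hits a planar wall 2.0 units away.
t = torch.linspace(0.0, 4.0, steps=9)   # sample depths along the ray
sdf_along_ray = 2.0 - t                 # positive in front of the wall, negative behind it
print(sdf_to_density(sdf_along_ray, beta=0.1))
```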

Citations: 0
The Role of Cardiovascular Imaging in the Diagnosis of Athlete's Heart: Navigating the Shades of Grey.
IF 2.7 | Q3 IMAGING SCIENCE & PHOTOGRAPHIC TECHNOLOGY | Pub Date: 2024-09-14 | DOI: 10.3390/jimaging10090230
Nima Baba Ali, Sogol Attaripour Esfahani, Isabel G Scalia, Juan M Farina, Milagros Pereyra, Timothy Barry, Steven J Lester, Said Alsidawi, David E Steidley, Chadi Ayoub, Stefano Palermi, Reza Arsanjani

Athlete's heart (AH) represents the heart's remarkable ability to adapt structurally and functionally to prolonged and intensive athletic training. Characterized by increased left ventricular (LV) wall thickness, enlarged cardiac chambers, and augmented cardiac mass, AH typically maintains or enhances systolic and diastolic functions. Despite the positive health implications, these adaptations can obscure the difference between benign physiological changes and early manifestations of cardiac pathologies such as dilated cardiomyopathy (DCM), hypertrophic cardiomyopathy (HCM), and arrhythmogenic cardiomyopathy (ACM). This article reviews the imaging characteristics of AH across various modalities, emphasizing echocardiography, cardiac magnetic resonance (CMR), and cardiac computed tomography as primary tools for evaluating cardiac function and distinguishing physiological adaptations from pathological conditions. The findings highlight the need for precise diagnostic criteria and advanced imaging techniques to ensure accurate differentiation, preventing misdiagnosis and its associated risks, such as sudden cardiac death (SCD). Understanding these adaptations and employing the appropriate imaging methods are crucial for athletes' effective management and health optimization.

Citations: 0
Leveraging Perspective Transformation for Enhanced Pothole Detection in Autonomous Vehicles.
IF 2.7 | Q3 IMAGING SCIENCE & PHOTOGRAPHIC TECHNOLOGY | Pub Date: 2024-09-14 | DOI: 10.3390/jimaging10090227
Abdalmalek Abu-Raddaha, Zaid A El-Shair, Samir Rawashdeh

Road conditions, often degraded by insufficient maintenance or adverse weather, significantly contribute to accidents, exacerbated by the limited human reaction time to sudden hazards like potholes. Early detection of distant potholes is crucial for timely corrective actions, such as reducing speed or avoiding obstacles, to mitigate vehicle damage and accidents. This paper introduces a novel approach that utilizes perspective transformation to enhance pothole detection at different distances, focusing particularly on distant potholes. Perspective transformation improves the visibility and clarity of potholes by virtually bringing them closer and enlarging their features, which is particularly beneficial given the fixed-size input requirement of object detection networks, typically significantly smaller than the raw image resolutions captured by cameras. Our method automatically identifies the region of interest (ROI)-the road area-and calculates the corner points to generate a perspective transformation matrix. This matrix is applied to all images and corresponding bounding box labels, enhancing the representation of potholes in the dataset. This approach significantly boosts detection performance when used with YOLOv5-small, achieving a 43% improvement in the average precision (AP) metric at intersection-over-union thresholds of 0.5 to 0.95 for single class evaluation, and notable improvements of 34%, 63%, and 194% for near, medium, and far potholes, respectively, after categorizing them based on their distance. To the best of our knowledge, this work is the first to employ perspective transformation specifically for enhancing the detection of distant potholes.
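
The core operation, warping the road ROI so that distant potholes occupy more pixels and re-projecting the bounding boxes with the same matrix, can be illustrated with OpenCV as below. The corner points, output size, and the example bounding box are made-up values; the paper derives the ROI corners automatically rather than hard-coding them.

```python
import cv2
import numpy as np

# Hypothetical road-region corners (pixels) in a 1280x720 frame: a trapezoid on the
# image plane mapped to a square view in which distant potholes appear larger.
src = np.float32([[550, 450], [730, 450], [1180, 700], [100, 700]])   # TL, TR, BR, BL
dst = np.float32([[0, 0], [640, 0], [640, 640], [0, 640]])

M = cv2.getPerspectiveTransform(src, dst)

frame = np.zeros((720, 1280, 3), dtype=np.uint8)      # placeholder for a dash-cam frame
warped = cv2.warpPerspective(frame, M, (640, 640))    # network input after the transform

# Bounding-box labels must be re-projected with the same matrix to stay aligned.
pothole_box = np.float32([[[600, 470], [640, 470], [640, 500], [600, 500]]])
warped_box = cv2.perspectiveTransform(pothole_box, M)
print(warped_box.round(1))
```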

Citations: 0