
Latest publications from the International Journal of Image and Graphics

Novel Enrichment of Brightness-Distorted Chest X-Ray Images Using Fusion-Based Contrast-Limited Adaptive Fuzzy Gamma Algorithm
IF 1.6 Q3 Computer Science Pub Date : 2023-07-21 DOI: 10.1142/s021946782450058x
K. Kiruthika, Rashmita Khilar
As innovations in image handling, image enrichment (IE) can provide more effective information, while image compression can reduce memory space. IE plays a vital role in the medical field, where noiseless images are required, and it applies to all areas of image understanding and analysis. This paper presents an innovative algorithm, contrast-limited adaptive fuzzy gamma (CLAFG), for IE of chest X-ray (CXR) images. Image contrast is enriched by computing several histograms and membership planes. The proposed algorithm comprises several steps. First, the CXR is divided into contextual regions (CRs). Second, the clip limit, a threshold that alters the contrast of the CXR, is applied to the histogram generated from each CR, and a fuzzification technique is applied to the CXR via the membership plane. Third, the clipped histograms are processed in two ways: they are merged using bi-cubic interpolation and modified with a membership function. Finally, the outputs of the bi-cubic interpolation and the membership function are fused to produce a richer CXR image.
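The clip-limit and fuzzy-gamma steps described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: the tile size, clip limit, and gamma value are illustrative assumptions, and the membership plane is modeled as a simple min-max normalization.

```python
import numpy as np

def clip_histogram(hist, clip_limit):
    """Clip a histogram at clip_limit and redistribute the excess uniformly
    across all bins (the clip-limit step of CLAHE-style enhancement)."""
    excess = np.maximum(hist - clip_limit, 0).sum()
    clipped = np.minimum(hist, clip_limit)
    return clipped + excess / hist.size

def fuzzy_gamma(tile, gamma=0.8):
    """Map intensities to a fuzzy membership plane in [0, 1], then apply
    gamma correction -- a stand-in for the paper's fuzzification step."""
    mu = (tile - tile.min()) / max(tile.max() - tile.min(), 1e-8)
    return (mu ** gamma * 255).astype(np.uint8)

# One 8x8 "contextual region" with a ramp of gray levels (illustrative data).
tile = np.tile(np.arange(0, 256, 32, dtype=np.uint8), (8, 1))
hist, _ = np.histogram(tile, bins=256, range=(0, 256))
eq = clip_histogram(hist.astype(float), clip_limit=4.0)
enhanced = fuzzy_gamma(tile)
```

Note that redistributing the clipped excess keeps the total histogram mass unchanged, which is what lets the subsequent equalization stay contrast-limited.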
Citations: 0
Robust Convolutional Neural Network based on UNet for Iris Segmentation
IF 1.6 Q3 Computer Science Pub Date : 2023-07-21 DOI: 10.1142/s0219467824500426
A. Khaki
Nowadays, the iris recognition system is one of the most widely used and most accurate biometric systems. Iris segmentation is the most crucial stage of an iris recognition system, and accurate segmentation can improve recognition efficiency. The main objective of iris segmentation is to obtain the iris area. Recently, iris segmentation methods based on convolutional neural networks (CNNs) have grown in number and have greatly improved accuracy. Nevertheless, their accuracy is degraded by low-quality images captured in uncontrolled conditions, so existing methods cannot segment low-quality images precisely. To overcome this challenge, this paper proposes a robust convolutional neural network (R-Net), inspired by UNet, for iris segmentation. R-Net is divided into two parts: an encoder and a decoder. Several layers are added to ResNet-34 and used in the encoder path; in the decoder path, four convolutions are applied at each level. Both help to obtain suitable feature maps and increase network accuracy. The proposed network has been tested on four datasets: UBIRIS v2 (UBIRIS), CASIA iris v4.0 (CASIA) distance, CASIA interval, and IIT Delhi v1.0 (IITD). UBIRIS is a dataset of low-quality images. The error rate (NICE1) of the proposed network is 0.0055 on UBIRIS, 0.0105 on CASIA interval, 0.0043 on CASIA distance, and 0.0154 on IITD. Results show better performance of the proposed network compared to other methods.
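The encoder-decoder-with-skip-connection idea behind UNet-style networks can be sketched without a deep learning framework. The following is a one-level NumPy toy (average pooling as the encoder, nearest-neighbour upsampling as the decoder, and an averaged skip fusion standing in for concatenation plus convolution); it is not R-Net itself.

```python
import numpy as np

def down(x):
    """2x2 average pooling: the encoder's downsampling step."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def up(x):
    """Nearest-neighbour 2x upsampling: the decoder's step."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

def unet_like(x):
    """One encoder-decoder level with a skip connection: the decoder fuses
    the upsampled bottleneck with the full-resolution encoder feature."""
    skip = x                  # encoder feature kept for the skip connection
    bottleneck = down(x)      # encoder path loses spatial detail
    decoded = up(bottleneck)  # decoder path restores resolution
    return 0.5 * (decoded + skip)  # skip restores the lost detail

img = np.arange(16, dtype=float).reshape(4, 4)
out = unet_like(img)
```

The skip connection is what lets the output keep fine boundary detail that the bottleneck discards, which is why such architectures suit pixel-precise tasks like iris segmentation.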
Citations: 0
Yoga Posture Recognition by Learning Spatial-Temporal Feature with Deep Learning Techniques
IF 1.6 Q3 Computer Science Pub Date : 2023-07-21 DOI: 10.1142/s0219467824500554
J. Palanimeera, K. Ponmozhi
Despite recent promising advances in deep learning, yoga posture recognition remains difficult because of crowded backgrounds, varied settings, occlusions, viewpoint changes, and camera motion. This paper presents a method for accurately detecting various yoga poses using deep learning (DL) algorithms. Using a standard RGB camera, six yoga poses (Sukhasana, Kakasana, Naukasana, Dhanurasana, Tadasana, and Vrikshasana) were captured from ten people, five men and five women. A new DL model is presented for representing the spatio-temporal (ST) variation of skeleton-based yoga poses in videos. A variety of representation learners are used to mine video-level temporal recordings, combining spatio-temporal sampling with long-range temporal learning to produce an effective training approach. A novel feature extraction method using OpenPose is described, together with a dense bi-directional LSTM network that represents spatial-temporal links in both the forward and backward directions, increasing the efficacy and consistency of long-range action modeling. To improve temporal pattern modeling, the layers are stacked and combined with dense skip connections. To improve performance, appearance and motion modalities are fused with a fusion module, and the model is compared with other deep learning models, including LSTM, Bi-LSTM, Res-LSTM, and Res-BiLSTM. Studies on real-time yoga pose datasets show that the proposed DenseBi-LSTM model outperforms state-of-the-art techniques for yoga pose detection.
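Before any LSTM can model temporal structure, per-frame skeleton keypoints have to be turned into a spatio-temporal feature sequence. Below is a minimal sketch of one common recipe (hip-centred joint positions as the spatial part, frame-to-frame joint velocities as the temporal part); the joint count and layout are illustrative assumptions, not the paper's OpenPose configuration.

```python
import numpy as np

def spatial_temporal_features(keypoints):
    """Build a spatio-temporal descriptor from a (T, J, 2) array of per-frame
    joint coordinates: joints centred on the first (root) joint for the
    spatial part, concatenated with per-frame joint velocities."""
    centred = keypoints - keypoints[:, :1, :]                     # spatial layout
    velocity = np.diff(keypoints, axis=0, prepend=keypoints[:1])  # temporal change
    # Per joint: [cx, cy, vx, vy]; flatten joints into one vector per frame.
    return np.concatenate([centred, velocity], axis=-1).reshape(len(keypoints), -1)

T, J = 30, 17                          # 30 frames, 17 joints (assumed)
seq = np.random.default_rng(0).random((T, J, 2))
feats = spatial_temporal_features(seq)  # one (T, J*4) sequence for an LSTM
```

A bi-directional LSTM would then consume `feats` frame by frame in both temporal directions; centring on a root joint removes camera translation so the network sees posture rather than position.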
Citations: 0
Hybrid Segmentation Approach for Tumors Detection in Brain Using Machine Learning Algorithms
IF 1.6 Q3 Computer Science Pub Date : 2023-07-21 DOI: 10.1142/s0219467823400089
M. Praveena, M. Rao
Tumors are among the most dangerous conditions in humans and cause death when not noticed in the early stages. Edema is a type of brain swelling caused by toxic particles in the human brain. Brain tumors are typically identified with magnetic resonance imaging (MRI), which plays a major role in locating the affected area in a given input image. Tumors may contain cancerous or non-cancerous cells, and many experts use the MRI report as the primary confirmation of whether tumors or edemas are cancerous. Brain tumor segmentation is a significant task used to distinguish normal and tumor tissues. In this paper, a hybrid segmentation approach (HSA) is introduced to detect the precise regions of tumors and edemas in a given brain input image. HSA combines an advanced segmentation model with an edge detection technique to determine the state of the tumors or edemas, and it is applied to the Kaggle brain image dataset of MRI scans. The edge detection technique improves detection of the tumor or edema region. The performance of HSA is compared with various algorithms, such as Fully Automatic Heterogeneous Segmentation using support vector machines (FAHS-SVM) and SVM with normal segmentation. Performance is measured using mean square error (MSE), peak signal-to-noise ratio (PSNR), and accuracy; the proposed approach achieved better performance by improving accuracy.
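The MSE and PSNR metrics used for evaluation are standard and easy to state concretely. A small NumPy sketch (the 8x8 reference image and the single-pixel perturbation are illustrative):

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two images."""
    return float(np.mean((a.astype(float) - b.astype(float)) ** 2))

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means the test image is
    closer to the reference. Infinite for identical images."""
    m = mse(a, b)
    return float("inf") if m == 0 else 10 * np.log10(peak ** 2 / m)

ref = np.full((8, 8), 100, dtype=np.uint8)
test = ref.copy()
test[0, 0] = 110                 # one pixel off by 10 gray levels
print(round(psnr(ref, test), 2))  # ~46.19 dB
```

Since PSNR is a log of the inverse MSE, halving the MSE gains about 3 dB, which is why small segmentation improvements show up clearly on this scale.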
Citations: 0
An Efficient Classification of Multiclass Brain Tumor Image Using Hybrid Artificial Intelligence with Honey Bee Optimization and Probabilistic U-RSNet
IF 1.6 Q3 Computer Science Pub Date : 2023-07-21 DOI: 10.1142/s0219467824500591
Hariharan Ramamoorthy, Mohan Ramasundaram, S. Raja, Krunal Randive
Human life is considered most precious, yet average life expectancy has dropped from 75 to 50 years over the past two decades. This reduction is due to various health hazards, notably cancer. Brain tumors rank among the top ten most common causes of death. Although brain tumors are not the leading cause of death globally, 40% of other cancers (such as breast or lung cancers) metastasize to the brain and become brain tumors. Despite being the gold standard for tumor diagnosis, a biopsy has a number of drawbacks, including inferior sensitivity/specificity, risk during the procedure, and lengthy wait times for results. This work employs artificial intelligence integrated with honey bee optimization (HBO) to detect brain tumors, achieving high performance in terms of accuracy, recall, precision, F1 score, and Jaccard index compared with the deep learning algorithms of long short-term memory (LSTM) networks, convolutional neural networks, generative adversarial networks, recurrent neural networks, and deep belief networks. To enhance prediction, image segmentation is performed by the probabilistic U-RSNet. The work is evaluated on the BraTS 2020, BraTS 2021, and OASIS datasets using vital parameters such as accuracy, precision, recall, F1 score, Jaccard index, and PPV.
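Honey-bee-inspired optimizers share a core move: perturb a candidate solution toward or away from a random partner and keep the change only if it improves fitness. The sketch below shows one employed-bee phase of a generic artificial-bee-colony-style search on a toy objective; the objective, colony size, and iteration count are illustrative assumptions and not the paper's HBO configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):
    """Toy objective (sum of squares) standing in for a model-quality score."""
    return float(np.sum(x ** 2))

def employed_bee_step(colony, fitness, rng):
    """One employed-bee phase: each food source perturbs one dimension
    relative to a random partner, with greedy selection of the result."""
    n, d = colony.shape
    for i in range(n):
        k = rng.integers(n - 1)
        k += k >= i                                # partner index, k != i
        j = rng.integers(d)                        # dimension to perturb
        candidate = colony[i].copy()
        candidate[j] += rng.uniform(-1, 1) * (colony[i, j] - colony[k, j])
        if sphere(candidate) < fitness[i]:         # greedy: keep only improvements
            colony[i], fitness[i] = candidate, sphere(candidate)
    return colony, fitness

colony = rng.uniform(-5, 5, size=(10, 3))
fitness = np.array([sphere(x) for x in colony])
init_best = fitness.min()
for _ in range(200):
    colony, fitness = employed_bee_step(colony, fitness, rng)
```

Greedy selection guarantees the best fitness never worsens, which is what makes such population searches usable for tuning segmentation or classification hyperparameters.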
Citations: 0
Improvement of Bounding Box and Instance Segmentation Accuracy Using ResNet-152 FPN with Modulated Deformable ConvNets v2 Backbone-based Mask Scoring R-CNN
IF 1.6 Q3 Computer Science Pub Date : 2023-07-21 DOI: 10.1142/s0219467824500542
Suresh Shanmugasundaram, Natarajan Palaniappan
A challenging task is ensuring that a deep learning network learns its own prediction accuracy. Intersection-over-Union (IoU) between the ground truth and the instance mask determines mask quality, and there is no inherent relationship between classification score and mask quality. The goal is to investigate this problem and learn the predicted instance mask's accuracy. The proposed network regresses the MaskIoU by comparing the predicted mask with the respective instance feature. The mask scoring strategy detects the mismatch between mask score and mask quality, then adjusts the parameters accordingly. A deformable convolutional network's performance is decided by its ability to adapt to an object's geometric variations. Using increased modeling power and stronger training, a reformulated Deformable ConvNets improves the ability to focus on pertinent image regions. The introduction of a modulation technique, which broadens the scope of deformation modeling, and the comprehensive integration of deformable convolution within the network enhance the modeling power. With the feature-mimicking scheme of DCNv2, the network learns features that resemble the classification capability and object focus of region-based convolutional neural network (R-CNN) features; this scheme guides network training to efficiently exploit the enhanced modeling capability. The backbone of the proposed Mask Scoring R-CNN network is designed with a ResNet-152 FPN and the DCNv2 network, and the network is also tested with ResNet-50 and ResNet-101 backbones. Instance segmentation and object detection on the COCO benchmark and the Cityscapes dataset achieve top accuracy and improved performance using the proposed network.
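The MaskIoU quantity that the scoring head regresses is just the IoU between a predicted binary mask and its ground truth. A minimal NumPy sketch (the 4x4 masks are illustrative):

```python
import numpy as np

def mask_iou(pred, gt):
    """Intersection-over-Union between two binary masks -- the target the
    MaskIoU head of Mask Scoring R-CNN learns to regress."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    return 1.0 if union == 0 else np.logical_and(pred, gt).sum() / union

a = np.zeros((4, 4), int); a[:2, :] = 1   # predicted mask: top two rows
b = np.zeros((4, 4), int); b[:3, :] = 1   # ground truth: top three rows
print(mask_iou(a, b))                      # intersection 8 / union 12
```

Because a confident classification score can accompany a poor mask (and vice versa), scoring masks by predicted IoU rather than by class confidence is what re-ranks instances more faithfully.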
Citations: 0
Detection of Fake Colorized Images based on Deep Learning
IF 1.6 Q3 Computer Science Pub Date : 2023-07-21 DOI: 10.1142/s0219467825500020
Khalid A. Salman, Khalid Shaker, Sufyan T. Faraj Al-Janabi
Image editing technologies have advanced to the point where they can significantly enhance an image, but they can also be used maliciously. Colorization is an image editing technology that uses realistic colors to colorize grayscale photos. However, this strategy can be applied to natural color images for malicious purposes (e.g. to confuse object recognition systems that depend on object colors for recognition). Image forensics is a well-developed field that examines photos under specified conditions to establish confidence and authenticity. This work proposes a new fake colorized image detection approach based on a special Residual Network (ResNet) architecture. ResNets are a kind of Convolutional Neural Network (CNN) architecture that has been widely adopted and applied for various tasks. First, the input image is reconstructed via a special image representation that combines color information from three separate color spaces (HSV, Lab, and YCbCr); the reconstructed images are then used to train the proposed ResNet model. Experimental results demonstrate that the proposed method generalizes well and is significantly robust in revealing fake colorized images generated by various colorization methods.
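The multi-color-space representation amounts to converting the image into several spaces and stacking the channels. The sketch below shows the idea with the BT.601 full-range RGB-to-YCbCr conversion only; the HSV and Lab conversions would be concatenated the same way, and the exact channel selection is the paper's, not shown here.

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """BT.601 full-range RGB -> YCbCr conversion -- one of the three colour
    spaces stacked into the detector's input representation."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return np.stack([y, cb, cr], axis=-1)

img = np.zeros((2, 2, 3), float)
img[..., 0] = 255.0                                   # a pure-red test image
# Stack the original channels with the YCbCr channels into one input tensor.
stacked = np.concatenate([img, rgb_to_ycbcr(img)], axis=-1)
```

Stacking spaces exposes chroma statistics (Cb/Cr, hue, a/b) that colorization algorithms tend to distort even when the RGB image looks plausible.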
Citations: 0
Computer-Aided Classification of Cell Lung Cancer Via PET/CT Images Using Convolutional Neural Network
IF 1.6 Q3 Computer Science Pub Date : 2023-07-15 DOI: 10.1142/s0219467824500402
Dhekra El Hamdi, Ines Elouedi, I. Slim
Lung cancer is the leading cause of cancer-related death worldwide, so early diagnosis remains essential for access to appropriate curative treatment strategies. This paper presents a novel approach that assesses the ability of Positron Emission Tomography/Computed Tomography (PET/CT) images to classify lung cancer in combination with artificial intelligence techniques. We built a multi-output Convolutional Neural Network (CNN) as a tool to assist the staging of patients with lung cancer, adopting the TNM staging system and histologic subtype classification as references. The VGG-16 network is applied to the PET/CT images to extract the most relevant image features, which are then passed to a three-branch classifier for Nodal (N), Tumor (T), and histologic subtype classification. Experimental results demonstrate that our CNN model achieves good results in TN staging and histology classification. The proposed architecture classified tumor size with a high accuracy of 0.94 and an area under the curve (AUC) of 0.97 when tested on the Lung-PET-CT-Dx dataset, and yielded high performance for N staging with an accuracy of 0.98. Moreover, our approach achieved better accuracy than state-of-the-art methods in histologic classification.
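A multi-output network of the kind described shares one backbone feature vector across several softmax heads. The NumPy sketch below shows that branching structure with random weights; the feature dimension and per-head class counts (T, N, histology) are hypothetical placeholders, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(z):
    """Row-wise softmax with the usual max-subtraction for stability."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Shared backbone output (stand-in for VGG-16 features) for a batch of 4 scans.
features = rng.normal(size=(4, 128))

# Three linear classification heads branching off the shared features.
heads = {
    "T": rng.normal(size=(128, 4)),     # T stage (4 classes, assumed)
    "N": rng.normal(size=(128, 3)),     # N stage (3 classes, assumed)
    "hist": rng.normal(size=(128, 3)),  # histologic subtype (3 classes, assumed)
}
outputs = {name: softmax(features @ w) for name, w in heads.items()}
```

Training such a model sums one cross-entropy loss per head, so all three tasks shape the shared features jointly.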
Citations: 0
RDN-NET: A Deep Learning Framework for Asthma Prediction and Classification Using Recurrent Deep Neural Network
IF 1.6 Q3 Computer Science Pub Date : 2023-07-13 DOI: 10.1142/s0219467824500505
Md.ASIM Iqbal, K. Devarajan, S. M. Ahmed
Asthma is a critical disease that causes a large number of deaths across all age groups around the world, so its early detection and prevention can save numerous lives and benefit the medical field. Conventional machine learning methods, however, have failed to detect asthma from speech signals and have yielded low accuracy. This paper therefore presents advanced deep-learning-based asthma prediction and classification using a recurrent deep neural network (RDN-Net). First, speech signals are preprocessed with the minimum mean-square-error short-time spectral amplitude (MMSE-STSA) method, which removes noise and enhances speech properties. The improved Ripplet-II Transform (IR2T) then extracts disease-dependent and disease-specific features, and a modified gray wolf optimization (MGWO)-based bio-optimization approach selects the optimal features through its hunting process. Finally, the RDN-Net predicts whether asthma is present in the speech signal and classifies the sound as wheeze, crackle, or normal. Simulations carried out on the real-time COSWARA dataset show that the proposed method outperforms state-of-the-art approaches on all metrics.
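The feature-selection step above builds on grey wolf optimization. The sketch below shows the canonical GWO update — alpha, beta, and delta wolves guiding the rest of the pack, with an elitist simplification that keeps the three leaders in place each iteration — applied to a toy objective. The paper's modified variant (MGWO) and its speech-feature fitness function are not reproduced here.

```python
# Canonical grey wolf optimization (GWO) on a toy objective (sphere
# function). Leaders (alpha/beta/delta) guide the pack; the coefficient
# `a` decays from 2 to 0, shifting from exploration to exploitation.
import random

random.seed(42)

def gwo_minimize(fitness, dim, n_wolves=8, n_iter=50, lo=-5.0, hi=5.0):
    wolves = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_wolves)]
    for t in range(n_iter):
        wolves.sort(key=fitness)
        alpha, beta, delta = wolves[0], wolves[1], wolves[2]
        a = 2.0 - 2.0 * t / n_iter          # linearly decreasing coefficient
        for i in range(3, n_wolves):        # leaders stay put (elitist variant)
            for d in range(dim):
                pos = 0.0
                for leader in (alpha, beta, delta):
                    r1, r2 = random.random(), random.random()
                    A = 2 * a * r1 - a      # encircling coefficient
                    C = 2 * r2
                    pos += leader[d] - A * abs(C * leader[d] - wolves[i][d])
                wolves[i][d] = min(hi, max(lo, pos / 3.0))
    wolves.sort(key=fitness)
    return wolves[0]

# Toy objective: sphere function, minimum at the origin.
best = gwo_minimize(lambda x: sum(v * v for v in x), dim=4)
print([round(v, 3) for v in best])
```

A feature-selection variant would binarize the wolf positions into feature masks and score them with a classifier-based fitness; that step is omitted in this sketch.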
Citations: 0
Self-Attention-Based Convolutional GRU for Enhancement of Adversarial Speech Examples
IF 1.6 Q3 Computer Science Pub Date : 2023-07-08 DOI: 10.1142/s0219467824500530
Chaitanya Jannu, S. Vanambathina
Recent research has identified adversarial examples that challenge DNN-based ASR systems. In this paper, we propose a new model based on a convolutional GRU and a self-attention U-Net, called [Formula: see text], to improve adversarial speech signals. To represent the correlation between neighboring noisy speech frames, a two-layer GRU is added in the bottleneck of the U-Net, and an attention gate is inserted in the up-sampling units to increase adversarial stability. The GRU combines weight sharing with gating to control the flow of data across multiple feature maps, and as a result it outperforms the original 1D convolution used in [Formula: see text]. The performance of the model is evaluated with explainable speech recognition metrics and analyzed under improved adversarial training. Using adversarial audio attacks on automatic speech recognition (ASR), we observed that (i) the robustness of DNN-based ASR models can be improved using the temporal features captured by the attention-based GRU network, and (ii) adversarial training, including additive adversarial data augmentation, improves the generalization power of DNN-based ASR models. The word-error-rate (WER) metric confirms that the enhancement capability is better than the state-of-the-art [Formula: see text], owing to the ability of the GRU units to extract global information within the feature maps. In the conducted experiments, the proposed [Formula: see text] increases the Speech Transmission Index (STI), Perceptual Evaluation of Speech Quality (PESQ), and Short-Term Objective Intelligibility (STOI) scores on adversarial speech examples in speech enhancement.
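The gating the abstract credits for controlling "the flow of data across multiple feature maps" is the standard GRU cell. Below is a minimal single-cell sketch in plain Python with random stand-in weights; the paper stacks two such (learned) layers over convolutional feature maps in the U-Net bottleneck, which is not reproduced here.

```python
# One GRU cell step: the update gate z decides how much of the old state
# to keep, the reset gate r decides how much of the past feeds the
# candidate state. Weights are random stand-ins, not trained values.
import math
import random

random.seed(1)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def dot(w, v):
    return sum(wi * vi for wi, vi in zip(w, v))

class GRUCell:
    def __init__(self, n_in, n_hidden):
        rnd = lambda n: [random.uniform(-0.5, 0.5) for _ in range(n)]
        # One weight vector per hidden unit per gate, over [x, h].
        self.Wz = [rnd(n_in + n_hidden) for _ in range(n_hidden)]  # update gate
        self.Wr = [rnd(n_in + n_hidden) for _ in range(n_hidden)]  # reset gate
        self.Wh = [rnd(n_in + n_hidden) for _ in range(n_hidden)]  # candidate

    def step(self, x, h):
        xh = x + h                                            # concat [x, h]
        z = [sigmoid(dot(w, xh)) for w in self.Wz]            # how much to update
        r = [sigmoid(dot(w, xh)) for w in self.Wr]            # how much past to reuse
        xrh = x + [ri * hi for ri, hi in zip(r, h)]           # concat [x, r*h]
        h_cand = [math.tanh(dot(w, xrh)) for w in self.Wh]
        return [(1 - zi) * hi + zi * ci for zi, hi, ci in zip(z, h, h_cand)]

cell = GRUCell(n_in=4, n_hidden=3)
h = [0.0, 0.0, 0.0]
for frame in ([0.1, 0.2, 0.3, 0.4], [0.4, 0.3, 0.2, 0.1]):  # two "speech frames"
    h = cell.step(frame, h)
print([round(v, 4) for v in h])
```

Because the new state is a convex combination of the old state and a tanh candidate, the hidden activations stay bounded in (-1, 1) — the same weight-sharing-plus-gating behavior applies at every time step.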
Citations: 0