
International Journal of Image and Graphics: Latest Publications

A Comprehensive Review of GAN-Based Denoising Models for Low-Dose Computed Tomography Images
Q3 Computer Science Pub Date : 2023-10-14 DOI: 10.1142/s0219467825500305
Manbir Sandhu, Sumit Kushwaha, Tanvi Arora
Computed Tomography (CT) offers detailed visualization of intricate internal body structures. To protect patients from potential radiation-related health risks, the acquisition of CT images should adhere to the “as low as reasonably achievable” (ALARA) principle. However, the acquired low-dose CT (LDCT) images are inadvertently corrupted by artifacts and noise during acquisition, storage, and transmission, degrading the visual quality of the image and causing the loss of image features and relevant information. Recently, generative adversarial network (GAN) models based on deep learning (DL) have demonstrated ground-breaking performance in minimizing image noise while maintaining high image quality. Their ability to adapt to uncertain noise distributions, together with their representation-learning capacity, makes them highly desirable for denoising CT images. This paper comprehensively reviews the state-of-the-art GANs used for LDCT image denoising, highlights the potential of DL-based GANs for CT dose optimization, and outlines the future scope of research in the domain of LDCT image denoising.
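Peak signal-to-noise ratio (PSNR) is the standard fidelity metric used to benchmark LDCT denoising models such as the GANs surveyed above. As a hedged illustration (not code from the reviewed paper), the NumPy sketch below simulates a noisy acquisition with additive Gaussian noise, applies a simple 3x3 mean filter as a placeholder for a trained GAN generator, and measures the PSNR gain; the synthetic image, noise level, and filter are all illustrative assumptions.

```python
import numpy as np

def psnr(reference: np.ndarray, test: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio (dB) between a reference image and a test image."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

def box3(img: np.ndarray) -> np.ndarray:
    """3x3 mean filter (edge-padded) -- a stand-in for a real learned denoiser."""
    p = np.pad(img, 1, mode="edge")
    h, w = img.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0.0, 255.0, 64), (64, 1))            # smooth synthetic "image"
noisy = np.clip(clean + rng.normal(0.0, 25.0, clean.shape), 0, 255)
denoised = box3(noisy)                                           # PSNR should improve
```

A trained GAN generator would replace `box3` here; the evaluation loop around it stays the same.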
Citations: 0
Content-Based Image Retrieval (CBIR): Using Combined Color and Texture Features (TriCLR and HistLBP)
Q3 Computer Science Pub Date : 2023-09-26 DOI: 10.1142/s0219467825500214
P. John Bosco, S. Janakiraman
Content-Based Image Retrieval (CBIR) is a broad research field in the current digital world. This paper focuses on content-based image retrieval based on visual properties consisting of high-level semantic information. The disparity between low-level features and high-level semantics is known as the semantic gap, which is the biggest problem in CBIR. Visual characteristics are extracted from low-level features such as color, texture, and shape, and these low-level features raise CBIR’s performance level. The paper mainly focuses on an image retrieval system using combined color features (TriCLR: RGB, YCbCr, and [Formula: see text]) with a histogram of LBP texture features (HistLBP), known as a hybrid of three colors (TriCLR) with the histogram of LBP (TriCLR and HistLBP). The study also discusses the hybrid method in light of low-level features. Finally, the hybrid approach uses the TriCLR and HistLBP algorithm, which provides a new solution to the CBIR system that outperforms existing methods.
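As an illustrative sketch of the kind of feature vector such a hybrid builds (not the authors' exact TriCLR/HistLBP implementation; the color space, bin counts, and LBP variant here are assumptions), the following NumPy code concatenates per-channel color histograms with a histogram of basic 8-neighbor local binary patterns:

```python
import numpy as np

def lbp_histogram(gray: np.ndarray) -> np.ndarray:
    """Normalized histogram of basic 8-neighbor local binary patterns (256 bins)."""
    c = gray[1:-1, 1:-1]                              # interior pixels (centers)
    codes = np.zeros(c.shape, dtype=int)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = gray.shape
    for bit, (dy, dx) in enumerate(offsets):
        neighbor = gray[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neighbor >= c).astype(int) << bit   # one bit per neighbor comparison
    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    return hist / hist.sum()

def color_histogram(img: np.ndarray, bins: int = 8) -> np.ndarray:
    """Concatenated normalized per-channel histograms (e.g. R, G, B)."""
    return np.concatenate([
        np.histogram(img[..., ch], bins=bins, range=(0, 256))[0] / img[..., ch].size
        for ch in range(img.shape[-1])
    ])

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(32, 32, 3), dtype=np.uint8)  # stand-in query image
gray = img.mean(axis=-1)
feature = np.concatenate([color_histogram(img), lbp_histogram(gray)])  # 24 + 256 dims
```

Retrieval would then rank database images by a distance (e.g. chi-square or L1) between such feature vectors.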
Citations: 0
Deep Ensemble of Classifiers for Alzheimer’s Disease Detection with Optimal Feature Set
Q3 Computer Science Pub Date : 2023-09-25 DOI: 10.1142/s0219467825500329
R. S. Rajasree, S. Brintha Rajakumari
Machine learning (ML) and deep learning (DL) techniques can considerably enhance the process of making a precise diagnosis of Alzheimer’s disease (AD). Recently, DL techniques have had considerable success in processing medical data, but they still have drawbacks, such as large data requirements and a protracted training phase. With this concern, we have developed a novel strategy with four stages. In the initial stage, the input data is subjected to data-imbalance processing, which is crucial for enhancing the accuracy of disease detection. Subsequently, entropy-based, correlation-based, and improved mutual-information-based features are extracted from the pre-processed data. However, the curse of dimensionality is a serious issue in this work, and hence we address it via an optimization strategy. In particular, the tunicate updated golden eagle optimization (TUGEO) algorithm is proposed to pick out the optimal features from the extracted features. Finally, an ensemble classifier, which integrates models like CNN, DBN, and improved RNN, is modeled to diagnose the diseases by training on the optimal features selected in the previous stage. The suggested model achieves a maximum F-measure of 97.67, which is better than extant methods like [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], and [Formula: see text], respectively. The suggested TUGEO-based AD detection is then compared with traditional models using various performance metrics, including accuracy, sensitivity, specificity, and precision.
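The core of such an ensemble classifier is combining the class-probability outputs of the base models. A minimal sketch (the probability tables stand in for outputs of trained models like the CNN, DBN, and RNN named above; the weighting scheme is an assumption, not the paper's):

```python
import numpy as np

def ensemble_predict(prob_sets, weights=None):
    """Weighted average of class-probability outputs of several base classifiers.

    prob_sets: list of (n_samples, n_classes) arrays; weights: optional per-model weights.
    Returns (predicted labels, averaged probabilities)."""
    probs = np.stack(prob_sets)                      # (n_models, n_samples, n_classes)
    if weights is None:
        weights = np.ones(len(prob_sets))
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()                # normalize so rows still sum to 1
    avg = np.tensordot(weights, probs, axes=1)       # weighted mean over the model axis
    return avg.argmax(axis=1), avg

# Hypothetical outputs of three base models on 4 samples, 2 classes (AD vs. control).
p_cnn = np.array([[0.9, 0.1], [0.4, 0.6], [0.2, 0.8], [0.7, 0.3]])
p_dbn = np.array([[0.8, 0.2], [0.3, 0.7], [0.4, 0.6], [0.6, 0.4]])
p_rnn = np.array([[0.7, 0.3], [0.5, 0.5], [0.1, 0.9], [0.8, 0.2]])
labels, avg = ensemble_predict([p_cnn, p_dbn, p_rnn])
```

Non-uniform `weights` would let a tuner (such as the TUGEO algorithm the abstract describes) favor the stronger base models.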
Citations: 0
Development of Trio Optimal Feature Extraction Model for Attention-Based Adaptive Weighted RNN-Based Lung and Colon Cancer Detection Framework Using Histopathological Images
Q3 Computer Science Pub Date : 2023-09-09 DOI: 10.1142/s0219467825500275
MD Azam Pasha, M. Narayana
Cancer, a fatal disease, results from a combination of genetic diseases and a variety of biomedical abnormalities. Colon and lung cancer are regarded as two leading causes of disability and death. The most significant component for demonstrating the best course of action is the histopathological identification of such malignancies. So, in order to minimize the mortality rate caused by cancer, there is a need for early detection of the ailment on both fronts. In this case, both deep and machine learning techniques have been utilized to speed up the detection of cancer, which may also help researchers study a large number of patients over a short period with less loss. Hence, it is highly essential to design a new lung and colon detection model based on deep learning approaches. Initially, a set of histopathological images is collected from benchmark resources for effective analysis. Then, to attain the first set of features, the collected image is offered to a dilated net for obtaining deep image features with the help of the Visual Geometry Group network (VGG16) and Residual Neural Network (ResNet). The second set of features is attained by the following process: the collected image is pre-processed with the help of Contrast-Limited Adaptive Histogram Equalization (CLAHE) and a filter technique, segmented using adaptive binary thresholding, and offered to a dilated network holding VGG16 and ResNet. The parameters of adaptive binary thresholding are tuned with the help of a developed hybrid approach called Sand Cat swarm JAya Optimization (SC-JAO), which combines Sand Cat swarm Optimization (SCO) and JAYA. Finally, the third set of features is attained by offering the image to the pre-processing phase; the pre-processed image is then segmented, with features tuned by the developed SC-JAO, and textural features like the Gray-Level Co-Occurrence Matrix (GLCM) and Local Weber Pattern (LWP) are extracted to form the third set of features. The three sets of features are then given to the optimal weighted feature phase, where the parameters are optimized by the SC-JAO algorithm, and passed on to the disease prediction phase. Here, disease prediction is made with the help of Attention-based Adaptive Weighted Recurrent Neural Networks (AAW-RNN), whose parameters are tuned by the developed SC-JAO. Thus, the developed model achieved an effective lung and colon detection rate over conventional approaches across multiple experimental analyses.
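One of the texture descriptors named above, the Gray-Level Co-Occurrence Matrix (GLCM), can be sketched in a few lines of NumPy. This is a simplified horizontal-offset version, not the authors' implementation; the quantization level and single offset are illustrative assumptions:

```python
import numpy as np

def glcm_horizontal(gray: np.ndarray, levels: int = 8, symmetric: bool = True) -> np.ndarray:
    """Normalized co-occurrence matrix of horizontally adjacent gray levels."""
    q = np.clip((gray.astype(int) * levels) // 256, 0, levels - 1)  # quantize to `levels` bins
    left, right = q[:, :-1].ravel(), q[:, 1:].ravel()
    m = np.zeros((levels, levels))
    np.add.at(m, (left, right), 1)     # count each (left level, right level) pair
    if symmetric:
        m = m + m.T                    # count pairs in both directions
    return m / m.sum()

def glcm_contrast(m: np.ndarray) -> float:
    """Contrast feature: expected squared gray-level difference of neighboring pixels."""
    i, j = np.indices(m.shape)
    return float(np.sum(m * (i - j) ** 2))

flat = np.zeros((8, 8))                                 # uniform region: zero contrast
checker = (np.indices((8, 8)).sum(axis=0) % 2) * 255    # alternating extremes: high contrast
```

In practice several offsets and angles are accumulated, and features such as contrast, energy, and homogeneity are read off each matrix.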
Citations: 0
Combined Shallow and Deep Learning Models for Malware Detection in Wsn
IF 1.6 Q3 Computer Science Pub Date : 2023-09-07 DOI: 10.1142/s0219467825500342
Madhavarapu Chandan, S. G. Santhi, T. Srinivasa Rao
Due to major operating restrictions, ensuring security is a fundamental problem for Wireless Sensor Networks (WSNs). Because of their inadequate security mechanisms, WSNs are an easy target for malware (worms, viruses, malicious code, etc.). Given the epidemic nature of worm propagation, it is critical to develop a worm defense mechanism in the network. This work aims to establish a novel malware detection scheme for WSNs that consists of several phases: “(i) preprocessing, (ii) feature extraction, and (iii) detection”. At first, the input data is subjected to the preprocessing phase. Then feature extraction takes place, in which principal component analysis (PCA), improved linear discriminant analysis (LDA), and autoencoder-based characteristics are retrieved. The retrieved characteristics are then passed to the detection phase, which employs combined shallow learning and DL. The shallow learning includes decision tree (DT), logistic regression (LR), and Naive Bayes (NB); the deep learning (DL) includes deep neural network (DNN), convolutional neural network (CNN), and recurrent neural network (RNN). Here, the DT output is given to the DNN, the LR output to the CNN, and the NB output to the RNN, respectively. Eventually, the DNN, CNN, and RNN outputs are averaged to generate a successful outcome; the combination can be thought of as an ensemble classifier. The weight of the RNN is optimally tuned through the Self Improved Shark Smell Optimization with Opposition Learning (SISSOOL) model to improve detection precision and accuracy. Lastly, the outcomes of the suggested approach are computed in terms of different measures.
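As a sketch of the first feature extractor listed, plain PCA (not the improved LDA or autoencoder variants, and run on synthetic data rather than real WSN traffic), projection onto the top principal components can be done via SVD of the centered data matrix:

```python
import numpy as np

def pca_features(X: np.ndarray, k: int):
    """Project rows of X onto the top-k principal components (via SVD of centered data)."""
    Xc = X - X.mean(axis=0)                          # center each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:k]                              # (k, n_features), orthonormal rows
    return Xc @ components.T, components             # (n_samples, k) projections

rng = np.random.default_rng(2)
# Hypothetical traffic-feature matrix: 100 sensor-network flows x 10 raw features.
X = rng.normal(size=(100, 10))
Z, comps = pca_features(X, k=3)
```

The reduced matrix `Z` (here 3 columns, ordered by explained variance) is what would be handed to the downstream shallow/deep detectors.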
Citations: 0
Speech Enhancement: A Review of Different Deep Learning Methods
IF 1.6 Q3 Computer Science Pub Date : 2023-09-05 DOI: 10.1142/s021946782550024x
Sivaramakrishna Yechuri, Sunny Dayal Vanabathina
Speech enhancement methods differ depending on the degree of degradation and noise in the speech signal, so research in the field remains difficult, especially when dealing with residual and background noise, which is highly transient. Numerous deep learning networks have been developed that provide promising results for improving the perceptual quality and intelligibility of noisy speech. The power of deep learning techniques has opened up innovation and research in speech enhancement, with implications across a wide range of real-time applications. By reviewing the important datasets, feature extraction methods, deep learning models, training algorithms, and evaluation metrics for speech enhancement, this paper provides a comprehensive overview. We begin by tracing the evolution of speech enhancement research, from early approaches to recent advances in deep learning architectures. By analyzing and comparing approaches to solving speech enhancement challenges, we categorize them according to their strengths and weaknesses. Moreover, we discuss the challenges and future directions of deep learning in speech enhancement, including the demand for parameter-efficient models. The purpose of this paper is to examine the development of the field, compare and contrast different approaches, and highlight future directions as well as challenges for further research.
Citations: 0
Time Image De-Noising Method Based on Sparse Regularization
Q3 Computer Science Pub Date : 2023-09-01 DOI: 10.1142/s0219467825500093
Xin Wang, Xiaogang Dong
The blurring of texture edges often occurs during image data transmission and acquisition. To ensure the detailed clarity of time images, we propose a time-image de-noising method based on sparse regularization. First, the image pixel sparsity index is set, and an image de-noising model is established based on sparse regularization processing to obtain the neighborhood weights of similar image blocks. Second, a time-image de-noising algorithm is designed to determine whether the coding coefficient reaches the standard value, yielding a new image de-noising method. Finally, images of electronic clocks and mechanical clocks are used as two kinds of time images to compare the different de-noising methods. The results show that, among the six compared methods, the sparse regularization method has the highest peak signal-to-noise ratio for different noise standard deviations on both time images, and its image structural similarity is consistently higher, which shows that the proposed method is better than the other five image de-noising methods.
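The workhorse of sparse-regularized de-noising is the soft-thresholding (shrinkage) operator, the proximal map of the l1 penalty. Below is a minimal illustrative sketch on a synthetic sparse coefficient vector; the paper's full model, with neighborhood weights over similar image blocks, is more involved, and the signal and threshold here are assumptions:

```python
import numpy as np

def soft_threshold(x: np.ndarray, lam: float) -> np.ndarray:
    """Proximal operator of lam * ||x||_1: shrink every value toward zero by lam."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

rng = np.random.default_rng(3)
signal = np.zeros(100)
signal[[5, 40, 77]] = [10.0, -8.0, 6.0]        # a sparse "clean" coefficient vector
noisy = signal + rng.normal(0.0, 0.5, 100)     # dense small-amplitude noise on top
estimate = soft_threshold(noisy, lam=2.0)      # entries below the threshold are zeroed
```

Applied to transform-domain coefficients (e.g. of image blocks), this suppresses low-amplitude noise while retaining the few large coefficients that carry structure.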
Citations: 0
A Hybrid Model for Classification of Skin Cancer Images After Segmentation 癌症皮肤图像分割后的混合分类模型
IF 1.6 Q3 Computer Science Pub Date : 2023-08-31 DOI: 10.1142/s0219467825500226
Rasmiranjan Mohakud, Rajashree Dash
For dermatoscopic skin lesion images, deep learning-based algorithms, particularly convolutional neural networks (CNNs), have demonstrated good classification and segmentation capabilities. The impact of utilizing lesion segmentation data on classification performance, however, is still subject to discussion. Driven in this direction, in this work we propose a hybrid deep learning-based model to classify skin cancer using segmented images. In the first stage, a fully convolutional encoder–decoder network (FCEDN) is employed to segment the skin cancer image; in the second stage, a CNN is applied to the segmented images for classification. As the model’s success depends on the hyper-parameters it uses, and fine-tuning these hyper-parameters by hand is time-consuming, in this study the hyper-parameters of the hybrid model are optimized using an exponential neighborhood gray wolf optimization (ENGWO) technique. Extensive experiments are carried out on the International Skin Imaging Collaboration (ISIC) 2016 and ISIC 2017 datasets to show the efficacy of the model. The suggested model has been evaluated on both balanced and unbalanced datasets. With the balanced dataset, the proposed hybrid model achieves training accuracy up to 99.98%, validation accuracy up to 92.13%, and testing accuracy up to 89.75%. It is evident from the findings that the proposed hybrid model outperforms previously known models on balanced data.
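The hyper-parameter search in this abstract rests on a gray wolf optimizer. As a hedged sketch of the underlying search, here is plain GWO minimizing a toy objective; the exponential-neighborhood variant (ENGWO) is not specified in the abstract, so only the standard alpha/beta/delta update is shown, and all names here are illustrative rather than the authors' implementation.

```python
import numpy as np

def gwo_minimize(f, bounds, n_wolves=10, n_iter=50, seed=0):
    """Plain grey wolf optimizer: each wolf moves toward the three best
    solutions (alpha, beta, delta) with a coefficient a decaying 2 -> 0."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    dim = lo.shape[0]
    wolves = rng.uniform(lo, hi, size=(n_wolves, dim))
    for t in range(n_iter):
        fitness = np.array([f(w) for w in wolves])
        order = np.argsort(fitness)
        alpha, beta, delta = wolves[order[:3]]    # three best wolves (copies)
        a = 2.0 * (1 - t / n_iter)                # linearly decaying coefficient
        for i in range(n_wolves):
            new = np.zeros(dim)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A, C = 2 * a * r1 - a, 2 * r2
                new += leader - A * np.abs(C * leader - wolves[i])
            wolves[i] = np.clip(new / 3.0, lo, hi)
    fitness = np.array([f(w) for w in wolves])
    return wolves[np.argmin(fitness)]
```

In the paper's setting, `f(w)` would presumably be a validation loss of the FCEDN/CNN evaluated at hyper-parameter vector `w` (learning rate, batch size, and so on); the sketch uses a simple sphere objective only to show the search mechanics.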
Citations: 0
An Efficient Brain Tumor Prediction Using Pteropus Unicinctus Optimization on Deep Neural Network 基于深度神经网络的翼龙优化脑肿瘤预测
IF 1.6 Q3 Computer Science Pub Date : 2023-08-31 DOI: 10.1142/s0219467825500238
Sumit Chhabra, Khushboo Bansal
Brain tumors are among the most serious and frightening human diseases, and in many cases they prove fatal. The patient’s life also becomes more complicated over time as a result of the tumor. Thus, it is essential to find tumors early to safeguard and extend the patient’s life, and new improvements in brain tumor detection techniques are highly essential in the medical field. To address this, this work introduces automatic brain tumor prediction using Pteropus unicinctus optimization on deep neural networks (PUO-deep NNs). Initially, the data are gathered from the BraTS MICCAI brain tumor dataset, and preprocessing and ROI extraction are performed to remove noise from the data. The extracted ROI is then forwarded to fuzzy c-means (FCM) clustering to segment the brain image; the PUO algorithm tunes the parameters of the FCM so that the image is segmented into the tumor region and the non-tumor region. Feature extraction then takes place on ResNet. Finally, the deep NN classifier successfully predicts the brain tumor by utilizing the PUO method, which improves the classifier performance and produces highly accurate results. For dataset 1, the PUO-deep NN achieved 87.69% accuracy, 93.81% sensitivity, and 99.01% specificity. The suggested PUO-deep NN also attained values of 98.49%, 98.55%, and 95.60% for dataset 2, which is significantly more effective than current approaches.
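The segmentation stage described above rests on fuzzy c-means clustering. Below is a minimal NumPy sketch of standard FCM (alternating membership and centroid updates); the PUO tuning of its parameters is omitted, and the function and variable names are illustrative, not the paper's implementation.

```python
import numpy as np

def fcm(X, c=2, m=2.0, n_iter=100, seed=0):
    """Fuzzy c-means: alternate membership (U) and centroid updates.
    X is (n_samples, n_features); returns (centers, U) where U[i, j] is
    the degree to which sample i belongs to cluster j."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)          # memberships sum to 1 per sample
    for _ in range(n_iter):
        W = U ** m                             # fuzzified memberships
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        d = np.maximum(d, 1e-12)               # avoid division by zero
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)
    return centers, U
```

For image segmentation, `X` would hold per-pixel intensity (or feature) vectors, and thresholding the memberships yields the tumor / non-tumor partition.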
Citations: 0
Abnormal Behavior Recognition for Human Motion Based on Improved Deep Reinforcement Learning 基于改进深度强化学习的人体运动异常行为识别
IF 1.6 Q3 Computer Science Pub Date : 2023-08-30 DOI: 10.1142/s0219467825500299
Xueying Duan
Abnormal behavior recognition (ABR) is an important part of social security work. To ensure social harmony and stability, it is of great significance to study methods for identifying abnormal human motion behavior. To address the low accuracy of existing human motion ABR methods, an ABR method for human motion based on improved deep reinforcement learning (DRL) is proposed. First, the background image is processed in combination with a Gaussian model; second, the background features and human motion trajectory features are extracted; finally, the improved DRL model is constructed, the feature information is input into the improved model to further extract abnormal behavior features, and the ABR of human motion is realized through interaction between the agent and the environment. The different methods were examined on the UCF101 and HiEve data sets. The results show that the accuracy of human motion key-point acquisition and posture estimation is high, the proposed method has good sensitivity, and the recognition accuracy for abnormal human motion behavior is as high as 95.5%. It can realize ABR for human motion and lay a foundation for the further development of follow-up social security management.
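The first step in this pipeline — processing the background image with a Gaussian model — is commonly realized as a per-pixel running Gaussian: a pixel is flagged as foreground when it deviates from the learned mean by more than k standard deviations. The sketch below is a minimal assumed version of that idea, not the paper's pipeline; the class name and parameters are illustrative.

```python
import numpy as np

class GaussianBackground:
    """Per-pixel running Gaussian background model: a pixel is foreground
    when it deviates from the mean by more than k standard deviations."""

    def __init__(self, first_frame, alpha=0.05, k=2.5, init_var=15.0 ** 2):
        self.mean = first_frame.astype(np.float64)
        self.var = np.full(first_frame.shape, init_var)
        self.alpha, self.k = alpha, k

    def apply(self, frame):
        frame = frame.astype(np.float64)
        diff = frame - self.mean
        foreground = diff ** 2 > (self.k ** 2) * self.var
        # update the model only where the pixel still looks like background
        bg = ~foreground
        self.mean[bg] += self.alpha * diff[bg]
        self.var[bg] += self.alpha * (diff[bg] ** 2 - self.var[bg])
        return foreground
```

Feeding video frames through `apply` yields a foreground mask from which motion trajectories can then be extracted, which is the role the Gaussian model plays ahead of the DRL stage.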
Citations: 0
International Journal of Image and Graphics