
International Journal of Image and Graphics: Latest Publications

Hybrid Segmentation Approach for Tumors Detection in Brain Using Machine Learning Algorithms
IF 1.6 · Q4 · COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2023-07-21 · DOI: 10.1142/s0219467823400089
M. Praveena, M. Rao
Tumors are among the most dangerous conditions affecting humans and can cause death when they go unnoticed in the early stages. Edema is a type of brain swelling caused by toxic particles accumulating in the human brain. In the brain, tumors are identified with magnetic resonance imaging (MRI) scanning, which plays a major role in locating the affected region in a given input image. Tumors may contain cancerous or non-cancerous cells, and many experts use the MRI report as the primary confirmation of whether tumors or edemas consist of cancer cells. Brain tumor segmentation is a significant task used to separate normal tissue from tumor tissue. In this paper, a hybrid segmentation approach (HSA) is introduced to detect the precise regions of tumors and edemas in a given brain image. HSA combines an advanced segmentation model with an edge detection technique to determine the state of a tumor or edema, and it is applied to the Kaggle brain image dataset of MRI scans; the edge detection step improves detection of the tumor or edema region. The performance of HSA is compared with algorithms such as Fully Automatic Heterogeneous Segmentation using support vector machines (FAHS-SVM) and SVM with normal segmentation, and is measured using mean square error (MSE), peak signal-to-noise ratio (PSNR), and accuracy. The proposed approach achieves better performance by improving accuracy.
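The abstract does not name the specific segmentation model or edge detector inside HSA, so the region-plus-edge fusion can only be sketched. Below is a minimal Python sketch assuming Otsu thresholding as the segmentation stage and Canny as the edge detector; the file paths are placeholders.

```python
import cv2

# Load a grayscale MRI slice (placeholder path).
img = cv2.imread("brain_mri.png", cv2.IMREAD_GRAYSCALE)

# Smooth, then segment the bright region with Otsu thresholding
# (a stand-in for the paper's unspecified "advanced segmentation model").
blur = cv2.GaussianBlur(img, (5, 5), 0)
_, region_mask = cv2.threshold(blur, 0, 255,
                               cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Edge detection sharpens the tumor/edema boundary.
edges = cv2.Canny(blur, 50, 150)

# Fuse the region mask and the edge map into one hybrid result.
hybrid = cv2.bitwise_or(region_mask, edges)
cv2.imwrite("hybrid_segmentation.png", hybrid)
```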
Citations: 0
An Efficient Classification of Multiclass Brain Tumor Image Using Hybrid Artificial Intelligence with Honey Bee Optimization and Probabilistic U-RSNet
IF 1.6 · Q4 · COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2023-07-21 · DOI: 10.1142/s0219467824500591
Hariharan Ramamoorthy, Mohan Ramasundaram, S. Raja, Krunal Randive
Human life is considered the most precious, yet the average lifespan has fallen from 75 to 50 years over the past two decades. This reduction is due to various health hazards, cancer chief among them. Brain tumors rank among the top ten most common causes of death. Although brain tumors are not the leading cause of death globally, 40% of other cancers (such as breast or lung cancers) metastasize to the brain and become brain tumors. Despite being the gold standard for tumor diagnosis, a biopsy has a number of drawbacks, including inferior sensitivity/specificity, risk during the procedure, and lengthy wait times for results. This work employs artificial intelligence integrated with honey bee optimization (HBO) to detect brain tumors, achieving a high level of performance in terms of accuracy, recall, precision, F1 score, and Jaccard index when compared with the deep learning algorithms of long short-term memory (LSTM) networks, convolutional neural networks, generative adversarial networks, recurrent neural networks, and deep belief networks. To raise the level of prediction, image segmentation is performed by the probabilistic U-RSNet. The work is analyzed on the BraTS 2020, BraTS 2021, and OASIS datasets for the vital parameters of accuracy, precision, recall, F1 score, Jaccard index, and PPV.
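Of the metrics listed, the Jaccard index is the one most specific to segmentation quality; a minimal NumPy sketch of how it would be computed from binary masks:

```python
import numpy as np

def jaccard_index(pred: np.ndarray, truth: np.ndarray) -> float:
    """Intersection-over-union between two binary segmentation masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return float(intersection / union) if union else 1.0

# Toy example: two 4x4 masks that overlap on three of four pixels.
a = np.zeros((4, 4)); a[1:3, 1:3] = 1
b = np.zeros((4, 4)); b[1:3, 1:2] = 1; b[1, 2] = 1
print(jaccard_index(a, b))  # 0.75
```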
Citations: 0
Improvement of Bounding Box and Instance Segmentation Accuracy Using ResNet-152 FPN with Modulated Deformable ConvNets v2 Backbone-based Mask Scoring R-CNN
IF 1.6 · Q4 · COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2023-07-21 · DOI: 10.1142/s0219467824500542
Suresh Shanmugasundaram, Natarajan Palaniappan
A challenging task is to ensure that a deep learning network learns to assess its own prediction accuracy. The Intersection-over-Union (IoU) between the ground truth and an instance mask determines mask quality, yet there is no inherent relationship between the classification score and that quality. The mission here is to investigate this problem and learn the accuracy of each predicted instance mask. The proposed network regresses the MaskIoU by comparing the predicted mask with the corresponding instance feature. The mask scoring strategy detects the mismatch between mask score and mask quality, then adjusts the parameters accordingly. The ability to adapt to an object's geometric variations decides a deformable convolutional network's performance; with increased modeling power and stronger training, a reformulated Deformable ConvNets improves the ability to focus on pertinent image regions. The introduction of a modulation technique, which broadens the scope of deformation modeling, and the comprehensive integration of deformable convolution within the network enhance the modeling power. With the help of the DCNv2 feature-mimicking scheme, the network learns features that resemble the classification capability and object focus of region-based convolutional neural network (R-CNN) features, and the scheme guides training to control this enhanced modeling capability efficiently. The backbone of the proposed Mask Scoring R-CNN is designed with a ResNet-152 FPN and the DCNv2 network; the model is also tested with ResNet-50 and ResNet-101 backbones. With the proposed network, instance segmentation and object detection on the COCO benchmark and the Cityscapes dataset achieve top accuracy and improved performance.
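The quantity being regressed is the MaskIoU between a predicted instance mask and its ground truth, which rescales the classification confidence into a mask score. A minimal PyTorch sketch; the tensor shapes and toy usage are illustrative, not taken from the paper:

```python
import torch

def mask_iou(pred: torch.Tensor, gt: torch.Tensor) -> torch.Tensor:
    """Per-instance IoU between binary masks of shape (N, H, W)."""
    p = pred.flatten(1).bool()
    g = gt.flatten(1).bool()
    inter = (p & g).sum(dim=1).float()
    union = (p | g).sum(dim=1).float()
    return inter / union.clamp(min=1.0)

# Mask scoring idea: classification confidence rescaled by the MaskIoU
# (the paper predicts the IoU with a head; the true IoU stands in here).
cls_conf = torch.tensor([0.95, 0.80])
iou = mask_iou(torch.randint(0, 2, (2, 28, 28)),
               torch.randint(0, 2, (2, 28, 28)))
mask_score = cls_conf * iou
```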
Citations: 0
Detection of Fake Colorized Images based on Deep Learning
IF 1.6 · Q4 · COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2023-07-21 · DOI: 10.1142/s0219467825500020
Khalid A. Salman, Khalid Shaker, Sufyan T. Faraj Al-Janabi
Image editing technologies have advanced to the point where they can significantly enhance an image, but they can also be used maliciously. Colorization is a new image editing technique that applies realistic colors to grayscale photos. However, the same strategy can be applied to natural color images for malicious purposes (e.g. to confuse object recognition systems that depend on the colors of objects for recognition). Image forensics is a well-developed field that examines photos under specified conditions to establish confidence and authenticity. This work proposes a new fake colorized image detection approach based on a special Residual Network (ResNet) architecture; ResNets are a kind of Convolutional Neural Network (CNN) architecture that has been widely adopted and applied to various tasks. First, the input image is reconstructed via a special image representation that combines color information from three separate color spaces (HSV, Lab, and YCbCr); the reconstructed images are then used to train the proposed ResNet model. Experimental results demonstrate that the proposed method generalizes well and is significantly robust in revealing fake colorized images generated by various colorization methods.
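The abstract does not spell out the reconstruction step; one plausible reading is stacking the three named color-space conversions into a nine-channel representation, sketched below with OpenCV (which names the YCbCr conversion COLOR_BGR2YCrCb). The file path is a placeholder.

```python
import cv2
import numpy as np

def multi_colorspace(bgr: np.ndarray) -> np.ndarray:
    """Stack HSV, Lab, and YCbCr channels into one H x W x 9 array."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
    ycc = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    return np.concatenate([hsv, lab, ycc], axis=-1)

# The 9-channel result would then feed the ResNet detector in place of RGB.
image = cv2.imread("photo.png")
features = multi_colorspace(image)
```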
Citations: 0
Computer-Aided Classification of Cell Lung Cancer Via PET/CT Images Using Convolutional Neural Network
IF 1.6 · Q4 · COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2023-07-15 · DOI: 10.1142/s0219467824500402
Dhekra El Hamdi, Ines Elouedi, I. Slim
Lung cancer is the leading cause of cancer-related death worldwide, so early diagnosis remains essential for access to appropriate curative treatment strategies. This paper presents a novel approach that assesses the ability of Positron Emission Tomography/Computed Tomography (PET/CT) images, in association with artificial intelligence techniques, to classify lung cancer. In this work, we build a multi-output Convolutional Neural Network (CNN) as a tool to assist the staging of patients with lung cancer, adopting the TNM staging system and histologic subtype classification as references. The VGG-16 network is applied to the PET/CT images to extract the most relevant features, which are then passed to a three-branch classifier that predicts Nodal (N) status, Tumor (T) status, and histologic subtype. Experimental results demonstrate that the CNN model achieves good results in TN staging and histology classification. Tested on the Lung-PET-CT-Dx dataset, the proposed architecture classifies tumor size with a high accuracy of 0.94 and an area under the curve (AUC) of 0.97, and yields high performance for N staging with an accuracy of 0.98. The approach also achieves better accuracy than state-of-the-art methods in histologic classification.
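A shared trunk with parallel heads is the standard layout for such a multi-output network. A minimal PyTorch sketch assuming a VGG-16 trunk; the class counts on the three heads are illustrative, since the abstract does not state them:

```python
import torch
import torch.nn as nn
from torchvision.models import vgg16

class LungStager(nn.Module):
    """Shared VGG-16 features with T, N, and histology heads (sketch)."""
    def __init__(self, t_classes=4, n_classes=3, h_classes=3):
        super().__init__()
        self.features = vgg16(weights=None).features  # VGG-16 conv trunk
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.t_head = nn.Linear(512, t_classes)
        self.n_head = nn.Linear(512, n_classes)
        self.h_head = nn.Linear(512, h_classes)

    def forward(self, x):
        z = self.pool(self.features(x)).flatten(1)
        return self.t_head(z), self.n_head(z), self.h_head(z)

t, n, h = LungStager()(torch.randn(1, 3, 224, 224))
```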
Citations: 0
RDN-NET: A Deep Learning Framework for Asthma Prediction and Classification Using Recurrent Deep Neural Network
IF 1.6 · Q4 · COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2023-07-13 · DOI: 10.1142/s0219467824500505
Md.ASIM Iqbal, K. Devarajan, S. M. Ahmed
Asthma is a critical disease that causes a large number of deaths across all age groups around the world, so early detection and prevention of asthma can save numerous lives and benefit the medical field. Conventional machine learning methods, however, have failed to detect asthma from speech signals and have yielded low accuracy. This paper therefore presents advanced deep learning-based asthma prediction and classification using a recurrent deep neural network (RDN-Net). Initially, speech signals are preprocessed using the minimum mean-square-error short-time spectral amplitude (MMSE-STSA) method, which removes noise and enhances speech properties. The improved Ripplet-II Transform (IR2T) is then used to extract disease-dependent and disease-specific features, and a modified gray wolf optimization (MGWO)-based bio-optimization approach selects the optimal features through its hunting process. Finally, RDN-Net predicts whether asthma is present in the speech signal and classifies the type as wheeze, crackle, or normal. Simulations on the real-time COSWARA dataset show that the proposed method outperforms state-of-the-art approaches on all metrics.
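MMSE-STSA proper uses a statistical gain function; as a rough stand-in for the preprocessing step, the sketch below gates the short-time spectral amplitude against a noise floor estimated from the first frames. This is plain spectral subtraction, not the paper's estimator, and the frame counts are assumptions.

```python
import numpy as np
from scipy.signal import stft, istft

def denoise_stsa(speech: np.ndarray, fs: int, noise_frames: int = 10):
    """Attenuate the short-time spectral amplitude toward a noise-floor
    estimate taken from the first `noise_frames` frames."""
    _, _, Z = stft(speech, fs=fs, nperseg=512)
    mag, phase = np.abs(Z), np.angle(Z)
    noise_floor = mag[:, :noise_frames].mean(axis=1, keepdims=True)
    gated = np.maximum(mag - noise_floor, 0.1 * mag)  # spectral floor
    _, clean = istft(gated * np.exp(1j * phase), fs=fs, nperseg=512)
    return clean
```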
Citations: 0
Self-Attention-Based Convolutional GRU for Enhancement of Adversarial Speech Examples
IF 1.6 · Q4 · COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2023-07-08 · DOI: 10.1142/s0219467824500530
Chaitanya Jannu, S. Vanambathina
Recent research has identified adversarial examples as a challenge to DNN-based ASR systems. In this paper, we propose a new model based on a convolutional GRU and a self-attention U-Net, called [Formula: see text], to improve adversarial speech signals. To represent the correlation between neighboring noisy speech frames, a two-layer GRU is added at the bottleneck of the U-Net, and an attention gate is inserted into the up-sampling units to increase adversarial stability. The goal of using the GRU is to combine the weight-sharing technique with gates that control the flow of data across multiple feature maps; as a result, it outperforms the original 1D convolution used in [Formula: see text]. The model's performance is evaluated with explainable speech recognition metrics and analyzed through improved adversarial training. Using adversarial audio attacks on automatic speech recognition (ASR), we observe that (i) the robustness of DNN-based ASR models can be improved using the temporal features captured by the attention-based GRU network, and (ii) adversarial training, including additive adversarial data augmentation, improves the generalization power of DNN-based ASR models. The word error rate (WER) metric confirms that the enhancement capability is better than the state of the art [Formula: see text]; the reason for this enhancement is the ability of GRU units to extract global information within the feature maps. In the conducted experiments, the proposed [Formula: see text] increases the Speech Transmission Index (STI), Perceptual Evaluation of Speech Quality (PESQ), and Short-Term Objective Intelligibility (STOI) scores on adversarial speech examples in speech enhancement.
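The architectural core is a two-layer GRU applied across time at the U-Net bottleneck. A minimal PyTorch sketch of such a bottleneck; the channel count and the (batch, channels, time) layout are assumptions:

```python
import torch
import torch.nn as nn

class GRUBottleneck(nn.Module):
    """Two-layer GRU over the time axis of a bottleneck feature map,
    modelling correlation between neighboring noisy speech frames."""
    def __init__(self, channels: int = 256):
        super().__init__()
        self.gru = nn.GRU(channels, channels, num_layers=2, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time); nn.GRU expects (batch, time, channels).
        y, _ = self.gru(x.transpose(1, 2))
        return y.transpose(1, 2)

out = GRUBottleneck()(torch.randn(2, 256, 100))
```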
Citations: 0
Two-Stream Spatial–Temporal Feature Extraction and Classification Model for Anomaly Event Detection Using Hybrid Deep Learning Architectures
IF 1.6 · Q4 · COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2023-07-08 · DOI: 10.1142/s0219467824500529
P. Mangai, M. Geetha, G. Kumaravelan
Identifying events in surveillance videos is a major means of reducing crime and illegal activity. Abnormal event detection in particular receives attention because it enables immediate responses. Video processing using conventional techniques identifies events but fails to categorize them, and while recent deep learning-based video processing applications perform excellently, their architectures consider either spatial or temporal features for event detection. To enhance the detection rate and classification accuracy of abnormal event detection from video keyframes, both spatial and temporal features must be considered. Earlier approaches relied on only one kind of keyframe feature to detect anomalies, and their results are inaccurate and prone to error under adverse video environments and other factors. Thus, a two-stream hybrid deep learning architecture is presented that handles spatial and temporal features in the video anomaly detection process to attain enhanced detection performance. The proposed hybrid models extract spatial features using YOLO-V4 with VGG-16 and temporal features using optical FlowNet with VGG-16; the extracted features are fused and classified using a hybrid CNN-LSTM model. Experimentation on the benchmark UCF crime dataset validates the proposed model against existing anomaly detection methods: it attains a maximum accuracy of 95.6%, indicating better performance than state-of-the-art techniques.
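The fusion stage can be sketched as concatenating per-frame spatial and temporal feature vectors and classifying the sequence with an LSTM head. The feature dimension and class count below are assumptions, and the YOLO-V4/VGG-16 and FlowNet/VGG-16 extractors are taken as given upstream:

```python
import torch
import torch.nn as nn

class TwoStreamFusion(nn.Module):
    """Late fusion of spatial and temporal streams with an LSTM classifier."""
    def __init__(self, feat_dim=512, hidden=256, classes=2):
        super().__init__()
        self.lstm = nn.LSTM(2 * feat_dim, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, classes)

    def forward(self, spatial, temporal):
        # Both streams: (batch, time, feat_dim) per-frame feature vectors.
        fused = torch.cat([spatial, temporal], dim=-1)
        out, _ = self.lstm(fused)
        return self.fc(out[:, -1])  # classify from the final time step

logits = TwoStreamFusion()(torch.randn(4, 16, 512), torch.randn(4, 16, 512))
```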
Citations: 0
Artistic Image Style Transfer Based on CycleGAN Network Model
IF 1.6 · Q4 · COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2023-07-07 · DOI: 10.1142/s0219467824500499
Yanxi Wei
With the development of computer technology, image stylization has become one of the most popular techniques in image processing. To optimize the effect of artistic image style conversion, a conversion method optimized by an attention mechanism is proposed: the CycleGAN network model is introduced, and its generator is then optimized with the attention mechanism. Finally, the application effect of the improved model is tested and analyzed. The results show that the improved model stabilizes after 40 iterations, the loss value settles at 0.3, and the PSNR value reaches up to 15. In terms of generated images, the model achieves a better visual effect than the baseline CycleGAN, and in a subjective evaluation, 63 participants expressed satisfaction with the converted artistic images. The cyclic generative adversarial network optimized by the attention mechanism thus improves the clarity of generated images, better handles blurred target boundary contours, retains detailed image information, and optimizes the stylization effect, improving image quality and the method's application value in the image processing field.
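The abstract does not say which attention mechanism is inserted into the generator; squeeze-and-excitation-style channel attention is one plausible choice. A minimal sketch under that assumption:

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation block for reweighting generator feature maps."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = self.gate(x).unsqueeze(-1).unsqueeze(-1)  # (B, C, 1, 1)
        return x * w  # emphasize informative channels before the next conv

y = ChannelAttention(64)(torch.randn(1, 64, 32, 32))
```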
Citations: 0
Detection and Classification of Objects in Video Content Analysis Using Ensemble Convolutional Neural Network Model
IF 1.6 · Q4 · COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2023-07-07 · DOI: 10.1142/s0219467825500068
Sita M. Yadav, S. Chaware
Video content analysis (VCA) is the process of analyzing the contents of a video for various applications; video classification and content analysis are two of the most difficult challenges computer vision researchers must solve. Object detection plays an important role in VCA and is used for the identification, detection, and classification of objects in images. This research uses a Chaser Prairie Wolf optimization-based deep Convolutional Neural Network classifier (CPW opt-deep CNN classifier) to identify and classify objects in videos. The deep CNN classifier correctly detects the objects in the video, and CPW optimization boosts its performance, with the decision-making behavior of the chasers enhanced by the sharing nature of the prairie wolves. The enabled optimization successfully tunes the classifier's parameters, which also helps produce better results. The ensemble model developed for object detection adds value to the research and is built by a standard hybridization of the YOLOv4 and ResNet-101 models; the research is evaluated for accuracy, sensitivity, and specificity, improving its efficacy. Compared with the preceding efficient method, the proposed CPW opt-deep CNN classifier attains 89.74%, 89.50%, and 89.19% when classifying objects in dataset 1, and 91.66%, 86.01%, and 91.52% in dataset 2.
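The reported triplets are accuracy, sensitivity, and specificity, which follow directly from binary confusion counts; the YOLOv4 + ResNet-101 ensemble can be sketched as late fusion of class probabilities (the averaging rule is an assumption, since the abstract does not state its fusion scheme):

```python
import numpy as np

def ensemble_probs(yolo_probs: np.ndarray, resnet_probs: np.ndarray):
    """Average per-class probabilities from the two detectors."""
    return (yolo_probs + resnet_probs) / 2.0

def acc_sens_spec(tp: int, fp: int, tn: int, fn: int):
    """Accuracy, sensitivity (recall), specificity from confusion counts."""
    acc = (tp + tn) / (tp + fp + tn + fn)
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return acc, sens, spec

print(acc_sens_spec(tp=90, fp=10, tn=85, fn=15))  # (0.875, 0.857, 0.894)
```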
Citations: 0