
Latest publications from the International Journal of Innovative Computing Information and Control

Diabetic Retinopathy Image Classification Using Shift Window Transformer
Q2 Computer Science Pub Date : 2023-09-13 DOI: 10.11113/ijic.v13n1-2.415
Rasha Ali Dihin, Waleed A Mahmoud Al-Jawher, Ebtesam N AlShemmary
Diabetic retinopathy is one of the most dangerous complications for diabetic patients, leading to blindness if not diagnosed early. However, early diagnosis can control the disease and prevent it from progressing to blindness. Transformers are considered state-of-the-art models in natural language processing and do not use convolutional layers. In transformers, multi-head attention mechanisms capture long-range contextual relations between pixels. For grading diabetic retinopathy, CNNs currently dominate deep learning solutions. However, the benefits of transformers have led us to propose an appropriate transformer-based method to recognize diabetic retinopathy grades. A major objective of this research is to demonstrate that a pure attention mechanism can be used to detect diabetic retinopathy and that transformers can replace standard CNNs in identifying its grades. In this study, a Swin Transformer-based technique for diagnosing diabetic retinopathy is presented: fundus images are divided into non-overlapping patches, flattened, and their positional information is preserved using a linear and positional embedding procedure. The resulting sequence is fed into several multi-headed attention layers to construct the final representation. In the classification step, the token sequence is passed into the SoftMax classification layer, which produces the recognition output. This work evaluated the Swin Transformer on the APTOS 2019 Kaggle dataset for training and testing, using fundus images of different resolutions and patch sizes. For a 160*160 image size, patch size 2, and embedding dimension C=64, the test accuracy, test loss, and test top-2 accuracy were 69.44%, 1.13, and 78.33%, respectively. With patch size 4 and embedding dimension C=96, the test accuracy was 68.85%, the test loss 1.12, and the test top-2 accuracy 79.96%. For a 224*224 image size, patch size 2, and embedding dimension C=64, the test accuracy was 72.5%, the test loss 1.07, and the test top-2 accuracy 83.7%; with patch size 4 and embedding dimension C=96, the test accuracy was 74.51%, the test loss 1.02, and the test top-2 accuracy 85.3%. The results showed that the Swin Transformer can achieve flexible memory savings. The proposed method highlights that an attention mechanism based on the Swin Transformer model is promising for the diabetic retinopathy grade recognition task.
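To make the pipeline above concrete, the following is a minimal sketch (not the authors' code): a fundus image is partitioned into non-overlapping patches, linearly embedded together with a learned positional embedding, passed through multi-head self-attention, and classified into the five DR grades. Plain global attention stands in for Swin's shifted-window attention, and the patch size of 8 and four heads are illustrative choices kept small for brevity; the paper itself reports patch sizes 2 and 4 with embedding dimensions C=64 and C=96.

```python
import torch
import torch.nn as nn

class TinyPatchAttentionClassifier(nn.Module):
    """Patch partition -> linear + positional embedding -> attention -> classifier."""
    def __init__(self, img_size=160, patch=8, C=64, heads=4, num_classes=5):
        super().__init__()
        self.patch = patch
        n_patches = (img_size // patch) ** 2
        self.embed = nn.Linear(3 * patch * patch, C)            # linear patch embedding
        self.pos = nn.Parameter(torch.zeros(1, n_patches, C))   # learned positional embedding
        self.attn = nn.MultiheadAttention(C, heads, batch_first=True)
        self.head = nn.Linear(C, num_classes)                   # SoftMax is applied in the loss

    def forward(self, x):                                       # x: (B, 3, H, W)
        b, ch, _, _ = x.shape
        p = self.patch
        # partition into non-overlapping patches and flatten each one
        x = x.unfold(2, p, p).unfold(3, p, p)                   # (B, 3, H/p, W/p, p, p)
        x = x.permute(0, 2, 3, 1, 4, 5).reshape(b, -1, ch * p * p)
        tokens = self.embed(x) + self.pos
        tokens, _ = self.attn(tokens, tokens, tokens)           # multi-head self-attention
        return self.head(tokens.mean(dim=1))                    # one logit per DR grade

logits = TinyPatchAttentionClassifier()(torch.randn(2, 3, 160, 160))
print(logits.shape)                                             # torch.Size([2, 5])
```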
Citations: 0
Fast Dust Sand Image Enhancement Based on Color Correction and New Fuzzy Intensification Operators
Q2 Computer Science Pub Date : 2023-09-13 DOI: 10.11113/ijic.v13n1-2.416
Ali Hakem Alsaeedi, Yarub Alazzawi, Suha Mohammed Hadi
Images captured in dusty environments suffer from poor visibility and quality. Enhancement of such images, for example sand dust images, plays a critical role in various atmospheric optics applications. In this work, a new model based on color correction and new fuzzy intensification operators is proposed to enhance sand dust images. The proposed model consists of three phases: correction of the color shift, removal of haze, and enhancement of contrast and brightness. The color shift is corrected using a fuzzy intensification operator to adjust the values of U and V in the YUV color space. The Adaptive Dark Channel Prior (A-DCP) is used for haze removal. Contrast stretching and image brightness improvement are based on Contrast Limited Adaptive Histogram Equalization (CLAHE). The proposed model is tested and evaluated on many real sand dust images. The experimental results show that the proposed solution outperforms current studies in effectively removing the red and yellow cast and provides high-quality enhanced dust images.
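The sketch below illustrates the three-phase structure described above under stated assumptions: the abstract does not give the exact fuzzy membership function, so the classical fuzzy intensification (INT) operator is used as a stand-in on the U and V channels, the A-DCP dehazing phase is left as a placeholder, and CLAHE is applied to the luma channel via OpenCV; the file name dusty.jpg is hypothetical.

```python
import cv2
import numpy as np

def fuzzy_intensify(channel):
    """Classical INT operator: push membership values away from 0.5 (stand-in operator)."""
    mu = channel.astype(np.float32) / 255.0
    out = np.where(mu <= 0.5, 2 * mu ** 2, 1 - 2 * (1 - mu) ** 2)
    return (out * 255).astype(np.uint8)

def enhance_dust_image(bgr):
    yuv = cv2.cvtColor(bgr, cv2.COLOR_BGR2YUV)
    y, u, v = cv2.split(yuv)
    u, v = fuzzy_intensify(u), fuzzy_intensify(v)      # phase 1: colour-shift correction
    # phase 2 (haze removal with A-DCP) would go here
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    y = clahe.apply(y)                                 # phase 3: contrast and brightness
    return cv2.cvtColor(cv2.merge([y, u, v]), cv2.COLOR_YUV2BGR)

img = cv2.imread("dusty.jpg")                          # hypothetical input file
if img is not None:
    cv2.imwrite("enhanced.jpg", enhance_dust_image(img))
```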
Citations: 1
New Proposed Mixed Transforms: CAW and FAW and Their Application in Medical Image Classification
Q2 Computer Science Pub Date : 2023-09-13 DOI: 10.11113/ijic.v13n1-2.414
Maryam I Mousa Al-Khuzaay, Waleed A. Mahmoud Al-Jawher
The transformation model plays a vital role in medical image processing. This paper proposes two new mixed transform models that are hybrid combinations of linear and nonlinear transformation techniques. The first mixed transform is computed in three steps: calculating the 2D discrete cosine transform (DCT) of the image, applying the Arnold Transform (AT) to the DCT coefficients, and applying the discrete wavelet transform (DWT) to the result; this combination is abbreviated as CAW. The second mixed transform consists of first computing the discrete Fourier transform (DFT), next applying the Arnold Transform (AT), and finally computing the discrete wavelet transform (DWT); it is abbreviated as FAW. These transforms have superior directional representations compared with other multiresolution representations such as the DWT or DCT and work as non-adaptive mixed transformations for multi-scale object analysis. Due to their relationship to the wavelet idea, they are finding increasing use in areas like image processing and scientific computing. These transforms are tested on a medical image classification task and their performance is compared with that of the traditional transforms. The CAW and FAW transforms are used in the feature extraction stage of a VGG16 deep learning (DNN) classification task on tumor MRI medical images. The numerical findings favor CAW and FAW over the wavelet transform for estimating and classifying pictures. The results showed that the CAW and FAW transforms gave a much higher classification rate than that achieved with the traditional transforms, namely the DCT, DFT and DWT. Furthermore, this combination leads to a family of directional and multi-transformation bases for image processing.
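A small sketch of the CAW pipeline as described (DCT, then Arnold scrambling, then a one-level DWT); the number of Arnold iterations and the 'haar' wavelet are illustrative choices not fixed by the abstract, and FAW would be the same pipeline with the 2D DFT in place of the DCT.

```python
import numpy as np
import pywt
from scipy.fft import dctn

def arnold(mat, iterations=1):
    """Arnold cat-map scrambling of a square matrix: (x, y) -> (x + y, x + 2y) mod n."""
    n = mat.shape[0]
    out = mat.copy()
    for _ in range(iterations):
        scrambled = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                scrambled[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = scrambled
    return out

def caw_transform(image, iterations=1, wavelet="haar"):
    coeffs = dctn(image, norm="ortho")             # step 1: 2D DCT
    scrambled = arnold(coeffs, iterations)         # step 2: Arnold transform
    return pywt.dwt2(scrambled, wavelet)           # step 3: (LL, (LH, HL, HH))

img = np.random.rand(64, 64)                       # stand-in for an MRI slice
LL, (LH, HL, HH) = caw_transform(img)
print(LL.shape)                                    # (32, 32)
```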
Citations: 0
Proposed DeepFake Detection Method Using Multiwavelet Transform
Q2 Computer Science Pub Date : 2023-09-13 DOI: 10.11113/ijic.v13n1-2.420
Saadi Mohammed Saadi, Waleed Ameen Mahmoud Al-Jawher
Videos made by artificial intelligence (A.I.) seem real, but they are not. When making DeepFake videos, face-swapping methods are frequently employed. Even though such fakes were fun at first, they represent a misuse of the technology; early videos were still somewhat recognizable to the human eye. However, as machine learning advanced, it became simpler to produce convincing fake videos, and it is now practically impossible to tell them apart from real ones. Produced with GANs (Generative Adversarial Networks) and other deep learning techniques, DeepFake videos are synthetic outputs that may mislead people into thinking something is real when it is not. This study used a multiwavelet transform to analyze the type of edge and its sharpness in order to develop a blur-inconsistency detection system. With this capability, it can assess whether or not the facial area is blurred in the video and, as a result, detect fake videos. This paper also reviews DeepFake detection techniques and discusses how they might be combined or altered to obtain more accurate results. A detection rate of more than 93.5% was obtained, which is quite successful.
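The following toy sketch captures the core blur-inconsistency idea rather than the authors' implementation: compare high-frequency (edge) energy inside the face region with that of the whole frame; a ratio far from 1 hints at the sharpness mismatch left by face swapping. An ordinary 2D DWT ('db2') stands in for the multiwavelet transform, and the face box is assumed to come from any external face detector.

```python
import numpy as np
import pywt

def highfreq_energy(gray):
    """Mean energy of the detail sub-bands of a one-level 2D wavelet decomposition."""
    _, (lh, hl, hh) = pywt.dwt2(gray.astype(np.float32), "db2")
    return float(np.mean(lh ** 2 + hl ** 2 + hh ** 2))

def blur_inconsistency(gray_frame, face_box):
    x, y, w, h = face_box                                  # box from any external face detector
    face_energy = highfreq_energy(gray_frame[y:y + h, x:x + w])
    frame_energy = highfreq_energy(gray_frame)
    return face_energy / (frame_energy + 1e-8)             # values far from 1 are suspicious

frame = np.random.randint(0, 256, (480, 640)).astype(np.uint8)   # stand-in for a video frame
print(blur_inconsistency(frame, (200, 150, 120, 120)))
```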
Citations: 0
Image Fusion Algorithm using Grey Wolf optimization with Shuffled Frog Leaping Algorithm
Q2 Computer Science Pub Date : 2023-09-13 DOI: 10.11113/ijic.v13n1-2.412
Afrah U Mosa, Waleed A Mahmoud Al-Jawher
Data fusion is a "formal framework in which are expressed the means and tools for the alliance of data originating from different sources." It aims at obtaining information of greater quality; the exact definition of 'greater quality' will depend upon the application. It is a well-known technique in digital image processing and is very important in medical image representation for clinical diagnosis. Previously, many researchers used meta-heuristic optimization techniques in image fusion, but the problem of local optima restricted their search from reaching the best results. In this paper, the Grey Wolf Optimization (GWO) algorithm combined with the Shuffled Frog Leaping Algorithm (SFLA) is proposed. This helps to locate the object of interest and allows doctors to take appropriate action. The optimization algorithm is examined with a demonstrative example in order to clarify its steps. The result of the proposed algorithm is compared with other optimization algorithms, and the proposed method's performance was always the best among them.
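For reference, this is a compact sketch of the plain Grey Wolf Optimizer loop; the SFLA hybridisation and the actual image-fusion objective are omitted, and the sphere-like placeholder objective is there only so the loop runs end to end.

```python
import numpy as np

def gwo(objective, dim=4, wolves=20, iters=100, lb=0.0, ub=1.0, seed=0):
    """Minimize `objective` with the standard Grey Wolf Optimizer update rule."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(lb, ub, (wolves, dim))
    for t in range(iters):
        fitness = np.apply_along_axis(objective, 1, pos)
        alpha, beta, delta = pos[np.argsort(fitness)[:3]]  # three best wolves lead the pack
        a = 2 - 2 * t / iters                              # linearly decreasing coefficient
        for i in range(wolves):
            new = np.zeros(dim)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A, C = 2 * a * r1 - a, 2 * r2
                D = np.abs(C * leader - pos[i])
                new += (leader - A * D) / 3.0              # average of the three pulls
            pos[i] = np.clip(new, lb, ub)
    fitness = np.apply_along_axis(objective, 1, pos)
    return pos[np.argmin(fitness)], fitness.min()

best, val = gwo(lambda w: np.sum((w - 0.5) ** 2))          # placeholder objective
print(best, val)
```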
Citations: 0
Real-Time Hand Gesture Recognition Using YOLO and (Darknet-19) Convolution Neural Networks
Q2 Computer Science Pub Date : 2023-09-13 DOI: 10.11113/ijic.v13n1-2.422
Raad Ahmed Mohamed, Karim Q Hussein
There are at least three hundred and fifty million people in the world who cannot hear or speak; they are what are called deaf and dumb. This segment of society is often partially isolated from the rest of society because of the difficulty of interacting, communicating, and understanding between this group and the rest of the hearing community. To address this problem, a number of solutions have been proposed that attempt to bridge the gap between this group and the rest of society, mainly by simplifying the understanding of sign language. The basic idea is to build a program that recognizes the hand movements of the interlocutor and converts them from images into the symbols or letters found in the dictionary of the deaf and dumb. This process mainly follows applications of artificial intelligence, where it is important to distinguish, identify, and extract the palm of the hand from the regular images received by the camera and then convert these images of hand movements into understandable symbols. In this paper, image processing and artificial intelligence methods, represented by artificial neural networks, are used after formulating the problem under research. In the first part, the image is scanned to determine the areas of the right and left palms; non-traditional methods that use artificial intelligence, such as Convolutional Neural Networks, are used to fulfill this part, and YOLO V-2 specifically was used in the current research with excellent results. The second part builds a pictorial dictionary of the letters used in teaching the deaf and dumb; after generating the image database for the dictionary, the Darknet-19 neural network was used to classify the character images extracted in the first part of the program. The results obtained from the research show that the use of neural networks, especially convolutional neural networks, is very suitable in terms of accuracy, speed, and generality in processing previously unseen input data. Many of the limitations associated with using such a program, without specifying particular shapes (general shape) and templates, hand shape, hand speed, hand color, or other physical expressions, and without using any other physical aids, were overcome through the optimal use of artificial convolutional neural networks.
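A hedged sketch of the detection stage using OpenCV's DNN module to run a Darknet/YOLOv2-style network; the .cfg/.weights file names and the test image are placeholders, and the assumption that each output row is [cx, cy, w, h, objectness, class scores...] in relative coordinates follows the usual YOLO parsing convention rather than anything stated in the abstract. Each detected hand crop would then be passed to the Darknet-19 classifier.

```python
import cv2
import numpy as np

def detect_hands(frame, cfg="yolov2-hands.cfg", weights="yolov2-hands.weights",
                 conf_thresh=0.5):
    """Run a Darknet detector and return (x, y, w, h) boxes above the confidence threshold."""
    net = cv2.dnn.readNetFromDarknet(cfg, weights)           # placeholder model files
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)
    net.setInput(blob)
    outputs = net.forward(net.getUnconnectedOutLayersNames())
    h, w = frame.shape[:2]
    boxes = []
    for out in outputs:
        for row in out:
            scores = row[5:]
            if row[4] * scores.max() > conf_thresh:           # objectness * best class score
                cx, cy, bw, bh = row[:4] * np.array([w, h, w, h])
                boxes.append((int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh)))
    return boxes

frame = cv2.imread("signer.jpg")                              # hypothetical test image
if frame is not None:
    for (x, y, bw, bh) in detect_hands(frame):
        # each crop frame[y:y+bh, x:x+bw] would then go to the Darknet-19 classifier
        cv2.rectangle(frame, (x, y), (x + bw, y + bh), (0, 255, 0), 2)
    cv2.imwrite("detected.jpg", frame)
```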
Citations: 0
A Hybrid Multiwavelet Transform with Grey Wolf Optimization Used for an Efficient Classification of Documents
Q2 Computer Science Pub Date : 2023-09-13 DOI: 10.11113/ijic.v13n1-2.418
Ahmed Hussein Salman, Waleed Ameen Mahmoud Al-Jawher
In machine learning, feature selection is crucial to increase performance and shorten the model's learning time. It seeks to discover the pertinent predictors in a high-dimensional feature space. However, a tremendous increase in the feature dimension space poses a severe obstacle to feature selection techniques. To address this difficulty, the authors suggest a hybrid feature selection method consisting of the multiwavelet transform and Grey Wolf Optimization. The proposed approach minimizes the overall drawbacks while retaining the benefits of both directions. This notable development of the wavelet transform employs both wavelet and vector scaling functions; additionally, multiwavelets have orthogonality, symmetry, compact support, and significant vanishing moments. Optimization algorithms are one of the most advanced areas of study in artificial intelligence, and Grey Wolf Optimization (GWO) here produced techniques that yielded good performance results and were more responsive to current needs.
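As an illustration of wrapper-style selection (not the authors' system), the sketch below encodes each wolf as a 0/1 mask over feature columns and scores it by the cross-validated accuracy of a k-NN classifier; the digits dataset stands in for multiwavelet features, and the wolf count, iteration count, and thresholding at 0.5 are arbitrary choices.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def fitness(mask, X, y):
    """Cross-validated accuracy of a k-NN classifier on the selected columns."""
    if mask.sum() == 0:
        return 0.0
    return cross_val_score(KNeighborsClassifier(3), X[:, mask], y, cv=3).mean()

def binary_gwo_select(X, y, wolves=8, iters=10, seed=0):
    rng = np.random.default_rng(seed)
    dim = X.shape[1]
    pos = rng.random((wolves, dim))                        # continuous positions in [0, 1]
    for t in range(iters):
        fits = np.array([fitness(p > 0.5, X, y) for p in pos])
        leaders = pos[np.argsort(-fits)[:3]]               # alpha, beta, delta wolves
        a = 2 - 2 * t / iters
        for i in range(wolves):
            pulls = []
            for leader in leaders:
                r1, r2 = rng.random(dim), rng.random(dim)
                A, C = 2 * a * r1 - a, 2 * r2
                pulls.append(leader - A * np.abs(C * leader - pos[i]))
            pos[i] = np.clip(np.mean(pulls, axis=0), 0, 1)
    fits = np.array([fitness(p > 0.5, X, y) for p in pos])
    return pos[np.argmax(fits)] > 0.5                      # best 0/1 feature mask

X, y = load_digits(return_X_y=True)                        # stand-in for multiwavelet features
mask = binary_gwo_select(X[:300], y[:300])
print("selected", int(mask.sum()), "of", X.shape[1], "features")
```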
Citations: 0
WAM 3D Discrete Chaotic Map for Secure Communication Applications
Q2 Computer Science Pub Date : 2023-09-13 DOI: 10.11113/ijic.v13n1-2.419
Ali Akram Abdul-Kareem, Waleed Ameen Mahmoud Al-Jawher
Chaotic systems have become widely adopted as an effective way to secure data communications because of their low mathematical complexity and good security. The relationship between encryption algorithms and chaotic systems has gained a lot of attention in the past few years, since it avoids data spreading and lowers transmission delay and cost. In this paper a novel 3D discrete chaotic map, named WAM, is proposed for data encryption and secure communication. For secure communication, the Pecora and Carroll (P-C) method was utilized to achieve synchronization between the master system and the slave system. The simulation results of the WAM 3D discrete chaotic map showed that the system has chaotic behavior and characteristic randomness and can pass the 0-1, Lyapunov exponent (LE), and NIST tests that are usually used to check chaotic behavior. The statistical outcome of the LE test was 0.0193, the frequency test (FT) gave 0.4237, and the run test (RT) yielded a value of 0.0607. As a result, the map enriches the theoretical basis of chaos equations and their implementation, and it is well suited for encryption algorithms and communication security applications.
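The WAM map equations are not given in the abstract, so the sketch below iterates a generalized 3D Hénon (Baier-Klein) map as a stand-in and estimates the largest Lyapunov exponent from the divergence of two nearby trajectories, which is one of the chaos checks mentioned above; the parameters a=1.76, b=0.1 and the initial state are illustrative choices.

```python
import numpy as np

def step(state, a=1.76, b=0.1):
    """One iteration of a generalized 3D Henon (Baier-Klein) map, used as a stand-in."""
    x, y, z = state
    return np.array([a - y * y - b * z, x, y])

def largest_lyapunov(x0, n=20000, d0=1e-8):
    """Estimate the largest Lyapunov exponent by renormalised trajectory divergence."""
    a_traj = np.array(x0, dtype=float)
    b_traj = a_traj + np.array([d0, 0.0, 0.0])
    total = 0.0
    for _ in range(n):
        a_traj, b_traj = step(a_traj), step(b_traj)
        d = np.linalg.norm(b_traj - a_traj)
        total += np.log(d / d0)
        b_traj = a_traj + (b_traj - a_traj) * (d0 / d)     # renormalise the separation
    return total / n

print(largest_lyapunov([0.1, 0.1, 0.1]))   # a positive value indicates chaotic behaviour
```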
Citations: 3
Digital Video Summarization: A Survey
Q2 Computer Science Pub Date : 2023-09-13 DOI: 10.11113/ijic.v13n1-2.421
Sajjad H. Hendi, Karim Q. Hussein, Hazeem B. Taher
Video summarization has arisen as a method that can help with efficient storage, rapid browsing, indexing, fast retrieval, and quick sharing of material. The amount of video data created has grown exponentially over time; huge amounts of video are produced continuously by a large number of cameras, and processing these massive amounts of video requires a lot of time, labor, and hardware storage. In this situation, a video summary is crucial. The architecture of video summarization demonstrates how a lengthy video may be broken down into shorter, story-like segments. Numerous studies have been conducted in the past and continue now; as a result, several approaches and methods, from traditional computer vision to more modern deep learning approaches, have been offered by academics. However, several issues make video summarization difficult, including computational hardware requirements, complexity, and a lack of datasets. Many researchers have recently concentrated their efforts on developing efficient methods for extracting relevant information from videos. Given that data is gathered constantly, seven days a week, this study area is crucial for the advancement of video surveillance systems, which need a lot of storage capacity and intricate data processing. To make data analysis easier, simplify information storage, and allow the video to be accessed at any time, a summary of video data is necessary for these systems. In this paper, methods for creating static or dynamic summaries from videos are presented. The authors provide many approaches for each form and discuss some of the features that are utilized to create video summaries.
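As a minimal example of the static-summary family surveyed here, the sketch below keeps a frame as a keyframe whenever its colour histogram differs enough from the last kept frame; the video path and the Bhattacharyya-distance threshold are placeholders.

```python
import cv2

def keyframes_by_histogram(path, threshold=0.4):
    """Return indices of frames whose colour histogram differs from the last keyframe."""
    cap = cv2.VideoCapture(path)
    keys, prev_hist, idx = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hist = cv2.calcHist([frame], [0, 1, 2], None, [8, 8, 8],
                            [0, 256, 0, 256, 0, 256])
        hist = cv2.normalize(hist, hist).flatten()
        if prev_hist is None or \
           cv2.compareHist(prev_hist, hist, cv2.HISTCMP_BHATTACHARYYA) > threshold:
            keys.append(idx)                     # keep this frame in the static summary
            prev_hist = hist
        idx += 1
    cap.release()
    return keys

print(keyframes_by_histogram("surveillance.mp4"))   # hypothetical input video
```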
Citations: 0
A Robust Image Encryption Scheme Based on Block Compressive Sensing and Wavelet Transform
Q2 Computer Science Pub Date : 2023-09-13 DOI: 10.11113/ijic.v13n1-2.413
Qutaiba K Abed, Waleed A Mahmoud Al-Jawher
In this paper, a modified robust image encryption scheme is developed by combining block compressive sensing (BCS) and the wavelet transform. It achieves a balanced performance of security, compression, robustness, and running efficiency. First, the plain image is divided equally and sparsely represented in the discrete wavelet transform (DWT) domain, and the coefficient vectors are confused using a coefficient random permutation strategy and encrypted into a secret image by compressive sensing. In pursuit of superior security, the hyper-chaotic Lorenz system is utilized to generate the updated secret code streams for encryption and embedding with the assistance of the counter mode. This scheme is suitable for processing medium and large images in parallel. Additionally, it exhibits superior robustness and efficiency compared with existing related schemes. Simulation results and comprehensive performance analyses are presented to demonstrate the effectiveness, secrecy, and robustness of the proposed scheme. The compressive encryption model, which uses BCS with the Walsh transform as the sensing matrix together with the WAM chaos system, the scrambling technique, and diffusion, succeeded in enhancing the security performance.
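A toy sketch of the BCS measurement step described above, under stated assumptions: each 16x16 block is sparsified with a 2D DWT, flattened, and projected by a row-subsampled Walsh-Hadamard sensing matrix; key generation, scrambling, diffusion, and reconstruction are omitted, and the block size and sampling ratio are illustrative.

```python
import numpy as np
import pywt
from scipy.linalg import hadamard

def bcs_measure(image, block=16, ratio=0.5, seed=1):
    """Measure each block as y = Phi @ x, with x the flattened DWT coefficients."""
    rng = np.random.default_rng(seed)
    n = block * block
    m = int(ratio * n)
    phi = hadamard(n)[rng.permutation(n)[:m]] / np.sqrt(n)   # Walsh-type sensing matrix
    measurements = []
    for r in range(0, image.shape[0], block):
        for c in range(0, image.shape[1], block):
            blk = image[r:r + block, c:c + block].astype(float)
            coeffs, _ = pywt.coeffs_to_array(pywt.wavedec2(blk, "haar", level=2))
            measurements.append(phi @ coeffs.flatten())       # compressive measurement
    return np.array(measurements)

img = np.random.randint(0, 256, (64, 64))        # stand-in for the plain image
y = bcs_measure(img)
print(y.shape)                                    # (16, 128): one measurement row per block
```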
Citations: 1