
International Journal of Image and Graphics: Latest Publications

Two-Stream Spatial–Temporal Feature Extraction and Classification Model for Anomaly Event Detection Using Hybrid Deep Learning Architectures
IF 1.6 Q3 Computer Science Pub Date : 2023-07-08 DOI: 10.1142/s0219467824500529
P. Mangai, M. Geetha, G. Kumaravelan
Identifying events in surveillance video is a major means of reducing crime and illegal activity, and abnormal event detection in particular receives attention because it enables an immediate response. Conventional video processing techniques can identify events but fail to categorize them. Recent deep learning-based approaches perform well, but most architectures consider either spatial or temporal features for event detection. To improve the detection rate and classification accuracy of abnormal event detection from video keyframes, both spatial and temporal features must be considered; earlier approaches use only one of the two and are therefore inaccurate and error-prone under varying video environments and other factors. A two-stream hybrid deep learning architecture is therefore presented that handles spatial and temporal features jointly in the video anomaly detection process to attain better detection performance. The proposed hybrid model extracts spatial features using YOLO-V4 with VGG-16 and temporal features using optical FlowNet with VGG-16, and the extracted features are fused and classified with a hybrid CNN-LSTM model. Experiments on the benchmark UCF-Crime dataset validate the proposed model against existing anomaly detection methods: it attains a maximum accuracy of 95.6%, indicating better performance than state-of-the-art techniques.
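The paper itself does not include code, but the two-stream idea can be illustrated with a short sketch. The block below is a minimal, hypothetical PyTorch layout (assuming torchvision >= 0.13 for the `weights=` API): a VGG-16 feature extractor stands in for the spatial stream, a small convolutional net over optical-flow stacks stands in for the FlowNet front end, and a CNN-LSTM head fuses both streams per clip. Layer sizes and the 14-class output (13 UCF-Crime anomaly classes plus normal) are illustrative assumptions, not the authors' configuration.

```python
# Minimal two-stream sketch (not the authors' code): per-frame VGG-16 features for
# the spatial stream, a small conv net over optical-flow stacks standing in for the
# FlowNet front end, and a CNN-LSTM head that fuses both streams per clip.
import torch
import torch.nn as nn
from torchvision.models import vgg16  # assumes torchvision >= 0.13

class TwoStreamAnomalyNet(nn.Module):
    def __init__(self, num_classes=14, hidden=256):
        super().__init__()
        self.spatial = vgg16(weights=None).features            # RGB key-frame stream
        self.temporal = nn.Sequential(                          # stand-in for FlowNet features
            nn.Conv2d(2, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(7),
        )
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.lstm = nn.LSTM(512 + 64, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, rgb, flow):
        # rgb: (batch, time, 3, 224, 224), flow: (batch, time, 2, 224, 224)
        feats = []
        for i in range(rgb.shape[1]):                            # per-frame feature extraction
            s = self.pool(self.spatial(rgb[:, i])).flatten(1)    # (batch, 512) spatial vector
            m = self.pool(self.temporal(flow[:, i])).flatten(1)  # (batch, 64) temporal vector
            feats.append(torch.cat([s, m], dim=1))               # fused spatial-temporal vector
        out, _ = self.lstm(torch.stack(feats, dim=1))            # temporal modelling over the clip
        return self.head(out[:, -1])                             # clip-level class scores

model = TwoStreamAnomalyNet()
scores = model(torch.randn(1, 4, 3, 224, 224), torch.randn(1, 4, 2, 224, 224))
print(scores.shape)  # torch.Size([1, 14]): 13 UCF-Crime anomaly classes plus normal
```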
Citations: 0
Artistic Image Style Transfer Based on CycleGAN Network Model
IF 1.6 Q3 Computer Science Pub Date : 2023-07-07 DOI: 10.1142/s0219467824500499
Yanxi Wei
With the development of computer technology, image stylization has become one of the most popular techniques in image processing. To improve the effect of artistic image style transfer, a method that optimizes style transfer with an attention mechanism is proposed: the CycleGAN network model is introduced, its generator is optimized with the attention mechanism, and the application effect of the improved model is then tested and analyzed. The results show that the improved model stabilizes after 40 iterations, with the loss value remaining at 0.3 and the PSNR reaching up to 15. In terms of the generated images, the model achieves a better visual effect than the original CycleGAN, and in the subjective evaluation 63 participants expressed satisfaction with the converted artistic images. The cycle-consistent generative adversarial network model optimized by the attention mechanism thus improves the clarity of the generated images, improves the handling of blurred target boundary contours, retains detailed image information, strengthens the stylization effect, and raises the image quality and application value of the method.
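The abstract does not specify which attention module is added to the generator, so the sketch below assumes a simple squeeze-and-excitation-style channel gate inside a ResNet-style CycleGAN residual block. It illustrates where attention can be inserted; it is not the authors' implementation, and the channel count and reduction ratio are arbitrary choices.

```python
# Sketch of inserting a lightweight channel-attention gate into a ResNet-style
# CycleGAN generator residual block (the exact attention module used in the paper
# is not stated, so a squeeze-and-excitation-style gate is assumed here).
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):
        w = self.gate(x).unsqueeze(-1).unsqueeze(-1)  # per-channel weights in [0, 1]
        return x * w                                   # re-weight the feature maps

class AttentionResBlock(nn.Module):
    """CycleGAN-style residual block with attention on the residual path."""
    def __init__(self, channels=256):
        super().__init__()
        self.body = nn.Sequential(
            nn.ReflectionPad2d(1), nn.Conv2d(channels, channels, 3),
            nn.InstanceNorm2d(channels), nn.ReLU(),
            nn.ReflectionPad2d(1), nn.Conv2d(channels, channels, 3),
            nn.InstanceNorm2d(channels),
        )
        self.attn = ChannelAttention(channels)

    def forward(self, x):
        return x + self.attn(self.body(x))            # attended residual connection

block = AttentionResBlock()
print(block(torch.randn(1, 256, 64, 64)).shape)        # torch.Size([1, 256, 64, 64])
```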
Citations: 0
Detection and Classification of Objects in Video Content Analysis Using Ensemble Convolutional Neural Network Model
IF 1.6 Q3 Computer Science Pub Date : 2023-07-07 DOI: 10.1142/s0219467825500068
Sita M. Yadav, S. Chaware
Video content analysis (VCA) is the process of analyzing the content of a video for various applications, and video classification and content analysis are among the most difficult challenges that computer vision researchers must solve. Object detection plays an important role in VCA, where it is used to identify, detect, and classify the objects in images. In this research, a Chaser Prairie Wolf optimization-based deep convolutional neural network classifier (CPW opt-deep CNN classifier) is used to identify and classify the objects in videos. The deep CNN classifier detects the objects in the video, and the CPW optimization, in which the decision-making behavior of the chasers is enhanced by the sharing nature of the prairie wolves, tunes the classifier's parameters and boosts its performance, helping to produce better results. The ensemble model developed for object detection, formed by the standard hybridization of the YOLOv4 and ResNet-101 models, adds further value to the work and is evaluated in terms of accuracy, sensitivity, and specificity. The proposed CPW opt-deep CNN classifier attains values of 89.74%, 89.50%, and 89.19% when classifying objects in dataset 1 and 91.66%, 86.01%, and 91.52% in dataset 2, improving on the existing methods compared against.
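As a rough illustration of the ensemble step only, the sketch below averages the softmax scores of two torchvision backbones. YOLOv4 is not available in torchvision, so a second ResNet stands in for it here, and the CPW optimization stage is not reproduced; the class count and fusion weights are assumptions.

```python
# Illustrative late-fusion ensemble: average the per-class softmax scores of two
# backbones. The paper combines YOLOv4 and ResNet-101; YOLOv4 is not in torchvision,
# so a second ResNet is used here purely to show the fusion step.
import torch
import torch.nn as nn
from torchvision.models import resnet101, resnet34

class ScoreFusionEnsemble(nn.Module):
    def __init__(self, num_classes=10, weights=(0.5, 0.5)):
        super().__init__()
        self.a = resnet101(weights=None, num_classes=num_classes)
        self.b = resnet34(weights=None, num_classes=num_classes)
        self.w = weights

    def forward(self, x):
        pa = torch.softmax(self.a(x), dim=1)     # per-class probabilities, model A
        pb = torch.softmax(self.b(x), dim=1)     # per-class probabilities, model B
        return self.w[0] * pa + self.w[1] * pb   # weighted average of the two streams

ensemble = ScoreFusionEnsemble()
probs = ensemble(torch.randn(2, 3, 224, 224))
print(probs.shape, probs.sum(dim=1))             # (2, 10); each row sums to ~1
```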
Citations: 0
Noise2Split — Single Image Denoising Via Single Channeled Patch-Based Learning
IF 1.6 Q3 Computer Science Pub Date : 2023-07-07 DOI: 10.1142/s0219467824500578
G. Ashwini, T. Ramashri, Mohammad Rasheed Ahmed
Image denoising has been a prominent topic in medical image processing since its early days, serving mainly as a pre-processing step for subsequent image processing stages in many fields. By improving the sensory quality of noisy images it speeds up diagnosis in most cases, and the effectiveness of deep neural networks for medical image denoising is well established. Most of these training methods, however, require both noisy and clean images, and clean images cannot always be procured for applications such as dynamic imaging, computed tomography, magnetic resonance imaging, and camera photography, because naturally occurring noise is intrinsic to the acquired images. Self-supervised single-image denoising methods have recently been proposed. Inspired by these methods and taking them a step further, we propose a novel and better denoising method for single images, termed "Noise2Split", that trains the learning model on each channel of the input data separately. Using Single Channeled Patch-Based (SCPB) learning, it reduces noise granularly in each channel, pixel by pixel, and is found to yield better performance. To obtain optimum results, the method further leverages BRISQUE image quality assessment. The model is demonstrated on X-ray, CT, PET, microscopy, and real-world noisy images.
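The abstract does not state the exact training objective, so the sketch below borrows a Noise2Void-style blind-spot loss to show what single-channel, patch-based self-supervised training can look like: patches from one channel are masked at random pixels and a small network is trained to predict the original noisy values there. The network size, patch size, and masking scheme are all assumptions, not the Noise2Split specification.

```python
# Minimal sketch of self-supervised, single-channel, patch-based denoising in the
# spirit described above (a Noise2Void-style blind-spot objective is assumed here).
import torch
import torch.nn as nn

def mask_random_pixels(patch, n_masked=64):
    """Replace a few random pixels with random neighbours; train to predict them."""
    masked = patch.clone()
    h, w = patch.shape[-2:]
    ys = torch.randint(0, h, (n_masked,))
    xs = torch.randint(0, w, (n_masked,))
    masked[..., ys, xs] = patch[..., torch.randint(0, h, (n_masked,)),
                                torch.randint(0, w, (n_masked,))]
    return masked, (ys, xs)

denoiser = nn.Sequential(                      # one small network per image channel
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)
opt = torch.optim.Adam(denoiser.parameters(), lr=1e-3)

noisy_channel = torch.rand(16, 1, 64, 64)      # patches from ONE channel of ONE image
for step in range(5):                          # a few illustrative training steps
    masked, (ys, xs) = mask_random_pixels(noisy_channel)
    pred = denoiser(masked)
    loss = ((pred[..., ys, xs] - noisy_channel[..., ys, xs]) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    print(f"step {step}: loss {loss.item():.4f}")
```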
Citations: 0
FCM with Spatial Constraint Multi-Kernel Distance-Based Segmentation and Optimized Deep Learning for Flood Detection
IF 1.6 Q3 Computer Science Pub Date : 2023-06-30 DOI: 10.1142/s0219467824500414
R. V. Prasad, J. Prasad, B. Chaudhari, Nihar M. Ranjan, Rajat Srivastava
Floods are deadly and catastrophic disasters that cause loss of life and damage to assets, farmland, and infrastructure. To address this, an effective flood management system is needed that can identify flooded areas immediately so that relief measures can be initiated as soon as possible. This work therefore develops an effective flood detection method, named the Anti-Corona-Shuffled Shepherd Optimization Algorithm-based Deep Quantum Neural Network (ACSSOA-based Deep QNN), for identifying flooded areas. Segmentation is performed using Fuzzy C-Means with Spatial Constraint Multi-Kernel Distance (MKFCM_S), in which Fuzzy C-Means (FCM) is modified with spatial constraints based on kernel-induced distance (KFCM_S). For flood detection, a Deep QNN is used, trained with the designed optimization algorithm ACSSOA, which is newly formed by hybridizing Anti Corona Virus Optimization (ACVO) with the Shuffled Shepherd Optimization Algorithm (SSOA). The devised method was evaluated on the Kerala Floods database and achieved the highest segmentation accuracy, testing accuracy, sensitivity, and specificity, at 0.904, 0.914, 0.927, and 0.920, respectively.
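A compact NumPy sketch of the segmentation idea is given below: kernel fuzzy c-means with a spatial penalty computed from a 3x3 mean-filtered image (the KFCM_S flavour). A single Gaussian kernel replaces the paper's multi-kernel distance, the ACSSOA-trained Deep QNN detection stage is not reproduced, and all parameter values are illustrative.

```python
# Kernel fuzzy c-means with a spatial penalty (KFCM_S flavour), simplified to a
# single Gaussian kernel; the paper's multi-kernel weighting is not reproduced.
import numpy as np

def gaussian_k(x, v, sigma=1.0):
    return np.exp(-((x[:, None] - v[None, :]) ** 2) / (sigma ** 2))  # (N, C) kernel values

def kfcm_s(img, n_clusters=2, m=2.0, alpha=0.5, iters=30, sigma=1.0):
    x = img.ravel().astype(float)                          # pixel intensities, (N,)
    xb = np.pad(img, 1, mode="edge")
    xbar = np.stack([xb[i:i + img.shape[0], j:j + img.shape[1]]
                     for i in range(3) for j in range(3)]).mean(0).ravel()  # 3x3 mean image
    v = np.linspace(x.min(), x.max(), n_clusters)          # initial cluster centres
    for _ in range(iters):
        d = (1 - gaussian_k(x, v, sigma)) + alpha * (1 - gaussian_k(xbar, v, sigma)) + 1e-9
        u = d ** (-1.0 / (m - 1))
        u /= u.sum(axis=1, keepdims=True)                  # fuzzy memberships, (N, C)
        um = u ** m
        kx, kb = gaussian_k(x, v, sigma), gaussian_k(xbar, v, sigma)
        num = (um * (kx * x[:, None] + alpha * kb * xbar[:, None])).sum(0)
        den = (um * (kx + alpha * kb)).sum(0)
        v = num / den                                      # kernel-weighted centre update
    return u.argmax(axis=1).reshape(img.shape), v

labels, centres = kfcm_s(np.random.rand(32, 32))
print(labels.shape, centres)
```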
Citations: 0
Hybrid Optimization-Based Neural Network Classifier for Software Defect Prediction
IF 1.6 Q3 Computer Science Pub Date : 2023-06-07 DOI: 10.1142/s0219467824500451
M. Prashanthi, M. Chandra Mohan
Software is applied in many areas, so software quality is very important. Software defect prediction (SDP) is used to find software defects and improve quality, and robustness and reliability are the major concerns with existing SDP approaches. Hence, in this paper a hybrid optimization-based neural network (Optimized NN) is developed for effective detection of defects in software. The Optimized NN-based SDP involves two main steps: feature selection and prediction with the Optimized NN. The data are first passed to the feature selection module, where the Relief algorithm selects the features most relevant to distinguishing defective from non-defective modules. The selected features are fed to the SDP module, and the NN classifier is optimally tuned by a hybrid optimization formed by integrating the social spider algorithm (SSA) and the gray wolf optimizer (GWO). A comparative analysis of the developed prediction model shows the effectiveness of the proposed method, which attains a maximum accuracy of 93.64%, maximum sensitivity of 95.14%, maximum specificity of 99%, maximum F-score of 93.53%, and maximum precision of 99% under k-fold evaluation.
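To make the optimization step concrete, the toy loop below tunes the weights of a one-layer classifier with a plain gray-wolf-optimizer update on synthetic data. It shows only the GWO half of the SSA+GWO hybrid named above; the data, fitness function, and hyperparameters are invented for illustration.

```python
# Toy gray-wolf-optimizer loop tuning the weights of a tiny one-layer classifier on
# synthetic defect data; this illustrates the GWO update only, not the SSA+GWO hybrid.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                        # 200 samples, 5 Relief-selected features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)      # synthetic defect labels

def error(w):
    logits = X @ w[:5] + w[5]                        # 5 weights + 1 bias
    return np.mean((logits > 0) != y)                # misclassification rate as fitness

dim, n_wolves, iters = 6, 20, 50
wolves = rng.uniform(-1, 1, size=(n_wolves, dim))
for t in range(iters):
    fitness = np.array([error(w) for w in wolves])
    alpha, beta, delta = wolves[np.argsort(fitness)[:3]]   # the three best wolves lead
    a = 2 - 2 * t / iters                                  # exploration factor decays to 0
    for i in range(n_wolves):
        new = np.zeros(dim)
        for leader in (alpha, beta, delta):
            r1, r2 = rng.random(dim), rng.random(dim)
            A, C = 2 * a * r1 - a, 2 * r2
            new += leader - A * np.abs(C * leader - wolves[i])
        wolves[i] = new / 3                                # average pull towards the leaders
print("error of best wolf:", error(alpha))
```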
Citations: 0
A Novel Image Recovery from Moving Water Surface Using Multi-Objective Bispectrum Method
IF 1.6 Q3 Computer Science Pub Date : 2023-06-05 DOI: 10.1142/s0219467824500384
K. P. Kumar, M. Rao, M. Venkatanarayana
The processing of underwater color images currently faces several challenges in the image degradation field, including color distortion and image blurring caused by the scattering medium. Moreover, obtaining suitable multi-frame super-resolution images requires recovering a sufficient number of frames. Traditionally, the shift between images is estimated directly from the under-sampled low-resolution (LR) images; high-frequency LR content, however, is unreliable owing to the aliasing caused by sub-sampling, which also degrades the recovery accuracy. This work implements a novel image recovery model for scenes viewed through a moving water surface by adopting multi-objective adaptive higher-order spectral analysis. The model has three main phases: image pre-processing, lucky region selection, and image recovery. The bicoherence method and the Dice coefficient method are adopted for lucky region selection, and the multi-objective adaptive bispectrum method is then used to recover the image from the moving water surface. An improved Adaptive Fitness-oriented Random number-based Galactic Swarm Optimization (AFR-GSO) algorithm is used to optimize the constraints of the bispectrum method. The experimental results verify that the proposed model improves image quality over existing techniques.
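The quantity underlying bicoherence-based lucky region selection is the bispectrum. The sketch below is a generic direct (FFT-based) estimator for a 1-D signal, B(f1, f2) = E[X(f1) X(f2) X*(f1 + f2)]; it is not the paper's multi-objective adaptive variant, and the segment length and windowing are arbitrary choices.

```python
# Direct FFT-based estimate of the bispectrum of a 1-D signal, averaged over
# segments; this is the generic estimator behind bicoherence, not the paper's method.
import numpy as np

def bispectrum(segments):
    """segments: (n_segments, n) array of signal blocks."""
    n = segments.shape[1]
    Xf = np.fft.fft(segments * np.hanning(n), axis=1)     # windowed per-segment spectra
    B = np.zeros((n // 2, n // 2), dtype=complex)
    for X in Xf:
        for f1 in range(n // 2):
            for f2 in range(n // 2 - f1):                  # keep f1 + f2 inside the band
                B[f1, f2] += X[f1] * X[f2] * np.conj(X[f1 + f2])
    return B / len(segments)                               # averaged triple product

sig = np.random.randn(8, 64)                               # 8 segments of 64 samples
B = bispectrum(sig)
print(B.shape, np.abs(B).max())
```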
Citations: 0
An Improved COVID-19 Lung X-Ray Image Classification Algorithm Based on ConvNeXt Network
IF 1.6 Q3 Computer Science Pub Date : 2023-05-22 DOI: 10.1142/s0219467824500360
Fuxiang Liu, Chen Zang, Junqi Shi, Weiyu He, Yubo Liang, Lei Li
The novel coronavirus that appeared in 2019 has, because of its high contagiousness, infected large numbers of patients worldwide. To detect sources of infection in time and cut off the chain of transmission, we developed a new chest X-ray (CXR) image classification algorithm for COVID-19 that is accurate, simple to operate, and fast. The algorithm is based on the ConvNeXt pure convolutional neural network: we adjusted the network structure and the loss function, added new data augmentation methods, and introduced an attention mechanism. Compared with classical convolutional neural network classification algorithms such as AlexNet, ResNet-34, ResNet-50, ResNet-101, ConvNeXt-Tiny, ConvNeXt-Small, and ConvNeXt-Base, the improved algorithm performs better on the COVID dataset.
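A minimal sketch of adapting torchvision's ConvNeXt-Tiny to a three-class CXR task is shown below. The three-class label set (COVID / normal / pneumonia), hyperparameters, and augmentations are assumptions, and the paper's modified loss and attention module are not reproduced; a recent torchvision (>= 0.13) is assumed for the `weights=` API.

```python
# Sketch of re-heading torchvision's ConvNeXt-Tiny for a 3-class chest X-ray task;
# the label set, augmentations, and optimizer settings are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision.models import convnext_tiny
from torchvision import transforms

model = convnext_tiny(weights=None)                  # assumes torchvision >= 0.13
in_features = model.classifier[2].in_features        # final Linear layer of the head
model.classifier[2] = nn.Linear(in_features, 3)      # re-head for 3 CXR classes

augment = transforms.Compose([                       # typical CXR training augmentations
    transforms.RandomResizedCrop(224, scale=(0.8, 1.0)),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])

criterion = nn.CrossEntropyLoss(label_smoothing=0.1)
optimizer = torch.optim.AdamW(model.parameters(), lr=4e-3, weight_decay=0.05)

logits = model(torch.randn(2, 3, 224, 224))          # dummy batch in place of a DataLoader
loss = criterion(logits, torch.tensor([0, 2]))
loss.backward(); optimizer.step()
print(logits.shape, loss.item())
```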
Citations: 1
Detecting Epileptic Seizures Using Symplectic Geometry Decomposition-Based Features and Gaussian Deep Boltzmann Machines
IF 1.6 Q3 Computer Science Pub Date : 2023-05-05 DOI: 10.1142/s021946782450044x
K. Visalini, Saravanan Alagarsamy, S. Raja
Studies estimate that about 1 percent of the global human population is affected by epileptic seizures. A seizure is characterized by excessive neuronal discharge in the brain and degrades patients' quality of life to a large extent; children unaware of a sudden onset can suffer severe injury or even death. Machine-learning-based epileptic seizure detection from EEG (electroencephalogram) signals has long been an active area of research, but most works rely on correlated non-linear features extracted from the EEG signals, which incurs a high computational overhead and hampers their application in real-time clinical diagnosis. This study proposes a robust seizure detection framework using a Gaussian Deep Boltzmann Machine-based classifier and Symplectic Geometry Decomposition (SGD)-based features. The simplified eigenvalues derived through the Symplectic Similarity Transform (SST) serve directly as feature vectors for the classifier, eliminating the need for a separate feature extraction procedure. The study also examines the transferability of the suggested framework in discriminating seizures in both neonatal and pediatric subjects, experimenting with classical annotated datasets. The model yields a mean accuracy of about 97.91% and an F1 score of 0.935 in pediatric seizure detection, and a mean sensitivity and specificity of 99.05% and 98.28%, respectively, in neonatal seizure detection, making it comparable to available state-of-the-art seizure detection frameworks.
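As a rough sketch of the feature side only, the block below builds a Hankel (trajectory) matrix from an EEG segment and uses its leading singular values as a feature vector. An ordinary SVD stands in for the symplectic similarity transform described above, and the Gaussian Deep Boltzmann Machine classifier is not reproduced; the segment length and embedding dimension are arbitrary.

```python
# Trajectory-matrix spectral features for an EEG segment: an ordinary SVD of the
# Hankel matrix stands in here for the symplectic similarity transform of the paper.
import numpy as np

def trajectory_features(segment, embed_dim=32, n_values=8):
    n = len(segment) - embed_dim + 1
    hankel = np.stack([segment[i:i + embed_dim] for i in range(n)])  # (n, embed_dim)
    s = np.linalg.svd(hankel, compute_uv=False)                      # simplified "eigenvalues"
    return s[:n_values] / s.sum()                                    # normalised leading spectrum

rng = np.random.default_rng(1)
seizure_like = np.sin(np.linspace(0, 60 * np.pi, 512)) + 0.1 * rng.normal(size=512)
background = rng.normal(size=512)
X = np.stack([trajectory_features(seizure_like), trajectory_features(background)])
print(X.round(3))   # rhythmic activity concentrates energy in the first few values
```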
Citations: 0
A Deep Convolutional Generative Adversarial Network (DC-GAN) and Variational Auto Encoders (VAE) Models with Transfer Learning Approaches for Diabetic Retinopathy Detection
IF 1.6 Q3 Computer Science Pub Date : 2023-04-12 DOI: 10.1142/s0219467823400090
Y. Sravani Devi, S. Phani Kumar
Citations: 0