Efficient Cybersecurity Model Using Wavelet Deep CNN and Enhanced Rain Optimization Algorithm
Pub Date: 2023-03-31. DOI: 10.1142/s0219467824500487
V. Lavanya, P. C. Sekhar
Cybersecurity has received greater attention in recent years due to the emergence of the Internet of Things (IoT) and computer networks (CNs). Because of the massive increase in Internet access, various forms of malware have emerged and pose significant threats to computer security. The numerous computing processes across the network run a high risk of being tampered with or exploited, which necessitates effective intrusion detection systems. It is therefore essential to build an effective cybersecurity model that detects anomalies and cyber-attacks in the network. This work introduces a new method, the Wavelet Deep Convolutional Neural Network (WDCNN), to classify cyber-attacks. The presented network combines WDCNN with the Enhanced Rain Optimization Algorithm (EROA) to minimize the network loss. The algorithm is designed to detect attacks in large-scale data and reduces the complexity of detection while maximizing detection accuracy. The proposed method is implemented in Python. Classification is evaluated on two widely used datasets, KDD Cup 1999 and CICMalDroid 2020, and the performance of WDCNN_EROA is assessed using specificity, accuracy, precision, F-measure, and recall. The results show that the proposed method is about 98.72% accurate on the first dataset and 98.64% on the second.
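As a rough illustration of the wavelet-plus-CNN idea (the EROA weight optimization itself is not reproduced here), the sketch below applies a 1D discrete wavelet transform to tabular network records, such as the 41-feature KDD Cup 1999 vectors, before feeding them to a small convolutional classifier. The layer sizes, wavelet choice (db4), and gradient-based loss are illustrative assumptions, not the authors' exact configuration.

```python
import numpy as np
import pywt
import torch
import torch.nn as nn

def wavelet_features(x, wavelet="db4"):
    # Single-level 1D DWT along the feature axis; concatenating the
    # approximation and detail coefficients gives the "wavelet" input.
    cA, cD = pywt.dwt(x, wavelet)
    return np.concatenate([cA, cD], axis=-1).astype("float32")

class WaveletCNN(nn.Module):
    # Small 1D CNN over wavelet coefficients (layer sizes are illustrative).
    def __init__(self, n_classes):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(32, n_classes),
        )

    def forward(self, x):            # x: (batch, n_coeffs)
        return self.net(x.unsqueeze(1))

# Toy run on random stand-ins for 41-feature KDD Cup 1999 records.
X = np.random.rand(8, 41).astype("float32")
Xw = torch.from_numpy(wavelet_features(X))
model = WaveletCNN(n_classes=5)
loss = nn.CrossEntropyLoss()(model(Xw), torch.randint(0, 5, (8,)))
loss.backward()                      # EROA would replace/augment this step
```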
{"title":"Efficient Cybersecurity Model Using Wavelet Deep CNN and Enhanced Rain Optimization Algorithm","authors":"V. Lavanya, P. C. Sekhar","doi":"10.1142/s0219467824500487","DOIUrl":"https://doi.org/10.1142/s0219467824500487","url":null,"abstract":"Cybersecurity has received greater attention in modern times due to the emergence of IoT (Internet-of-Things) and CNs (Computer Networks). Because of the massive increase in Internet access, various malicious malware have emerged and pose significant computer security threats. The numerous computing processes across the network have a high risk of being tampered with or exploited, which necessitates developing effective intrusion detection systems. Therefore, it is essential to build an effective cybersecurity model to detect the different anomalies or cyber-attacks in the network. This work introduces a new method known as Wavelet Deep Convolutional Neural Network (WDCNN) to classify cyber-attacks. The presented network combines WDCNN with Enhanced Rain Optimization Algorithm (EROA) to minimize the loss in the network. This proposed algorithm is designed to detect attacks in large-scale data and reduces the complexities of detection with maximum detection accuracy. The proposed method is implemented in PYTHON. The classification process is completed with the help of the two most famous datasets, KDD cup 1999 and CICMalDroid 2020. The performance of WDCNN_EROA can be assessed using parameters like specificity, accuracy, precision F-measure and recall. The results showed that the proposed method is about 98.72% accurate for the first dataset and 98.64% for the second dataset.","PeriodicalId":44688,"journal":{"name":"International Journal of Image and Graphics","volume":" ","pages":""},"PeriodicalIF":1.6,"publicationDate":"2023-03-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43506448","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Shuffle Attention U-Net for Speech Enhancement in Time Domain
Pub Date: 2023-03-31. DOI: 10.1142/s0219467824500438
Chaitanya Jannu, S. Vanambathina
Over the past 10 years, deep learning has enabled significant advances in the enhancement of noisy speech. In end-to-end speech enhancement, deep neural networks transform a noisy speech signal into a clean one directly in the time domain, without any conversion or mask estimation. Recently, U-Net-based models have achieved good enhancement performance. Despite this, some of them neglect context-related information and detailed features of the input speech when using ordinary convolutions. To address these issues, recent studies have improved model performance by adding network modules such as attention mechanisms and long short-term memory (LSTM). In this work, we propose a new U-Net-based speech enhancement model that uses a novel lightweight and efficient Shuffle Attention (SA) mechanism, a Gated Recurrent Unit (GRU), and residual blocks with dilated convolutions, where each residual block is followed by a multi-scale convolution block (MSCB). The proposed hybrid structure enables temporal context aggregation in the time domain. The advantage of the shuffle attention mechanism is that channel and spatial attention are applied simultaneously to each sub-feature, suppressing potential noise while highlighting the relevant semantic feature areas by combining the same features from all locations. The MSCB extracts rich temporal features. To capture the correlation between neighboring noisy speech frames, a two-layer GRU is added in the bottleneck of the U-Net. Experimental results demonstrate that the proposed model outperforms existing models in terms of short-time objective intelligibility (STOI) and perceptual evaluation of speech quality (PESQ).
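The shuffle attention block described above follows the SA-Net design; a minimal 2D PyTorch sketch is given below, assuming feature maps whose channel count is divisible by twice the group count. The paper's exact parameterization for time-domain speech features may differ.

```python
import torch
import torch.nn as nn

class ShuffleAttention(nn.Module):
    """SA-style block: per-group channel + spatial attention, then shuffle."""
    def __init__(self, channels, groups=8):
        super().__init__()
        assert channels % (2 * groups) == 0
        self.groups = groups
        c = channels // (2 * groups)                     # per-sub-feature width
        self.cw = nn.Parameter(torch.zeros(1, c, 1, 1))  # channel-attn scale
        self.cb = nn.Parameter(torch.ones(1, c, 1, 1))   # channel-attn bias
        self.sw = nn.Parameter(torch.zeros(1, c, 1, 1))  # spatial-attn scale
        self.sb = nn.Parameter(torch.ones(1, c, 1, 1))   # spatial-attn bias
        self.gn = nn.GroupNorm(c, c)

    def forward(self, x):
        b, c, h, w = x.shape
        x = x.view(b * self.groups, -1, h, w)
        xc, xs = x.chunk(2, dim=1)             # split each group in two
        # channel attention: global average pool -> per-channel gate
        xc = xc * torch.sigmoid(self.cw * xc.mean((2, 3), keepdim=True) + self.cb)
        # spatial attention: group norm -> per-position gate
        xs = xs * torch.sigmoid(self.sw * self.gn(xs) + self.sb)
        out = torch.cat([xc, xs], dim=1).view(b, c, h, w)
        # channel shuffle so information flows across groups
        return out.view(b, 2, c // 2, h, w).transpose(1, 2).reshape(b, c, h, w)

feats = torch.randn(2, 64, 32, 32)                # e.g. U-Net encoder features
print(ShuffleAttention(64)(feats).shape)          # torch.Size([2, 64, 32, 32])
```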
{"title":"Shuffle Attention U-Net for Speech Enhancement in Time Domain","authors":"Chaitanya Jannu, S. Vanambathina","doi":"10.1142/s0219467824500438","DOIUrl":"https://doi.org/10.1142/s0219467824500438","url":null,"abstract":"Over the past 10 years, deep learning has enabled significant advancements in the improvement of noisy speech. In an end-to-end speech enhancement, the deep neural networks transform a noisy speech signal to a clean speech signal in the time domain directly without any conversion or estimation of mask. Recently, the U-Net-based models achieved good enhancement performance. Despite this, some of them may neglect context-related information and detailed features of input speech in case of ordinary convolution. To address the above issues, recent studies have upgraded the performance of the model by adding various network modules such as attention mechanisms, long and short-term memory (LSTM). In this work, we propose a new U-Net-based speech enhancement model using a novel lightweight and efficient Shuffle Attention (SA), Gated Recurrent Unit (GRU), residual blocks with dilated convolutions. Residual block will be followed by a multi-scale convolution block (MSCB). The proposed hybrid structure enables the temporal context aggregation in time domain. The advantage of shuffle attention mechanism is that the channel and spatial attention are carried out simultaneously for each sub-feature in order to prevent potential noises while also highlighting the proper semantic feature areas by combining the same features from all locations. MSCB is employed for extracting rich temporal features. To represent the correlation between neighboring noisy speech frames, a two Layer GRU is added in the bottleneck of U-Net. The experimental findings demonstrate that the proposed model outperformed the other existing models in terms of short-time objective intelligibility (STOI), and perceptual evaluation of the speech quality (PESQ).","PeriodicalId":44688,"journal":{"name":"International Journal of Image and Graphics","volume":" ","pages":""},"PeriodicalIF":1.6,"publicationDate":"2023-03-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44076899","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Classification and Analysis of Pistachio Species Through Neural Embedding-Based Feature Extraction and Small-Scale Machine Learning Techniques
Pub Date: 2023-03-02. DOI: 10.1142/s0219467824500323
S. Sathish Kumar, A. Sigappi, G. Thomas, Y. Harold Robinson, S. Raja
Pistachios are a tremendous source of fiber, protein, antioxidants, healthy fats, and other nutrients such as thiamine and vitamin B6. They may help people lose weight, lower cholesterol and blood sugar levels, and improve gut, eye, and blood vessel health. The two main varieties farmed and exported in Turkey are Kirmizi and Siirt pistachios. Knowing how to detect the type of pistachio is essential, as it plays an important role in trade. This study aims to classify these two types of pistachios and analyze the performance of various small-scale machine learning algorithms. A total of 2148 sample images of the two kinds were considered, comprising 1232 of the Kirmizi type and 916 of the Siirt type. To evaluate the models fairly, stratified random sampling was applied to the dataset. For feature extraction, we used deep neural network-based embeddings to acquire vector representations of the images. The classification of pistachio species was then performed using a variety of small-scale machine learning algorithms trained on these feature vectors. The best success rate, 97.20%, was obtained by Logistic Regression on features extracted from the penultimate layer of the Painters network. Model performance was evaluated through class accuracy, precision, recall, F1 score, and area under the curve (AUC). The outcomes show that the suggested method can quickly and precisely identify different varieties of pistachios while meeting agricultural production needs.
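A compact sketch of the embedding-plus-classifier pipeline follows. Since the Painters network weights are not bundled with common libraries, a pretrained ResNet-50 with its final FC layer removed stands in as the penultimate-layer feature extractor; the stratified split mirrors the stratified random sampling mentioned above.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Pretrained backbone as embedding extractor; replacing the final FC layer
# with Identity exposes the penultimate-layer activations as the embedding.
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def embed(images):                    # images: list of PIL.Image objects
    batch = torch.stack([preprocess(im) for im in images])
    return backbone(batch).numpy()    # (n, 2048) feature vectors

# With X = embed(pistachio_images) and y the labels (0 = Kirmizi, 1 = Siirt),
# a stratified split preserves the 1232/916 class ratio in both folds:
# X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, test_size=0.2)
# clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
# print(clf.score(X_te, y_te))
```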
{"title":"Classification and Analysis of Pistachio Species Through Neural Embedding-Based Feature Extraction and Small-Scale Machine Learning Techniques","authors":"S. Sathish Kumar, A. Sigappi, G. Thomas, Y. Harold Robinson, S. Raja","doi":"10.1142/s0219467824500323","DOIUrl":"https://doi.org/10.1142/s0219467824500323","url":null,"abstract":"Pistachios are a tremendous source of fiber, protein, antioxidants, healthy fats, and other minerals like thiamine and vitamin B6. They may help people lose weight, lower cholesterol, and blood sugar levels, lead to better gut, eye, and blood vessel health. The two main varieties farmed and exported in Turkey are kirmizi and siirt pistachios. Understanding how to detect the type of pistachio is essential as it plays an important role in trade. In this study, it is aimed to classify these two types of pistachios and analyze the performance of the various small-scale machine learning algorithms. 2148 sample images for these two kinds of pistachios were considered for this study which includes 1232 of Kirmizi type and 916 of Siirt type. In order to evaluate the model fairly, stratified random sampling is applied on the dataset. For feature extraction, we used deep neural network-based embeddings to acquire the vector representation of images. The classification of pistachio species is then performed using a variety of small-scale machine learning algorithms29,31 that have been trained using these feature vectors. As a result of this study, the success rate obtained from Logistic Regression through features extracted from the penultimate layer of Painters network is 97.20%. The performance of the models was evaluated through Class Accuracy, Precision, Recall, F1 Score, and values of Area under the curve (AUC). The outcomes show that the method suggested in this study may quickly and precisely identify different varieties of pistachios while also meeting agricultural production needs.","PeriodicalId":44688,"journal":{"name":"International Journal of Image and Graphics","volume":" ","pages":""},"PeriodicalIF":1.6,"publicationDate":"2023-03-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47575152","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Detection Accuracy Improvement on One-Stage Object Detection Using AP-Loss-Based Ranking Module and ResNet-152 Backbone
Pub Date: 2023-02-22. DOI: 10.1142/s021946782450030x
Suresh Shanmugasundaram, Natarajan Palaniappan
Localization loss and classification loss are optimized simultaneously to train one-stage object detectors. Because of the large number of anchors, the severe foreground–background class imbalance causes significant classification loss. This paper discusses replacing the classification module with a ranking module to mitigate this difficulty, applying the Average-Precision loss (AP-loss) to the ranking module. An optimization algorithm that blends the error-driven update scheme of perceptron learning with deep network backpropagation makes the AP-loss effective to train and handles the foreground–background imbalance. A one-stage detector with AP-loss and a ResNet-152 backbone improves detection performance over classification-loss-based detectors.
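For intuition, the snippet below computes the (non-differentiable) AP-loss target quantity, one minus average precision over ranked anchor scores. The paper's actual contribution, the error-driven perceptron-style update that makes this quantity trainable alongside backpropagation, is not reproduced here.

```python
import numpy as np

def ap_loss(scores, labels):
    """1 - average precision over anchors ranked by score.

    scores: (N,) predicted ranking scores for all anchors
    labels: (N,) 1 for foreground anchors, 0 for background
    """
    order = np.argsort(-scores)               # descending-score ranking
    ranked = labels[order].astype(bool)
    if not ranked.any():                      # no foreground anchors
        return 1.0
    ranks = np.arange(1, len(ranked) + 1)
    precision_at_pos = np.cumsum(ranked)[ranked] / ranks[ranked]
    return 1.0 - precision_at_pos.mean()

# example: three foreground anchors among six candidates
print(ap_loss(np.array([0.9, 0.8, 0.3, 0.7, 0.2, 0.6]),
              np.array([1, 0, 1, 1, 0, 0])))   # ~0.244
```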
{"title":"Detection Accuracy Improvement on One-Stage Object Detection Using Ap-Loss-Based Ranking Module and Resnet-152 Backbone","authors":"Suresh Shanmugasundaram, Natarajan Palaniappan","doi":"10.1142/s021946782450030x","DOIUrl":"https://doi.org/10.1142/s021946782450030x","url":null,"abstract":"Localization-loss and classification-loss are optimized at the same time to train the one-stage object detectors. Because of the large number of anchors, the severe foreground–background class disproportion causes significant classification-loss. This paper discusses using a ranking module instead of the classification module to mitigate this difficulty and also Average-Precision loss (AP-loss) is utilized on the ranking module. An optimization algorithm is used to make the AP-loss as effective as possible. Optimization algorithm blends the error-driven updating method of perceptron learning and the deep network backpropagation technique. This optimization algorithm handles the foreground–background class disproportion issues. One-stage detector with AP-loss and backbone with ResNet-152 attains improvement in the detection performance compared to the classification-losses-based detectors.","PeriodicalId":44688,"journal":{"name":"International Journal of Image and Graphics","volume":" ","pages":""},"PeriodicalIF":1.6,"publicationDate":"2023-02-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46676507","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Feature-Based Object Detection and Tracking: A Systematic Literature Review
Pub Date: 2023-02-03. DOI: 10.1142/s0219467824500372
Nurul Izzatie Husna Fauzi, Z. Musa, Fadhl Hujainah
Correct object detection plays a key role in generating accurate object tracking results. Feature-based methods can handle the critical process of extracting an object's features. This paper investigates object tracking using feature-based methods in terms of: (1) identifying and analyzing the existing methods; (2) reporting and scrutinizing the evaluation performance metrics and how they are used to measure the effectiveness of object tracking and detection; (3) revealing and investigating the challenges that affect the accuracy of the identified tracking methods; (4) measuring the effectiveness of the identified methods, in terms of how far those challenges can impact accuracy and precision based on the reported evaluation metrics; and (5) presenting potential future directions for improvement. The review was conducted following the standard systematic literature review (SLR) guidelines of Kitchenham and Charters. Initially, 157 prospective studies were identified. Through a rigorous study selection strategy, 32 relevant studies were selected to address the listed research questions. Thirty-two methods were identified and analyzed in terms of their aims, the improvements they introduce, and the results achieved, along with a new outlook on classifying the identified methods according to the feature-based method used in the detection and tracking process.
{"title":"Feature-Based Object Detection and Tracking: A Systematic Literature Review","authors":"Nurul Izzatie Husna Fauzi, Z. Musa, Fadhl Hujainah","doi":"10.1142/s0219467824500372","DOIUrl":"https://doi.org/10.1142/s0219467824500372","url":null,"abstract":"Correct object detection plays a key role in generating an accurate object tracking result. Feature-based methods have the capability of handling the critical process of extracting features of an object. This paper aims to investigate object tracking using feature-based methods in terms of (1) identifying and analyzing the existing methods; (2) reporting and scrutinizing the evaluation performance matrices and their implementation usage in measuring the effectiveness of object tracking and detection; (3) revealing and investigating the challenges that affect the accuracy performance of identified tracking methods; (4) measuring the effectiveness of identified methods in terms of revealing to what extent the challenges can impact the accuracy and precision performance based on the evaluation performance matrices reported; and (5) presenting the potential future directions for improvement. The review process of this research was conducted based on standard systematic literature review (SLR) guidelines by Kitchenam’s and Charters’. Initially, 157 prospective studies were identified. Through a rigorous study selection strategy, 32 relevant studies were selected to address the listed research questions. Thirty-two methods were identified and analyzed in terms of their aims, introduced improvements, and results achieved, along with presenting a new outlook on the classification of identified methods based on the feature-based method used in detection and tracking process.","PeriodicalId":44688,"journal":{"name":"International Journal of Image and Graphics","volume":" ","pages":""},"PeriodicalIF":1.6,"publicationDate":"2023-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43337641","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Optimized Deep CNN with Deviation Relevance-based LBP for Skin Cancer Detection: Hybrid Metaheuristic Enabled Feature Selection
Pub Date: 2023-02-02. DOI: 10.1142/s0219467824500232
B. K. M. Enturi, A. Suhasini, Narayana Satyala
Segmentation of skin lesions is a significant and demanding task in dermoscopy images. This paper proposes a new skin cancer recognition scheme comprising pre-processing, segmentation, feature extraction, optimal feature selection, and classification. After pre-processing, the images are segmented via Otsu thresholding. The third phase is feature extraction, where Deviation Relevance-based Local Binary Pattern (DRLBP), Gray-Level Co-occurrence Matrix (GLCM), and Gray-Level Run-Length Matrix (GLRM) features are extracted. From these, the optimal features are chosen via the Particle Updated WOA (PU-WOA) model. Classification of the skin lesion is then performed by an optimized DCNN and an NN; to make the classification more precise, the DCNN is optimized by the introduced algorithm. The results show a higher accuracy of 0.998737 compared with extant models such as the IPSO, IWOA, PSO+CNN, WOA+CNN, and CNN schemes.
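A sketch of the segmentation and texture-feature steps using scikit-image is shown below. The plain uniform LBP stands in for the deviation relevance-based DRLBP variant, and GLRM features (not available in scikit-image) are omitted.

```python
import numpy as np
from skimage import feature, filters

def lesion_features(gray):
    """gray: 2D uint8 dermoscopy image (single channel)."""
    # Otsu thresholding; dermoscopic lesions are typically darker than skin
    mask = gray < filters.threshold_otsu(gray)

    # uniform LBP histogram (stand-in for the DRLBP variant in the paper)
    lbp = feature.local_binary_pattern(gray, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)

    # GLCM texture statistics
    glcm = feature.graycomatrix(gray, distances=[1], angles=[0], levels=256,
                                symmetric=True, normed=True)
    glcm_feats = [feature.graycoprops(glcm, p)[0, 0]
                  for p in ("contrast", "homogeneity", "energy", "correlation")]

    return np.concatenate([lbp_hist, glcm_feats, [mask.mean()]])

gray = (np.random.rand(128, 128) * 255).astype(np.uint8)   # toy input
print(lesion_features(gray).shape)                          # (15,)
```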
{"title":"Optimized Deep CNN with Deviation Relevance-based LBP for Skin Cancer Detection: Hybrid Metaheuristic Enabled Feature Selection","authors":"B. K. M. Enturi, A. Suhasini, Narayana Satyala","doi":"10.1142/s0219467824500232","DOIUrl":"https://doi.org/10.1142/s0219467824500232","url":null,"abstract":"Segmentation of skin lesions is a significant and demanding task in dermoscopy images. This paper proposes a new skin cancer recognition scheme, with: “Pre-processing, Segmentation, Feature extraction, Optimal Feature Selection and Classification”. Here, pre-processing is done with certain processes. The pre-processed images are segmented via the “Otsu Thresholding model”. The third phase is feature extraction, where Deviation Relevance-based “Local Binary Pattern (DRLBP), Gray-Level Co-Occurrence Matrix (GLCM) features and Gray Level Run-Length Matrix (GLRM) features” are extracted. From these extracted features, the optimal features are chosen via Particle Updated WOA (PU-WOA) model. Subsequently, classification occurs via Optimized DCNN and NN to classify the skin lesion. To make the classification more precise, the DCNN is optimized by the introduced algorithm. The result has shown a higher accuracy of 0.998737, when compared with other extant models like IPSO, IWOA, PSO+CNN, WOA+CNN and CNN schemes.","PeriodicalId":44688,"journal":{"name":"International Journal of Image and Graphics","volume":" ","pages":""},"PeriodicalIF":1.6,"publicationDate":"2023-02-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46857798","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
EBMICQL: Improving Efficiency of Blockchain Miner Pools via Incremental and Continuous Q-Learning Framework
Pub Date: 2023-02-02. DOI: 10.1142/s0219467824500347
Mona Mulchandani, P. Nair
Blockchain mining pools reduce the computational load on individual miner nodes by distributing mining tasks. This distribution must be non-redundant, so that each miner can calculate block hashes with optimum efficiency. A wide variety of mining optimization methods have been proposed for this task, most of which distribute mining tasks via statistical request-processing models. These models segregate mining requests into non-redundant sets, each processed by an individual miner. However, this division of requests follows a static procedure and does not consider miner-specific parameters when creating the sets, which limits the overall efficiency of the underlying model and reduces its mining performance in real-time scenarios. To overcome this issue, this paper proposes an Incremental & Continuous Q-Learning framework for generating miner-specific task groups. The model first uses a Genetic Algorithm (GA) to improve individual miner performance, and then applies Q-Learning to individual mining requests. The GA is chosen because it helps maintain a better speed-to-power (S2P) ratio by optimizing the miner resources used during computation, while Q-Learning continuously tracks miner performance and creates performance-based mining pools at a per-miner level. Through Q-Learning, the model assigns capability-specific mining tasks to individual miner nodes; this capability-driven approach maximizes mining efficiency while maintaining QoS performance. The model was tested with different consensus methods, including Practical Byzantine Fault Tolerance (PBFT), Proof-of-Work (PoW), Proof-of-Stake (PoS), and Delegated PoS (DPoS), and its performance was evaluated in terms of mining delay, miner efficiency, number of redundant calculations per miner, and energy efficiency of mining nodes. Compared with various state-of-the-art mining optimization techniques, the proposed GA-based Q-Learning model reduced mining delay by 4.9%, improved miner efficiency by 7.4%, reduced redundant computations by 3.5%, and reduced the energy required for mining by 7.1%. Similar improvements were observed when the model was applied to different blockchain deployments, indicating good scalability and deployability across application scenarios.
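The per-miner Q-Learning component can be illustrated with a toy tabular learner, sketched below under the assumption that states are task types, actions are miner choices, and the reward is an observed efficiency score such as the S2P ratio. The GA stage and the blockchain integration are omitted.

```python
import numpy as np

class MinerQLearner:
    """Toy tabular Q-learner: states = task types, actions = miner choices."""
    def __init__(self, n_task_types, n_miners, lr=0.1, gamma=0.9, eps=0.1):
        self.q = np.zeros((n_task_types, n_miners))
        self.lr, self.gamma, self.eps = lr, gamma, eps

    def pick_miner(self, task_type):
        if np.random.rand() < self.eps:              # occasional exploration
            return np.random.randint(self.q.shape[1])
        return int(np.argmax(self.q[task_type]))     # best-known miner

    def update(self, task_type, miner, reward, next_task_type):
        # reward = observed efficiency of the assignment, e.g. an S2P score
        target = reward + self.gamma * self.q[next_task_type].max()
        self.q[task_type, miner] += self.lr * (target - self.q[task_type, miner])

# usage: after each mined request, feed the observed efficiency back in
ql = MinerQLearner(n_task_types=4, n_miners=10)
m = ql.pick_miner(task_type=2)
ql.update(task_type=2, miner=m, reward=0.8, next_task_type=1)
```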
{"title":"EBMICQL: Improving Efficiency of Blockchain Miner Pools via Incremental and Continuous Q-Learning Framework","authors":"Mona Mulchandani, P. Nair","doi":"10.1142/s0219467824500347","DOIUrl":"https://doi.org/10.1142/s0219467824500347","url":null,"abstract":"Blockchain mining pools assist in reducing computational load on individual miner nodes via distributing mining tasks. This distribution must be done in a non-redundant manner, so that each miner is able to calculate block hashes with optimum efficiency. To perform this task, a wide variety of mining optimization methods are proposed by researchers, and most of them distribute mining tasks via statistical request processing models. These models segregate mining requests into non-redundant sets, each of which will be processed by individual miners. But this division of requests follows a static procedure, and does not consider miner specific parameters for set creation, due to which overall efficiency of the underlying model is limited, which reduces its mining performance under real-time scenarios. To overcome this issue, an Incremental & Continuous Q-Learning Framework for generation of miner-specific task groups is proposed in this text. The model initially uses a Genetic Algorithm (GA) method to improve individual miner performance, and then applies Q-Learning to individual mining requests. The Reason for selecting GA model is that it assists in maintaining better speed-to-power (S2P) ratio by optimization of miner resources that are utilized during computations. While, the reason for selecting Q-Learning Model is that it is able to continuously identify miners performance, and create performance-based mining pools at a per-miner level. Due to application of Q-Learning, the model is able to assign capability specific mining tasks to individual miner nodes. Because of this capability-driven approach, the model is able to maximize efficiency of mining, while maintaining its QoS performance. The model was tested on different consensus methods including Practical Byzantine Fault Tolerance Algorithm (PBFT), Proof-of-Work (PoW), Proof-of-Stake (PoS), and Delegated PoS (DPoS), and its performance was evaluated in terms of mining delay, miner efficiency, number of redundant calculations per miner, and energy efficiency for mining nodes. It was observed that the proposed GA based Q-Learning Model was able to reduce mining delay by 4.9%, improve miners efficiency by 7.4%, reduce number of redundant computations by 3.5%, and reduce energy required for mining by 7.1% when compared with various state-of-the-art mining optimization techniques. Similar performance improvement was observed when the model was applied on different blockchain deployments, thus indicating better scalability and deployment capability for multiple application scenarios.","PeriodicalId":44688,"journal":{"name":"International Journal of Image and Graphics","volume":" ","pages":""},"PeriodicalIF":1.6,"publicationDate":"2023-02-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46230671","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Optimized Deep Neuro-Fuzzy Network with MapReduce Architecture for Acute Lymphoblastic Leukemia Classification and Severity Analysis
Pub Date: 2023-02-02. DOI: 10.1142/s0219467824500281
G. Mercy Bai, P. Venkadesh
Acute lymphoblastic leukemia (ALL), one of the most common life-threatening diseases, can be lethal within a few weeks if untreated. Early detection and analysis of leukemia is a key problem in disease diagnosis, and the methods available for classification are time-consuming. To overcome these issues, this paper develops a robust classification technique, the Horse Herd Whale Optimization-enabled Deep Neuro-Fuzzy Network (HHWO-enabled DNFN), for ALL classification and severity analysis using the MapReduce framework. The input image is first preprocessed and segmented, and the features needed to improve classification performance are extracted during the mapper phase; the HHWO optimizer incorporates the Horse Herd Optimization Algorithm (HOA) and the Whale Optimization Algorithm (WOA). Finally, severity analysis grades the levels of leukemia so that optimal treatment can be offered. The developed method performed better than existing methods, achieving a superior testing accuracy of 0.959, sensitivity of 0.965, and specificity of 0.966.
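As a flavor of the metaheuristic half of HHWO, here is a minimal Whale Optimization Algorithm loop; the hybrid's Horse Herd moves and the MapReduce plumbing are not shown, and the fitness function is a placeholder the reader supplies.

```python
import numpy as np

def whale_optimize(fitness, dim, n_whales=20, iters=100, bounds=(-1.0, 1.0)):
    """Plain WOA minimization loop (encircling / exploration / spiral moves)."""
    lo, hi = bounds
    pos = np.random.uniform(lo, hi, (n_whales, dim))
    best = min(pos, key=fitness).copy()
    for t in range(iters):
        a = 2.0 - 2.0 * t / iters                  # decreases linearly 2 -> 0
        for i in range(n_whales):
            A = 2 * a * np.random.rand(dim) - a
            C = 2 * np.random.rand(dim)
            if np.random.rand() < 0.5:
                if np.abs(A).mean() < 1:           # encircle the best solution
                    pos[i] = best - A * np.abs(C * best - pos[i])
                else:                              # explore around a random whale
                    rand = pos[np.random.randint(n_whales)]
                    pos[i] = rand - A * np.abs(C * rand - pos[i])
            else:                                  # log-spiral move toward best
                l = np.random.uniform(-1, 1, dim)
                pos[i] = np.abs(best - pos[i]) * np.exp(l) * np.cos(2 * np.pi * l) + best
            pos[i] = np.clip(pos[i], lo, hi)
            if fitness(pos[i]) < fitness(best):
                best = pos[i].copy()
    return best

# e.g. tune two hyperparameters by minimizing a validation-loss callback:
# best = whale_optimize(lambda p: val_loss(lr=10**p[0], reg=10**p[1]), dim=2)
print(whale_optimize(lambda p: float(np.sum(p**2)), dim=3))  # ~[0, 0, 0]
```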
{"title":"Optimized Deep Neuro-Fuzzy Network with MapPeduce Architecture for Acute Lymphoblastic Leukemia Classification and Severity Analysis","authors":"G. Mercy Bai, P. Venkadesh","doi":"10.1142/s0219467824500281","DOIUrl":"https://doi.org/10.1142/s0219467824500281","url":null,"abstract":"The most common life-threatening disease, acute lymphoblastic leukemia (ALL), can be lethal within a few weeks if untreated. The early detection and analysis of leukemia is a key dilemma in the field of disease diagnosis, and the methods available for the classification process are time-consuming. To overcome the issues, this paper develops a robust classification technique named Horse Herd Whale Optimization-enabled Deep Neuro-Fuzzy Network (HHWO-enabled DNFN method) for ALL classification and severity analysis using the MapReduce framework. The input image is first preprocessed and segmented, and the useful features necessary for improving the classification performance are extracted during the mapper phase, known as HHWO, which incorporates Horse Herd Optimization Algorithm (HOA) and Whale Optimization Algorithm (WOA). Finally, severity analysis of ALL is done to classify the levels of leukemia to offer optimal treatment. As a result, the developed method performed better than other existing methods, achieving superior performance with a greater testing accuracy of 0.959, sensitivity of 0.965, and specificity of 0.966, respectively.","PeriodicalId":44688,"journal":{"name":"International Journal of Image and Graphics","volume":" ","pages":""},"PeriodicalIF":1.6,"publicationDate":"2023-02-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43271727","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Real-Time Multi-Object Detection Using Enhanced Yolov5-7S on Multi-GPU for High-Resolution Video
Pub Date: 2023-02-02. DOI: 10.1142/s0219467824500190
Shakil A. Shaikh, Jayant J. Chopade, Mohini Pramod Sardey
Multi-object tracking in a video sequence can be performed by detecting and distinguishing the objects that appear in the sequence. In computer vision, robust multi-object tracking is a difficult problem to solve, and visual tracking of multiple objects is a vital part of an autonomous vehicle's vision technology. Wide-area video surveillance increasingly uses advanced imaging devices with higher megapixel resolutions and frame rates, creating a large demand for high-performance computation in video surveillance systems that process high-resolution video in real time. In this paper, we therefore use a single-stage framework to solve the MOT problem. We propose a novel architecture that efficiently uses one or multiple GPUs to process Full High Definition video in real time. For high-resolution video and images, the suggested approach is real-time multi-object detection based on an Enhanced Yolov5-7S on a multi-GPU vertex. We add one more layer at the top of the backbone to increase the resolution of the extracted feature maps, improving small-object detection and overall model accuracy. In terms of speed and accuracy, the proposed approach outperforms state-of-the-art techniques.
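A rough sketch of batching FHD frames across GPU replicas is given below, using the stock yolov5s model from torch.hub as a stand-in for the enhanced Yolov5-7S variant. The video file name is hypothetical, and a production pipeline would drive the replicas from separate threads or CUDA streams rather than sequentially.

```python
import cv2
import torch

# One detector replica per available GPU (CPU fallback); the stock yolov5s
# model stands in for the enhanced Yolov5-7S described above.
devices = [f"cuda:{i}" for i in range(torch.cuda.device_count())] or ["cpu"]
replicas = [torch.hub.load("ultralytics/yolov5", "yolov5s").to(d).eval()
            for d in devices]

def detect_batch(frames):
    """Split a batch of RGB frames round-robin across the GPU replicas."""
    chunks = [frames[i::len(replicas)] for i in range(len(replicas))]
    return [model(chunk)                       # yolov5 accepts lists of arrays
            for model, chunk in zip(replicas, chunks) if chunk]

cap = cv2.VideoCapture("traffic_1080p.mp4")    # hypothetical FHD input file
frames = []
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if len(frames) == 16:                      # batch frames for throughput
        for result in detect_batch(frames):
            result.print()                     # per-chunk detection summary
        frames.clear()
cap.release()
```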
{"title":"Real-Time Multi-Object Detection Using Enhanced Yolov5-7S on Multi-GPU for High-Resolution Video","authors":"Shakil A. Shaikh, Jayant J. Chopade, Mohini Pramod Sardey","doi":"10.1142/s0219467824500190","DOIUrl":"https://doi.org/10.1142/s0219467824500190","url":null,"abstract":"Multiple objects tracking in a video sequence can be performed by detecting and distinguishing the objects that appear in the sequence. In the context of computer vision, the robust multi-object tracking problem is a difficult problem to solve. Visual tracking of multiple objects is a vital part of an autonomous driving vehicle’s vision technology. Wide-area video surveillance is increasingly using advanced imaging devices with increased megapixel resolution and increased frame rates. As a result, there is a huge increase in demand for high-performance computation system of video surveillance systems for real-time processing of high-resolution videos. As a result, in this paper, we used a single stage framework to solve the MOT problem. We proposed a novel architecture in this paper that allows for the efficient use of one and multiple GPUs are used to process Full High Definition video in real time. For high-resolution video and images, the suggested approach is real-time multi-object detection based on Enhanced Yolov5-7S on Multi-GPU Vertex. We added one more layer at the top in backbone to increase the resolution of feature extracted image to detect small object and increase the accuracy of model. In terms of speed and accuracy, our proposed approach outperforms the state-of-the-art techniques.","PeriodicalId":44688,"journal":{"name":"International Journal of Image and Graphics","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-02-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135261056","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Optimized ResUNet++-Enabled Blood Vessel Segmentation for Retinal Fundus Image Based on Hybrid Meta-Heuristic Improvement
Pub Date: 2023-02-02. DOI: 10.1142/s0219467824500335
P. C. Sau, Manish Gupta, A. Bansal
In recent years, several studies have addressed automatic blood vessel segmentation based on unsupervised and supervised algorithms to reduce user intervention, and deep learning networks have been used to obtain highly accurate segmentation results. However, the incorrect segmentation of pathological information and poor micro-vascular segmentation remain challenges in existing methods for segmenting retinal blood vessels; varying vessel thickness, contextual feature fusion, and the perception of fine detail are also affected. This paper presents a deep learning-aided method to address these challenges. In the first phase, the retinal fundus images are preprocessed by black-ring removal, LAB conversion, CLAHE-based contrast enhancement, and grayscale conversion. Blood vessel segmentation is then performed by a new deep learning model termed optimized ResUNet++. As an improvement to this architecture, the activation function is optimized by the J-AGSO algorithm; the objective of the optimized ResUNet++-based segmentation is to minimize the binary cross-entropy loss. Finally, post-processing is carried out by median filtering and binary thresholding. On standard benchmark datasets, the proposed model outperforms existing approaches and attains enhanced performance.
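The preprocessing chain can be sketched with OpenCV as below; the intensity threshold used to approximate black-ring removal and the CLAHE parameters are assumptions, not the paper's exact settings.

```python
import cv2
import numpy as np

def preprocess_fundus(bgr):
    """Black-ring crop -> LAB -> CLAHE on luminance -> grayscale."""
    # approximate black-ring removal: crop to the bright circular field of
    # view (the intensity threshold of 10 is an assumption, tune per dataset)
    gray0 = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    ys, xs = np.where(gray0 > 10)
    bgr = bgr[ys.min():ys.max() + 1, xs.min():xs.max() + 1]

    # LAB conversion and CLAHE-based contrast enhancement on the L channel
    l, a, b = cv2.split(cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB))
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = cv2.cvtColor(cv2.merge((clahe.apply(l), a, b)), cv2.COLOR_LAB2BGR)

    # grayscale image handed to the segmentation network
    return cv2.cvtColor(enhanced, cv2.COLOR_BGR2GRAY)

# post-processing of a predicted probability map, as described above:
# mask = cv2.medianBlur(pred_uint8, 5) > 127   # median filter + threshold
```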
{"title":"Optimized ResUNet++-Enabled Blood Vessel Segmentation for Retinal Fundus Image Based on Hybrid Meta-Heuristic Improvement","authors":"P. C. Sau, Manish Gupta, A. Bansal","doi":"10.1142/s0219467824500335","DOIUrl":"https://doi.org/10.1142/s0219467824500335","url":null,"abstract":"In recent years, several studies have undergone automatic blood vessel segmentation based on unsupervised and supervised algorithms to reduce user interruption. Deep learning networks have been used to get highly accurate segmentation results. However, the incorrect segmentation of pathological information and low micro-vascular segmentation is considered the challenges present in the existing methods for segmenting the retinal blood vessel. It also affects different degrees of vessel thickness, contextual feature fusion in technique, and perception of details. A deep learning-aided method has been presented to address these challenges in this paper. In the first phase, the preprocessing is performed using the retinal fundus images employed by the black ring removal, LAB conversion, CLAHE-based contrast enhancement, and grayscale image. Thus, the blood vessel segmentation is performed by a new deep learning model termed optimized ResUNet[Formula: see text]. As an improvement to this deep learning architecture, the activation function is optimized by the J-AGSO algorithm. The objective function for the optimized ResUNet[Formula: see text]-based blood vessel segmentation is to minimize the binary cross-entropy loss function. Further, the post-processing of the images is carried out by median filtering and binary thresholding. By verifying the standard benchmark datasets, the proposed model outperforms and attains enhanced performance.","PeriodicalId":44688,"journal":{"name":"International Journal of Image and Graphics","volume":" ","pages":""},"PeriodicalIF":1.6,"publicationDate":"2023-02-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45710879","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}