A rapid increase in the number of online applications has led to exponential growth in traffic. In data centers, it is hard to dynamically balance such huge amounts of traffic while keeping track of server data. A load-balancing strategy is an effective solution for distributing this traffic. The major contribution of this research work is to improve network performance by designing a dynamic load-balancing algorithm based on server data using SDN, reducing controller overhead, and optimizing energy consumption in a server pool. The problem is formulated using a Linear Programming mathematical model. To demonstrate the effectiveness and feasibility of the proposed technique, the experimental setup is deployed using real hardware components such as a Zodiac-FX switch, a Ryu controller, and various web servers in the data center network. The proposed scheme is compared with round-robin and random load-balancing mechanisms. The experimental results show that performance is improved by 87.4% while saving 78% of the energy.
"Leveraging Software-Defined Networks for Load Balancing in Data Centre Networks using Linear Programming" by Vani Kurugod Aswathanarayana Reddy and Ramamohan Babu Kasturi Nagappasetty. International Journal of Computing, 2023-10-01. DOI: 10.47839/ijc.22.3.3237 (https://doi.org/10.47839/ijc.22.3.3237)
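The abstract above states that the balancing problem is formulated as a Linear Programming model. A generic formulation of this kind (our illustrative sketch, not the paper's exact model; the symbols are hypothetical: request loads d_i, server capacities C_j, per-unit energy costs e_j, and traffic fractions x_ij) could read:

```latex
\begin{aligned}
\min_{x}\;& \sum_{j} e_j \sum_{i} d_i x_{ij} \\
\text{s.t.}\;& \sum_{j} x_{ij} = 1 \quad \forall i && \text{(every request is fully served)} \\
& \sum_{i} d_i x_{ij} \le C_j \quad \forall j && \text{(server capacity is respected)} \\
& x_{ij} \ge 0.
\end{aligned}
```

Minimizing the energy-weighted assigned load subject to capacity constraints captures both goals the abstract names: balanced distribution and reduced energy consumption.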
Manual cell counting using a hemocytometer is commonly used to quantify cells, as it is an inexpensive and versatile method. However, it is labour-intensive, tedious, and time-consuming. On the other hand, most automated cell counting methods are expensive and require experts to operate. The use of image analysis software therefore gives access to low-cost but robust automated cell counting. This study explores the advanced settings of image processing software to obtain routes with the highest counting accuracy. The results show the effectiveness of advanced settings in CellProfiler for counting cells from synthetic images. Two routes were found to give the highest performance, with average image and cell accuracies of 85% and 99.8%, respectively, and the highest F1 score of 0.83. However, the two routes were unable to correctly determine the exact number of cells in the histology images, albeit giving a respectable cell accuracy of 79.6%. Further investigation has shown that CellProfiler is able to correctly identify the bulk of the cells within the histology images. Good image quality, with high focus and little blur, was identified as the key to successful image-based cell counting. To further enhance accuracy, other modules can be included to further segment objects, thereby improving the number of objects identified. Future work can focus on evaluating the robustness of the routes by comparing them with other methods and validating them against manual cell counting.
"Automated Cell Counting using Image Processing" by Dewi Kartini Hassan, Hazwani Suhaimi, Muhammad Roil Bilad, and Pg Emeroylariffion Abas. International Journal of Computing, 2023-10-01. DOI: 10.47839/ijc.22.3.3224 (https://doi.org/10.47839/ijc.22.3.3224)
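The core of image-based cell counting, as described above, is segmenting foreground objects and counting them. A minimal stand-alone sketch of that idea (our illustration in plain Python, not CellProfiler itself) thresholds a tiny synthetic "image" and counts 4-connected bright blobs:

```python
# Minimal sketch of image-based cell counting: threshold a 2D intensity grid,
# then count connected foreground components via flood fill.
from collections import deque

def count_cells(image, threshold=128):
    """Count 4-connected foreground blobs in a 2D list of pixel intensities."""
    rows, cols = len(image), len(image[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if image[r][c] >= threshold and not seen[r][c]:
                count += 1                      # a new blob starts here
                q = deque([(r, c)])
                seen[r][c] = True
                while q:                        # flood-fill the whole blob
                    y, x = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols \
                           and image[ny][nx] >= threshold and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
    return count

synthetic = [
    [0, 200, 200, 0,   0, 0],
    [0, 200,   0, 0, 255, 0],
    [0,   0,   0, 0, 255, 0],
    [180, 0,   0, 0,   0, 0],
]
print(count_cells(synthetic))  # 3 separate bright blobs
```

Real pipelines add the refinements the abstract mentions, such as extra segmentation modules to split touching objects, which this toy version cannot do.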
The article proposes using Kubernetes (k8s) as a tool for managing FPGA-based devices in a distributed system. This can help automate programming, monitoring, and controlling the state of devices, and also optimize resource usage, ensure high availability and reliability, and provide security and privacy for data processed by specialized processors. The article provides a practical example of integrating an FPGA-based device into a Kubernetes cluster. It will help to scale, maintain, and monitor distributed systems with millions of devices and manage such big systems from one place using the Kubernetes API. It will also help to integrate other third-party tools into the system, which makes it possible to extend the system. As future work, the proposed approach can help integrate FPGA and its real-time reconfiguration tool into a distributed system, making it possible to control FPGAs on different IoT devices. Overall, using k8s to manage FPGA-based devices can provide significant advantages in such fields as telecommunications, information technology, automation, navigation, and energy. However, the implementation may require specialized skills and experience.
"Organization of FPGA-based Devices in Distributed Systems" by Mykhailo Maidan and Anatoliy Melnyk. International Journal of Computing, 2023-10-01. DOI: 10.47839/ijc.22.3.3231 (https://doi.org/10.47839/ijc.22.3.3231)
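One standard way Kubernetes exposes special hardware such as FPGAs is through device plugins advertising an extended resource, which pods then request. The manifest below is a hedged sketch of that pattern (the resource name `vendor.example/fpga` and the image are hypothetical placeholders, not from the article):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: fpga-worker
spec:
  containers:
  - name: bitstream-loader
    image: registry.example.com/fpga-loader:latest   # hypothetical image
    resources:
      limits:
        vendor.example/fpga: 1   # extended resource advertised by a device plugin
```

The scheduler then places the pod only on nodes where a device plugin has reported a free FPGA, which is what makes cluster-wide management of such devices possible.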
The main idea is to create logic-free vector simulation based only on read-write transactions on addressable memory. Stuck-at fault vector simulation is leveraged as a technology for assessing the quality of tests for complex IP-cores implemented in Field Programmable Gate Arrays (FPGAs) and Application-Specific Integrated Circuits (ASICs). The main task is to implement new, simple, and reliable models and methods of vector computing based on primitive read-write transactions in the technology of vector flexible interpretive fault simulation. Vector computing is a computational process based on read-write transactions on bits of a binary vector of functionality, where the input data are the addresses of the bits. A vector-deductive method for the synthesis of vectors for propagating input fault lists is proposed, which has quadratic computational complexity. Analytical expressions of logic that require algorithmically complex computing are replaced by vectors of output states of elements and digital circuits. A new matrix of deductive vectors is synthesized, which is characterized by the following properties: compactness, parallel data processing based on a single read-write transaction in memory, exclusion of traditional logic from fault simulation procedures, full automation of its synthesis process, and a focus on the technological solving of many technical diagnostics problems. A new structure of the sequencer for vector-deductive fault simulation is proposed, which is characterized by ease of implementation on a single memory block. It eliminates any traditional logic, uses data read-write transactions in memory to form an output fault vector, and uses data as addresses to process the data itself.
"Vector-deductive Faults-as-Address Simulation" by Anna Hahanova. International Journal of Computing, 2023-10-01. DOI: 10.47839/ijc.22.3.3227 (https://doi.org/10.47839/ijc.22.3.3227)
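The central idea above, evaluating circuits by using data as addresses into functionality vectors instead of executing logic operations, can be illustrated with a toy sketch (our simplification, not the paper's sequencer): each two-input gate is a 4-bit vector, and "computing" is a single read at the address formed from the input values.

```python
# Each gate is a truth-table vector; the bit at address (a<<1)|b is its output.
AND = [0, 0, 0, 1]
OR  = [0, 1, 1, 1]
XOR = [0, 1, 1, 0]

def read_gate(vector, a, b):
    """Evaluate a gate with a single read transaction: inputs form the address."""
    return vector[(a << 1) | b]

# A tiny circuit y = (a AND b) XOR (a OR c), evaluated purely by address reads.
def circuit(a, b, c):
    return read_gate(XOR, read_gate(AND, a, b), read_gate(OR, a, c))

print(circuit(1, 1, 0))  # (1 AND 1) XOR (1 OR 0) = 1 XOR 1 = 0
```

No Boolean operators appear in the evaluation path; the same read-on-address mechanism is what the paper scales up to deductive vectors for whole fault lists.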
Data analytics helps companies analyze customer trends, make better business decisions, and optimize their performance. Scanned document analysis is an important step in data analytics. Automatically extracting information from a scanned receipt has potential applications in industry. Both printed and handwritten letters are present in a receipt. Often these receipt documents are of low resolution due to paper damage and poor scanning quality, so correctly recognizing each letter is a challenge. This work focuses on building an improved Convolutional Neural Network (CNN) model with regularization techniques for classifying all English characters (both uppercase and lowercase) and the digits 0 to 9. The training data contains about 60,000 images of letters (English alphabet characters and digits), drawn from Windows TrueType (.ttf) files and from different scanned receipts. We developed different CNN models for this 62-class classification problem, with different regularization and dropout techniques. Hyperparameters of the Convolutional Neural Network are adjusted to obtain the optimum accuracy, and different optimization methods are considered to obtain better accuracy. The performance of each CNN model is analyzed in terms of accuracy, precision, recall, F1 score, and confusion matrix to find the best model. The prediction error of the model is calculated for Gaussian noise and impulse noise at different noise levels.
"Classification of Letter Images from Scanned Invoices using CNN" by Desiree Juby Vincent and Hari V. S. International Journal of Computing, 2023-10-01. DOI: 10.47839/ijc.22.3.3232 (https://doi.org/10.47839/ijc.22.3.3232)
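The dropout regularization the abstract highlights can be sketched in isolation. The following toy version (our illustration of the standard inverted-dropout technique, not the paper's exact model) zeroes random activations at training time and rescales the survivors so expected values are unchanged:

```python
# Inverted dropout: drop each unit with probability p_drop and scale the
# survivors by 1/(1-p_drop), so no rescaling is needed at inference time.
import random

def inverted_dropout(activations, p_drop, rng):
    """Return a copy of `activations` with units randomly zeroed and rescaled."""
    keep = 1.0 - p_drop
    return [0.0 if rng.random() < p_drop else a / keep for a in activations]

rng = random.Random(0)
out = inverted_dropout([1.0] * 8, p_drop=0.5, rng=rng)
print(out)  # roughly half the units zeroed, the rest scaled to 2.0
```

Because surviving units are scaled by 1/(1-p_drop), each unit's expected output stays 1.0, which is what lets the same network run unchanged at test time.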
Generative Adversarial Networks (GANs) are a powerful class of deep learning models that can generate realistic synthetic data. However, designing and optimizing GANs can be a difficult task due to various technical challenges. The article provides a comprehensive analysis of solution methods for GAN performance optimization. The research covers a range of GAN design components, including loss functions, activation functions, batch normalization, weight clipping, gradient penalty, stability problems, performance evaluation, mini-batch discrimination, and other aspects. The article reviews various techniques used to address these challenges and highlights the advancements in the field. It offers an up-to-date overview of the state-of-the-art methods for structuring, designing, and optimizing GANs, which will be valuable for researchers and practitioners. The implementation of the optimization strategy for the design of standard and deep convolutional GANs (handwritten digits and fingerprints) developed by the authors is discussed in detail; the obtained results confirm the effectiveness of the proposed optimization approach.
"Optimization Strategy for Generative Adversarial Networks Design" by Oleksandr Striuk and Yuriy Kondratenko. International Journal of Computing, 2023-10-01. DOI: 10.47839/ijc.22.3.3223 (https://doi.org/10.47839/ijc.22.3.3223)
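Of the stabilization techniques listed above, mini-batch discrimination is easy to show in miniature. The sketch below is a heavily simplified version (ours, omitting the learned projection tensor of the full technique): each sample receives a score measuring its closeness to the rest of the batch, letting a discriminator detect mode collapse, i.e., a generator emitting near-identical samples.

```python
# Simplified mini-batch discrimination: score each sample by its similarity
# to the other samples in the batch (sum of exp(-L1 distance)).
import math

def minibatch_features(batch):
    """batch: list of feature vectors. Returns one closeness score per sample."""
    scores = []
    for i, fi in enumerate(batch):
        s = 0.0
        for j, fj in enumerate(batch):
            if i != j:
                l1 = sum(abs(a - b) for a, b in zip(fi, fj))
                s += math.exp(-l1)
        scores.append(s)
    return scores

collapsed = [[1.0, 2.0]] * 3                      # generator emitting clones
diverse   = [[0.0, 0.0], [5.0, 1.0], [2.0, 9.0]]  # a varied batch
print(minibatch_features(collapsed))  # high scores: samples are clones
print(minibatch_features(diverse))    # near-zero scores: batch is varied
```

Appending such scores to the discriminator's features gives it batch-level information, penalizing collapsed generators.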
Nowadays, research in underwater coral farming development is increasing due to growing demand for it as a source of medicines. Coral farms are located in the depths of the seabed, and physically monitoring them is not an easy task in an underwater environment. At the same time, wired communication incurs massive deployment and maintenance costs. Terrestrial wireless communication protocols designed for air and their approaches cannot be directly implemented in underwater communication scenarios, as seawater is a highly saline medium. Protocol design in underwater acoustic communication for coral farms is therefore a challenging research domain. This paper proposes the Scheduled Process Cross Layer Medium Access Control (SPCL-MAC) protocol design using stochastic network calculus. The fundamental idea of this protocol is to schedule the handshaking communication during the reserved process cycle and coordinate the process between the physical and network layers in underwater wireless communication. Performance analyses of frame delivery ratio, end-to-end delay, and energy consumption for both transmission and reception are carried out. The proposed mathematical models are also evaluated for their accuracy using discrete-event simulation studies.
"Underwater Cross Layer Protocol Design for Data Link Layer: Stochastic Network Calculus" by M. Saravanan and Rajeev Sukumaran. International Journal of Computing, 2023-10-01. DOI: 10.47839/ijc.22.3.3233 (https://doi.org/10.47839/ijc.22.3.3233)
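For background on the network-calculus machinery the paper builds on (this is the standard deterministic bound, shown for illustration; it is not the paper's stochastic derivation): if traffic is constrained by an affine arrival curve and the MAC layer offers a rate-latency service curve, the worst-case delay follows directly:

```latex
\alpha(t) = \sigma + \rho t, \qquad
\beta(t) = R\,[t - T]^{+}, \qquad
D_{\max} \le T + \frac{\sigma}{R} \quad (\text{for } \rho \le R),
```

where \(\sigma\) is the burst size, \(\rho\) the sustained arrival rate, \(R\) the service rate, and \(T\) the service latency. Stochastic network calculus generalizes such bounds by attaching a violation probability to each curve, which is what makes it suitable for lossy underwater acoustic channels.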
In recent years, there has been a rise in the amount of research conducted in the field of human-computer interaction (HCI) employing electrooculography (EOG), a technology that is effectively and widely used to detect human eye activity. The use of EOG signals as a control signal for HCI is essential for understanding, characterizing, and classifying eye movements, and can be applied to a wide range of applications including virtual mouse and keyboard control, electric power wheelchairs, industrial assistive robots, and patient rehabilitation or communication. In the field of HCI, EOG signal classification has continuously been refined to make systems more effective and reliable than ever. In this paper, a recurrent neural network (RNN) model is proposed for classifying eye movement directions, utilizing several informative feature extraction methods and noise filtering. Our classification model comprises a Gated Recurrent Unit (GRU) with a bidirectional GRU followed by dense layers. The classifier is investigated to find better classification performance on four directional eye movements: Up and Down for the vertical channel, along with Left and Right for the horizontal channel of the EOG signal. The classifier achieved 99.77% and 99.74% accuracy for the vertical and horizontal channels, respectively, which outperforms the compared state-of-the-art studies. The proposed classifier allows disabled people to make life-improving decisions using computers, achieving the highest classification performance for rehabilitation and other applications.
"An RNN-based Hybrid Model for Classification of Electrooculogram Signal for HCI" by Kowshik Sankar Roy and Sheikh Md. Rabiul Islam. International Journal of Computing, 2023-10-01. DOI: 10.47839/ijc.22.3.3228 (https://doi.org/10.47839/ijc.22.3.3228)
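The GRU building block named above follows standard update equations, which a scalar toy version makes concrete (our sketch with arbitrary weights; the paper's actual model stacks GRU/BiGRU layers over feature vectors):

```python
# One GRU cell step for scalar input x and hidden state h, using the standard
# update-gate / reset-gate / candidate-state equations; biases omitted.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_step(x, h, w):
    """w holds the six weights (wz, uz, wr, ur, wh, uh)."""
    wz, uz, wr, ur, wh, uh = w
    z = sigmoid(wz * x + uz * h)                 # update gate
    r = sigmoid(wr * x + ur * h)                 # reset gate
    h_tilde = math.tanh(wh * x + uh * (r * h))   # candidate state
    return (1.0 - z) * h + z * h_tilde           # blend old and candidate state

h = 0.0
for x in [0.5, -1.0, 0.25]:                      # a tiny EOG-like sample sequence
    h = gru_step(x, h, w=(0.9, 0.1, 0.8, 0.2, 1.1, 0.3))
print(round(h, 4))
```

Because the new state is a gate-weighted blend of the old state and a tanh-bounded candidate, the hidden state stays in (-1, 1) here; a bidirectional GRU simply runs one such recurrence forward and one backward over the sequence and concatenates the states.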
Ear biometrics is one of the biometric modalities that is clearly gaining prominence. Ear recognition offers particular benefits and can make identification safer and more dependable alongside other biometrics (for example, fingerprints and face), particularly as a supplement to face recognition schemes, which experience issues in real-world circumstances. This is because of the great variability of a planar representation of a complex object that varies in shape, illumination, and profile. This study is an attempt to overcome these restrictions by proposing the scale-invariant feature transform (SIFT) algorithm to extract feature-vector descriptors from both the left and right ears, which are fused into one descriptor used for verification purposes. In addition, another scheme is proposed for the recognition stage, based on a genetic algorithm-backpropagation neural network as an accurate recognition approach. The approach is tested using images from the Indian Institute of Technology Delhi (IITD) ear database. The suggested system exhibits a 99.7% recognition accuracy rate.
"Human Recognition based on Multi-instance Ear Scheme" by Inass Sh. Hussein and Nilam Nur Amir Sjarif. International Journal of Computing, 2023-10-01. DOI: 10.47839/ijc.22.3.3236 (https://doi.org/10.47839/ijc.22.3.3236)
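The fusion step described above, melding left- and right-ear descriptors into one, is commonly done by concatenation plus normalization. A hedged toy sketch (ours; real SIFT descriptors are 128-dimensional and would come from a library such as OpenCV, while these stand-in vectors are hand-made):

```python
# Fuse two per-ear descriptors by concatenation followed by L2 normalization,
# so the combined descriptor has unit length regardless of input scale.
import math

def fuse_descriptors(left, right):
    fused = list(left) + list(right)
    norm = math.sqrt(sum(v * v for v in fused)) or 1.0
    return [v / norm for v in fused]

left_ear  = [3.0, 4.0]   # stand-ins for SIFT descriptor vectors
right_ear = [0.0, 0.0]
fused = fuse_descriptors(left_ear, right_ear)
print(fused)  # [0.6, 0.8, 0.0, 0.0]
```

Normalizing after concatenation keeps matching scores comparable even when one ear image yields stronger gradients than the other.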
Magnetic Resonance Imaging (MRI) is a vital imaging tool for detecting brain malignancies in medical diagnosis. The biggest stumbling block in MR image classification, however, is the semantic gap between the low-level visual information captured by MRI equipment and the high-level information stated by the doctor. A large amount of medical image data is generated through various imaging modalities, and processing it requires a considerable amount of time; time complexity therefore becomes a major challenge in medical image analysis. As a result, this paper presents a brain tumor classification method, named the Dendritic Cell-Squirrel Search Algorithm-based Classifier, in a parallel environment. In the proposed parallel environment, the input dataset is divided into equally sized subsets that are processed on multiple cores, reducing the time complexity of the algorithm and making brain tumor classification faster. Initially, pre-processing is performed using a Gaussian filter and ROI extraction, which improves data quality. Subsequently, segmentation is done with sparse fuzzy c-means (Sparse FCM) for extracting statistical and texture features. For feature selection, Particle Rider mutual information is used, which is created by combining Particle Swarm Optimization (PSO), the Rider Optimization Algorithm (ROA), and mutual information. The Dendritic Cell-SSA algorithm, which combines the Dendritic Cell Algorithm and the Squirrel Search Algorithm, is then used to classify brain tumors. With a maximum accuracy of 97.79%, sensitivity of 97.58%, and specificity of 98%, the Particle Rider MI-Dendritic Cell-Squirrel Search Algorithm-based Artificial Immune Classifier outperforms the competition.
The experimental results show that the proposed parallel technique works efficiently: compared with the sequential approach, the time complexity is improved by up to 99.94% for the Particle Rider MI-Dendritic Cell-Squirrel Search Algorithm-based Artificial Immune Classifier and by 99.92% for the Rider Optimization-Dendritic Cell-Squirrel Search Algorithm-based Classifier.
{"title":"Classification of Brain Tumor using Dendritic Cell-Squirrel Search Algorithm in a Parallel Environment","authors":"Rahul R. Chakre, Dipak V. Patil","doi":"10.47839/ijc.22.3.3235","DOIUrl":"https://doi.org/10.47839/ijc.22.3.3235","url":null,"abstract":"Magnetic Resonance Imaging is a vital imaging tool for detecting brain malignancies in medical diagnosis. The semantic gap between low-level visual information collected by MRI equipment and high-level information stated by the doctor, on the other hand, is the biggest stumbling block in MR image classification. Large amount of medial image data is generated through various imaging modalities. For processing this large amount of medical data, considerable period of time is required. Due to this, time complexity becomes a measure challenge in medical image analysis. As a result, this paper offers analysis for brain tumour classification method named as Dendritic Cell-Squirrel Search Algorithm-based Classifier in the parallel environment. In this paper a parallel environment is proposed. In the experimentation the input dataset is divided into datasets of equal sizes and given as the input on the multiple cores to reduce the time complexity of the algorithm. Due to this, brain tumor classification becomes faster. Here initially, pre-processing is performed applying Gaussian Filter and ROI, it improves the data quality. Subsequently segmentation is done with sparse fuzzy-c-means (Sparse FCM) for extracting statistical and texture features. Additionally, for feature selection, the Particle Rider mutual information is used, which is created by combining Particle Swarm Optimization (PSO), Rider Optimization Algorithm (ROA), and mutual information. The Dendritic Cell-SSA algorithm, which combines the Dendritic Cell Algorithm and the Squirrel Search Algorithm, is used to classify brain tumors. With a maximum accuracy of 97.79 percent, sensitivity of 97.58 percent, and specificity of 98 percent, the Particle Rider MI-Dendritic Cell-Squirrel Search Algorithm-Artificial Immune Classifier outperforms the competition. The experimental result shows that the proposed parallel technique works efficiently and the time complexity is improved up to 99.94% for Particle Rider MI-Dendritic Cell- Squirrel Search Algorithm-based artificial immune Classifier and 99.92% for Rider Optimization-Dendritic Cell –Squirrel Search Algorithm based Classifier as compared to sequential approach.","PeriodicalId":37669,"journal":{"name":"International Journal of Computing","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135459202","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
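The parallelization strategy the abstract describes (split the dataset into equal parts, classify each part on a separate core) can be sketched as follows. The classifier below is a stand-in stub, since the Dendritic Cell-Squirrel Search classifier itself is not public; names such as `split_equal` and `classify_chunk` are illustrative assumptions, not the authors' API.

```python
from multiprocessing import Pool


def split_equal(items, n_parts):
    """Divide `items` into `n_parts` chunks of (near-)equal size."""
    size, extra = divmod(len(items), n_parts)
    chunks, start = [], 0
    for i in range(n_parts):
        end = start + size + (1 if i < extra else 0)
        chunks.append(items[start:end])
        start = end
    return chunks


def classify_chunk(chunk):
    # Placeholder for per-chunk pre-processing, segmentation, feature
    # selection, and classification; returns one dummy binary label per item.
    return [hash(x) % 2 for x in chunk]


def classify_parallel(dataset, n_cores=4):
    """Classify each chunk in its own worker process and merge the labels."""
    chunks = split_equal(dataset, n_cores)
    with Pool(n_cores) as pool:
        results = pool.map(classify_chunk, chunks)
    return [label for part in results for label in part]


if __name__ == "__main__":
    labels = classify_parallel(list(range(10)), n_cores=3)
    print(len(labels))  # 10: one label per input image
```

Since the per-chunk work is independent, the speedup scales with the number of cores up to the fixed cost of pre-processing and result merging, which is consistent with the large sequential-to-parallel improvement the abstract reports.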