Pub Date : 2022-11-26 DOI: 10.47164/ijngc.v13i5.930
A. Gaikwad, Kavita Singh
Scheduling is at the heart of cloud computing: without planning, the required outcomes cannot be obtained. This article’s major goals are to decrease value-added time, increase resource utilisation, and make cloud services viable for a single activity. In recent years, metaheuristic algorithms have drawn attention for the correct functioning of work scheduling among the many job scheduling techniques. Inspired by sports leagues, the League Championship Algorithm (LCA) is interesting because it can be used to identify the best team/task pairing for scheduling. This article uses an Improved League Championship Algorithm (ILCA) to schedule tasks, reducing deployment time, cloud usage, and cost. The ILCA is implemented with the CloudSim simulator and the Java programming language using a non-preemptive scheduling strategy. ILCA also improves economies of scale and minimises the cost of using the cloud. Having proven versatile in terms of makespan, resource usage, and economics, ILCA could be a good candidate for a cloud broker.
Title: Improved League Championship Algorithm (ILCA) for Load Balancing in Cloud Computing. Journal: International Journal of Next-Generation Computing.
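The abstract does not spell out the objective ILCA optimises, but schedulers of this kind typically score each candidate task-to-VM mapping by makespan and cost. The sketch below is a hypothetical illustration of that scoring step: the task sizes, VM speeds, cost rates, and function name are all assumptions, not the paper's formulation.

```python
# Hypothetical sketch: scoring a candidate task-to-VM assignment, the kind of
# fitness an LCA/ILCA-style metaheuristic would minimise. Task lengths (in
# million instructions), VM speeds (MIPS) and cost rates are illustrative.

def makespan_and_cost(assignment, task_lengths, vm_speeds, vm_cost_per_sec):
    """assignment[i] = index of the VM that runs task i."""
    vm_busy = [0.0] * len(vm_speeds)          # total runtime per VM
    for task, vm in enumerate(assignment):
        vm_busy[vm] += task_lengths[task] / vm_speeds[vm]
    makespan = max(vm_busy)                   # schedule ends with the slowest VM
    cost = sum(t * c for t, c in zip(vm_busy, vm_cost_per_sec))
    return makespan, cost

tasks = [4000, 2000, 6000, 1000]   # task sizes in million instructions
speeds = [1000, 2000]              # VM speeds in MIPS
rates = [0.02, 0.05]               # cost per second of VM time

span, cost = makespan_and_cost([0, 1, 1, 0], tasks, speeds, rates)
print(span, cost)
```

A metaheuristic such as ILCA would repeatedly mutate the `assignment` list and keep candidates that lower this fitness.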
Each year, a large number of children are reported missing in India. Many of these cases are never solved due to the difficulties faced by the police, ranging from heavy paperwork to lack of technology. Therefore, one of this work’s key goals is to provide an application that may assist people whose children have gone missing and been rescued by the public. This will also reduce the time required to find a missing child and reunite the child with their loved ones as soon as possible. Citizens can upload pictures of child victims, along with landmarks, to our web app. The photographs are matched against the missing child’s registered photographs if they exist in the database. A deep neural network model is trained to locate the lost child using a facial picture uploaded by a citizen. Multi-Task Cascaded CNN (MTCNN), an efficient DNN technique for image-based apps, is used for face detection. The images were passed through an augmentation layer to generate variants with different orientations, brightness, and contrast, which were then used to train an EfficientNetB0 model. This model is then used to recognize faces in photographs. Combining MTCNN for face detection with EfficientNetB0 for recognition yields a deep learning model that is robust to distortion. The model’s training accuracy is 96.66 percent and its testing accuracy is 76.81 percent, implying roughly a 77 percent probability of finding a match for a missing child. It was evaluated using 25 child classes, each with around 15 to 20 images. These images were taken with different backgrounds and in real-time settings so that the model works even when noise is present in the image.
Title: Lost + Found: The Lost Angel Investigator. Authors: Harsh Shrirame, Bhavesh Kewalramani, Daksh Kothari, Darshan Jawandhiya, Rina Damdoo. DOI: 10.47164/ijngc.v13i5.906. Pub Date: 2022-11-26. Journal: International Journal of Next-Generation Computing.
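As an illustration of the matching step, a detected face is usually reduced to an embedding vector and compared against the registered children's embeddings. The sketch below uses cosine similarity with made-up vectors and a hypothetical 0.8 threshold; the real system extracts its features with MTCNN and EfficientNetB0, which are not reproduced here.

```python
# Illustrative sketch only: match an uploaded face embedding against
# registered embeddings by cosine similarity. Vectors and the 0.8
# threshold are fabricated stand-ins for the paper's learned features.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def best_match(query, registry, threshold=0.8):
    """registry: dict of child_id -> embedding. Returns (id, score) or None."""
    scored = [(cid, cosine(query, emb)) for cid, emb in registry.items()]
    cid, score = max(scored, key=lambda t: t[1])
    return (cid, score) if score >= threshold else None

registry = {"child_01": [0.9, 0.1, 0.4], "child_02": [0.1, 0.95, 0.2]}
match = best_match([0.88, 0.12, 0.42], registry)
print(match)
```

In practice the threshold trades off false matches against missed matches, which is where the reported 77 percent test accuracy would come into play.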
Pub Date : 2022-11-26 DOI: 10.47164/ijngc.v13i5.970
Dr. Purshottam J. Assudani, Dr. Rakesh K. Kadu, Rizwan Sheikh, Tushar Khanna
This work addresses the demand for an online college job board and its role in connecting college students with career vacancies. Traditionally, career sites are used by talent managers for candidate exploration and recruitment. This work is based on an employment portal organized for a well-known engineering campus and is a variation of a job board designed specifically for campus students. Providing job recommendations matched to students’ talents, together with services such as candidate filtering that let companies survey applicants, will help both students and companies find suitable candidates for a job. Keywords: Natural Language Processing, Recruitment, Artificial Intelligence, Knowledge Base
Title: Smart College Campus Recruitment System. Journal: International Journal of Next-Generation Computing.
Pub Date : 2022-11-26 DOI: 10.47164/ijngc.v13i5.936
Roshni Khedgaonkar, Kavita Singh, Sunny Mate
Due to the remarkable data generation abilities of generative models, many generative adversarial network (GAN) models have been developed, and several real-world applications in computer vision and machine learning have emerged. Generative models have received significant attention in the field of unsupervised learning via this new and useful framework. In spite of GANs’ outstanding performance, stable training remains a challenge. This work incorporates a Deep Convolutional Generative Adversarial Network (DCGAN); the main aim is to produce human faces from unlabeled data. Face generation has a wide range of applications in image processing, entertainment, and other industries. Extensive simulation is performed on the CelebA dataset. The key result is that human faces are successfully constructed from unlabeled data and random noise, with average losses of 1.115% and 0.5894% for the generator and discriminator respectively.
Title: Novel approach to Create Human Faces with DCGAN for Face Recognition. Journal: International Journal of Next-Generation Computing.
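The generator and discriminator losses tracked during GAN training are typically binary cross-entropy terms. As a rough, hedged illustration of what those two numbers measure (the discriminator outputs below are fabricated, and this is not the paper's DCGAN):

```python
# Back-of-envelope sketch of adversarial losses: binary cross-entropy on a
# few made-up discriminator outputs. The real DCGAN trains both networks
# end to end; only the loss arithmetic is shown here.
import math

def bce(prediction, target):
    """Binary cross-entropy for a single prediction in (0, 1)."""
    eps = 1e-7
    p = min(max(prediction, eps), 1 - eps)
    return -(target * math.log(p) + (1 - target) * math.log(1 - p))

# Discriminator: real images should score 1, generated fakes 0.
d_loss = 0.5 * (bce(0.9, 1.0) + bce(0.2, 0.0))
# Generator: wants its fakes to be scored as real.
g_loss = bce(0.2, 1.0)
print(d_loss, g_loss)
```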
Pub Date : 2022-11-26 DOI: 10.47164/ijngc.v13i5.950
Purushottam Assudani, Mehvash Khan, Mukesh Kumar, Tejas V. Bhutada
Cloud systems use virtualization technology to let users utilize cloud resources through virtual machines (VMs). These VMs process the task requests made by users. Since inefficient hardware utilization is a concern for the future and the environment, efficient workload balancing and allocation of VMs helps bring down hardware usage and leads to efficient operation. This paper therefore proposes a task scheduling framework in which tasks are assigned to VMs running on the active hosts (servers), with preemption as required and classification of the cloudlets. The algorithm categorizes the cloudlets into three distinct types and allocates each a VM on a first come, first served basis with respect to resource time on that particular host. This in turn reduces energy consumption by keeping fewer machines in the active state while preserving efficient utilization of the active servers. Such simulations are achieved using the CloudSim framework.
Title: Energy Aware Job Scheduling and Simulation in a Cloud Datacenter. Journal: International Journal of Next-Generation Computing.
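The classify-then-assign idea can be sketched as follows. The three size thresholds, the VM pools, and all names are assumptions made for illustration; the paper's actual classification criteria and CloudSim setup may differ.

```python
# Hypothetical sketch: classify cloudlets into three size classes and assign
# each, first come first served, to a free VM on an already-active host
# before considering waking a new one. Thresholds (in million instructions)
# are made up.
SMALL, MEDIUM, LARGE = "small", "medium", "large"

def classify(length_mi):
    if length_mi < 1000:
        return SMALL
    if length_mi < 10000:
        return MEDIUM
    return LARGE

def schedule(cloudlets, vms):
    """cloudlets: list of lengths (MI); vms: dict class -> free VM ids.
    Returns (cloudlet_index, vm_id) pairs in FCFS order."""
    plan = []
    for i, length in enumerate(cloudlets):
        pool = vms[classify(length)]
        if pool:                      # reuse a VM on an active host if possible
            plan.append((i, pool.pop(0)))
    return plan

vms = {SMALL: ["vm-s1"], MEDIUM: ["vm-m1", "vm-m2"], LARGE: []}
plan = schedule([500, 2500, 50000, 4000], vms)
print(plan)
```

The large cloudlet finds no free VM here and would trigger the framework's decision about activating another host, which is the energy-saving lever the paper targets.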
In recent years, plagiarism that reuses the code snippets or programs of others without permission has become a social problem. It is widespread, ranging from familiar student reports to academic papers worldwide. In this paper, we deal with plagiarism in programming assignments and explain the plagiarism patterns often found in source text. Existing plagiarism detection tools utilize string-matching algorithms to measure plagiarism. We highlight the problems associated with existing tools and propose a method to rectify them efficiently with the help of the algorithms proposed in the paper. To the existing detection method we add heuristics, namely estimation of time complexity and loop detection, to improve the accuracy of locating plagiarized sections, and propose the combination as a plagiarism detection method.
Title: Plagiarism Detection in Programming using Performance Analyzing Features. Authors: D.S. Adane, Abhishek Angale, Ayush Singh, Rituj Aryan, Sumeet Yadav. DOI: 10.47164/ijngc.v13i5.964. Pub Date: 2022-11-26. Journal: International Journal of Next-Generation Computing.
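A hedged sketch of combining string matching with one of the proposed structural heuristics (loop detection): `difflib`'s ratio stands in for the tool's string-matching algorithm, and the equal 0.5/0.5 weighting is an assumption, not the paper's formula.

```python
# Sketch: blend a textual similarity score with a loop-count heuristic,
# the kind of combination the paper proposes. Weights are illustrative.
import difflib
import re

def loop_count(code):
    """Crude structural feature: number of for/while keywords."""
    return len(re.findall(r"\b(for|while)\b", code))

def similarity(code_a, code_b):
    text_sim = difflib.SequenceMatcher(None, code_a, code_b).ratio()
    la, lb = loop_count(code_a), loop_count(code_b)
    loop_sim = 1.0 if la == lb else 1.0 / (1 + abs(la - lb))
    return 0.5 * text_sim + 0.5 * loop_sim

a = "for i in range(10): total += i"
b = "for j in range(10): acc += j"
print(similarity(a, b))
```

Renaming variables barely moves `loop_sim`, which is why structural heuristics catch disguised copies that pure string matching misses.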
Pub Date : 2022-11-26 DOI: 10.47164/ijngc.v13i5.974
Chetana B. Thaokar, Gayatri Ladsawangikar, Tanaya Wadibhasme, Sandeep Sureka
Nearly all practical applications, including autonomous navigation, visual systems, and face recognition, rely on object detection. In this paper, object detection and speech recognition are combined to help visually impaired people who want to use voice commands to find a certain object. People who are blind or visually challenged can move more independently if they are aware of their surroundings. A model has been implemented with the OpenCV libraries, and good results have been obtained. The paper also conducts a thorough review of object detection employing region-based convolutional neural network (CNN) learning systems for practical applications. This study examines the various object identification processes using YOLOv4 object detection techniques and discusses detection together with a speech recognition system created by transcribing spoken language into text.
Title: Object Detection using Speech Recognition. Journal: International Journal of Next-Generation Computing.
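The glue between the two components might look like the following sketch: keep only the detections whose label appears in the transcribed command. The labels, confidences, and boxes are made up; the real pipeline would use YOLOv4 outputs and a speech-to-text transcript.

```python
# Hypothetical glue code: filter object detections by the object named in
# the spoken command. Detection tuples are (label, confidence, box).
def find_requested(detections, transcript):
    words = set(transcript.lower().split())
    return [d for d in detections if d[0].lower() in words]

detections = [("bottle", 0.91, (10, 20, 50, 80)),
              ("chair", 0.88, (100, 40, 180, 200))]
matches = find_requested(detections, "find the bottle please")
print(matches)
```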
Pub Date : 2022-11-26 DOI: 10.47164/ijngc.v13i5.966
Rakesh K Kadu, Purshottam J Assudani, Sahil Bhojane, Tanish Agrawal, Vidhi Siddhawar, Yash Kale
Due to security concerns, biometrics are being used in many systems. Biometric authentication is a cheap, easy, and reliable technology for multi-factor authentication. Cryptosystems are one example of using biometric data. However, this can be risky because the biometric information is saved for authentication purposes. Voice biometric systems can provide more efficient security and a unique identity than commonly used biometric systems, although speech-recognition-based authentication systems suffer from replay attacks. In this paper, we implement and analyze a text-independent voice-based biometric authentication system based on randomly generated input text. Since the prompted text phrase is not known to the speaker in advance, it is difficult to launch replay attacks. The system uses Mel-Frequency Cepstrum Coefficients (MFCC) to extract speech features and Gaussian Mixture Models (GMM) for speaker modeling.
Title: Voice Based Authentication System for Web Applications using Machine Learning. Journal: International Journal of Next-Generation Computing.
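As a toy illustration of the verification step, feature frames can be scored against each enrolled speaker model and the best scorer accepted. A single spherical Gaussian per speaker stands in here for the paper's MFCC + GMM pipeline; all numbers are fabricated.

```python
# Toy sketch: score "MFCC" frames under each speaker's model and pick the
# maximum-likelihood speaker. A real system would use a full GMM per
# speaker fitted on enrollment audio.
import math

def log_likelihood(frames, mean, var=1.0):
    """Sum of per-frame log densities under a spherical Gaussian."""
    ll = 0.0
    for frame in frames:
        for x, m in zip(frame, mean):
            ll += -0.5 * (math.log(2 * math.pi * var) + (x - m) ** 2 / var)
    return ll

def identify(frames, models):
    """models: dict speaker -> mean vector. Returns best-scoring speaker."""
    return max(models, key=lambda spk: log_likelihood(frames, models[spk]))

models = {"alice": [1.0, 2.0], "bob": [4.0, 0.5]}
frames = [[1.1, 1.9], [0.9, 2.2]]   # pretend MFCC frames near alice's model
print(identify(frames, models))
```

Because the prompted phrase is random, the frames themselves change every session, which is what defeats simple replay of a recorded utterance.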
A major part of the Indian economy relies on agriculture, so identifying a diseased crop in its initial phase is very important: these diseases cause a significant drop in agricultural production and also affect the economy of the country. Tomato crops are susceptible to various diseases, which may be transmitted through air or soil. We have tried to automate the detection of diseases in the tomato plant by studying several attributes of the plant’s leaves. We trained models using various machine learning algorithms such as Support Vector Machine (SVM), Convolutional Neural Network (CNN), ResNet, and InceptionV3, and based on the results obtained we evaluated and compared the performance of these algorithms on different feature sets. The dataset had 10 classes (healthy and other unhealthy classes) with a total of 18,450 images for training the models. After implementing all of the algorithms and comparing their results, we found that ResNet was the most appropriate for extracting distinct attributes from images. The trained models can be used to detect diseases in the tomato plant automatically and in a timely manner.
Title: Detection of Diseases in Tomato Plant using Machine Learning. Authors: Anshul Sharma, Ashish Chandak, Aryan Khandelwal, Raunak Gandhi. DOI: 10.47164/ijngc.v13i5.941. Pub Date: 2022-11-26. Journal: International Journal of Next-Generation Computing.
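The model-comparison step reduces to picking the architecture with the best held-out accuracy; a minimal sketch follows. The scores below are placeholders, not the paper's measured results.

```python
# Sketch of selecting the best architecture by validation accuracy.
# Scores are illustrative stand-ins for the paper's experiments.
def best_model(results):
    """results: dict model_name -> validation accuracy."""
    return max(results, key=results.get)

scores = {"SVM": 0.81, "CNN": 0.88, "InceptionV3": 0.90, "ResNet": 0.93}
print(best_model(scores))   # the paper likewise found ResNet strongest
```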
Due to the heterogeneous shape of the liver, its segmentation and classification are challenging tasks. Therefore, Computer-Aided Diagnosis (CAD) is employed for predictive decision making in liver diagnosis. The main aim of this paper is to detect liver cancer precisely by an automatic approach. The developed model first collects the standard benchmark LiTS dataset, and image preprocessing is done with three techniques: histogram equalization for contrast enhancement, and median filtering and anisotropic diffusion filtering for noise removal. Adaptive thresholding is then adopted to perform liver segmentation. As a novelty, an optimized fuzzy-centroid-based region growing model is proposed for tumor segmentation in the liver. The main objective of this tumor segmentation model is to maximize entropy by optimizing the fuzzy centroid and the region-growing threshold using the Mean Fitness-based Salp Swarm Optimization Algorithm (MF-SSA). From the segmented tumor, features such as Local Directional Pattern (LDP) and Gray Level Co-occurrence Matrix (GLCM) are extracted. The extracted features are given as input to a neural network (NN), and the segmented tumor is given to a Convolutional Neural Network (CNN). An AND operation on the outputs of the NN and CNN distinguishes healthy from unhealthy CT images. Since the number of hidden neurons affects the final classification output, the number of neurons is also optimized using MF-SSA. From the experimental analysis, it is confirmed that the proposed model improves on state-of-the-art results from previous studies and can assist radiologists in tumor diagnosis from CT scan images.
Title: Improved Salp Swarm Optimization-based Fuzzy Centroid Region Growing for Liver Tumor Segmentation and Deep Learning Oriented Classification. Authors: Ramchand Hablani, Suraj Patil, Dnyaneshwar Kirange. DOI: 10.47164/ijngc.v13i5.902. Pub Date: 2022-11-26. Journal: International Journal of Next-Generation Computing.