Unsupervised Unmixing and Segmentation of Hyper Spectral Images Accounting for Soil Fertility
Pub Date: 2022-12-23 | DOI: 10.12694/scpe.v23i4.2031
K. Lavanya, R. Jaya Subalakshmi, T. Tamizharasi, Lydia Jane, A. Victor
A crucial component of precision agriculture is the ability to assess soil fertility by examining the precise distribution and composition of its constituents. This study investigates how different machine learning models may be used to assess soil fertility from hyperspectral images. In the first phase, images are synthesized by randomly mixing different soil components; the hyperspectral bands used to create the images are not reused during the analysis procedure. The endmembers are then recovered by applying the N-FINDR algorithm to spectrally unmix the image. Each endmember is compared with the band values of the known constituents, with both represented as graphs of band values obtained through spectral unmixing. Finally, the similarity between the two graphs is quantified and the hyperspectral image is classified as fertile or infertile. Clustering and image segmentation algorithms have been devised to support this process, and a comparison is presented to show which techniques are the most effective.
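The endmember-versus-reference comparison described above can be illustrated with a minimal sketch (not the authors' implementation): a cosine similarity between one unmixed endmember spectrum and the band values of a known soil constituent, with a similarity threshold standing in for the graph-matching step. All spectra, band counts, and the threshold below are illustrative assumptions.

```python
import numpy as np

def spectral_similarity(endmember: np.ndarray, reference: np.ndarray) -> float:
    """Cosine similarity between two band-value vectors (1.0 = identical shape)."""
    return float(np.dot(endmember, reference) /
                 (np.linalg.norm(endmember) * np.linalg.norm(reference)))

# Synthetic example: 50-band spectra for an extracted endmember and a known
# fertile-soil constituent (values are illustrative only).
bands = np.linspace(400, 2500, 50)                      # wavelengths in nm
reference = np.exp(-((bands - 1400) / 400) ** 2)        # reference band values
endmember = reference + np.random.normal(0, 0.05, 50)   # unmixed endmember + noise

score = spectral_similarity(endmember, reference)
label = "fertile" if score > 0.9 else "infertile"       # threshold is an assumption
print(f"similarity = {score:.3f} -> {label}")
```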
{"title":"Unsupervised Unmixing and Segmentation of Hyper Spectral Images Accounting for Soil Fertility","authors":"K. Lavanya, R. Jaya Subalakshmi, T. Tamizharasi, Lydia Jane, A. Victor","doi":"10.12694/scpe.v23i4.2031","DOIUrl":"https://doi.org/10.12694/scpe.v23i4.2031","url":null,"abstract":"A crucial component of precision agriculture is the capability to assess the fertility of soil by looking at the precise distribution and composition of its different constituents. This study aims to investigate how different machine learning models may be used to assess soil fertility using hyperspectral pictures. The development of images using a random mixing of different soil components is the first phase, and the hyper spectral bands utilized to create the images are not used again during the analysis procedure. The resulting end members are then acquired by applying the NFINDR algorithm to the process of spectral unmixing this image. The comparison between these end members and the band values of the known elements is then quantified., i.e. it is represented as a graph of band values obtained through spectral unmixing. Finally we quantify the similarities between both graphs and proceed towards the classification of the hyper spectral image as fertile or infertile. In order to classify the hyper spectral image as fertile or infertile, we quantify the similarities between the two graphs. Clustering and picture segmentation algorithms have been devised to help with this process, and a comparison is then made to show which techniques are the most effective.","PeriodicalId":43791,"journal":{"name":"Scalable Computing-Practice and Experience","volume":"26 1","pages":""},"PeriodicalIF":1.1,"publicationDate":"2022-12-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76885773","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Gauging Stress, Anxiety, Depression in Student during COVID-19 Pandemic
Pub Date: 2022-12-22 | DOI: 10.12694/scpe.v23i4.2012
Astha Singh, Divya Kumar
At the beginning of the COVID-19 pandemic, studies around the world focused on the associated health issues, and researchers began to investigate the repercussions of the virus. The virus proved versatile, changing its nature and targeting the lungs, and the world subsequently witnessed a staggering loss of life. Many people died, and many more continue to suffer from poor psychological health. Much research has addressed the nature of the virus itself, but comparatively little has examined the pandemic's other side effects. One crucial subject in the contemporary world is the effect of COVID-19 on the psychological state of the general population; left unaddressed, it could lead to an alarming situation and further deaths. This paper presents a study on the detection of stress and depression caused by the pandemic. The proposed methodology is based on a perceived-stress questionnaire through which people's responses are recorded as text. COVID victims were asked a set of questions and their responses recorded; the methodology performs text mining on these responses, which also include reactions collected from social networking sites. The responses are processed with natural language processing (NLP), which interprets the textual data into meaningful segments that a machine can understand. The refined data is mapped onto the PSS (Perceived Stress Scale), which ranges from 0 to 4 and indicates various levels of stress. The proposed system applies naive Bayes, k-nearest neighbor (KNN), decision tree, and random forest classifiers to predict a person's emotional state, and also uses data from social networking sites for testing. The model presents a comparative study of these classifiers for classifying the stress level into stress, anxiety, and depression.
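As a rough illustration of the questionnaire-to-classifier pipeline, the sketch below maps free-text responses onto PSS-style labels (0-4) with a TF-IDF plus naive Bayes model, one of the classifiers named in the abstract. The example responses, labels, and feature choice are assumptions, not the study's data.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hypothetical questionnaire responses and PSS-style labels (0 = none ... 4 = severe).
responses = [
    "I feel calm and sleep well",
    "I worry constantly about my family",
    "I cannot concentrate and feel hopeless",
    "Things are mostly fine lately",
]
labels = [0, 2, 4, 1]

# TF-IDF text features feeding a naive Bayes classifier.
model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(responses, labels)

print(model.predict(["I am anxious and cannot sleep"]))  # predicted stress level
```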
{"title":"Gauging Stress, Anxiety, Depression in Student during COVID-19 Pandemic","authors":"Astha Singh, Divya Kumar","doi":"10.12694/scpe.v23i4.2012","DOIUrl":"https://doi.org/10.12694/scpe.v23i4.2012","url":null,"abstract":"During the beginning of COVID-19 pandemic, studies came across the world concerning with health issues. Researches began to find the repercussions of the virus. The virus was found to be versatile as it changes its nature and targets the lungs of a person. Later, it was seen an astonishing massacre around the world due to the virus. Many people have lost their life but many more people are still suffering with bad psychological state. Researchers began to research on the nature virus but very few researches were made on the other side-effects of this pandemic. One such crucial subject to attend in contemporary world is the effect of COVID-19 on psychological state in general population. This side-effect may lead to raise an alarming situation in future that could result in more death cases. The proposed paper presents a study on the detection of stress and depression in people caused by the pandemic. The proposed methodology is based on perceived questionnaire method through which people’s responses are recorded in the form of text. COVID victims have been interrogated against a set of questions and their responses are recorded. The methodology performs text mining of their responses that also include the people’s reaction from social networking sites. The text processing of people’s responses is done by natural language processing (NLP). NLP is used to interpret textural facts into meaningful segments that must be understandable to machine. The refined data has been transformed into PSS (perceived stress scale) scaling factor that ranges from 0 to 4 showing various level of stress. The proposed system utilized artificial intelligence in which naive Bayes classifier, K-nearest neighbor (KNN), Decision tree and Random forest algorithms are applied to predict the emotional state of a person. The proposed system also uses data from social networking site for testing purpose. The model successfully shows a comparative study of such three classifiers for the classification of stress level into stress, anxiety and depression.","PeriodicalId":43791,"journal":{"name":"Scalable Computing-Practice and Experience","volume":"224 1","pages":""},"PeriodicalIF":1.1,"publicationDate":"2022-12-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85958419","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Hybrid Hyper Chaotic Map with LSB for Image Encryption and Decryption
Pub Date: 2022-12-22 | DOI: 10.12694/scpe.v23i4.2018
Jahnavi Shankar, C. Nandini
Many images are transmitted over the web for various uses such as medical imaging, satellite imagery, military databases, broadcasting, confidential enterprise data, and banking. It is therefore important to protect images by securing sensitive information from intruders. This work proposes a Hybrid Hyper Chaotic Mapping that uses a 3D face mesh model for hiding a secret image. The model has a large range of chaotic parameters, which is helpful in chaotification approaches. The proposed system secures the secret image through encryption and decryption: the secret image is encrypted using chaos encryption with hyper hybrid mapping, which combines enhanced logistic and Henon maps to improve computational efficiency and embedding capacity. In the experiments, fingerprint and satellite images are used as secret images, and the encrypted secret image is embedded using the Least Significant Bit (LSB) method. The proposed method achieved an SNR of 77.85 dB on the 3D mesh model dataset, better than existing models that achieved 33.89 dB with reversible data hiding in the encrypted domain (RDH-ED) and 40 dB with Multiple Most Significant Bit (Multi-MSB) embedding. The proposed Hybrid Hyper Chaotic mapping also achieved a PSNR of 65.73 dB, compared with 21.19 dB for the existing permutation-substitution-and-Boolean-operation scheme and 21.27 dB for the Deoxyribonucleic Acid (DNA) level permutation-based logistic map.
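A minimal sketch of the two ingredients named here, chaos-based encryption followed by LSB embedding, is shown below using a plain logistic map keystream on toy arrays; the paper's hyper hybrid (enhanced logistic plus Henon) map and 3D face mesh cover are not reproduced, and the parameters x0 and r are illustrative assumptions.

```python
import numpy as np

def logistic_keystream(n: int, x0: float = 0.631, r: float = 3.99) -> np.ndarray:
    """Generate n chaotic bytes with the logistic map x <- r*x*(1-x)."""
    x, out = x0, np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = r * x * (1.0 - x)
        out[i] = int(x * 255) & 0xFF
    return out

def embed_lsb(cover: np.ndarray, secret_bits: np.ndarray) -> np.ndarray:
    """Write secret bits into the least significant bit of the cover pixels."""
    stego = cover.copy()
    stego[: secret_bits.size] = (stego[: secret_bits.size] & 0xFE) | secret_bits
    return stego

rng = np.random.default_rng(0)
secret = rng.integers(0, 256, 16, dtype=np.uint8)         # toy secret-image pixels
cipher = secret ^ logistic_keystream(secret.size)          # chaos-based encryption
bits = np.unpackbits(cipher)                                # bitstream to hide
cover = rng.integers(0, 256, bits.size, dtype=np.uint8)     # toy cover pixels
stego = embed_lsb(cover, bits)

# Recover: extract LSBs, repack into bytes, XOR with the same keystream.
recovered = np.packbits(stego[: bits.size] & 1) ^ logistic_keystream(secret.size)
assert np.array_equal(recovered, secret)
```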
{"title":"Hybrid Hyper Chaotic Map with LSB for Image Encryption and Decryption","authors":"Jahnavi Shankar, C. Nandini","doi":"10.12694/scpe.v23i4.2018","DOIUrl":"https://doi.org/10.12694/scpe.v23i4.2018","url":null,"abstract":"There are number of images that transmitted through the web for various usages like medical imaging, satellite images, military database, broadcasting, confidential enterprise, banking, etc. Thus, it is important to protect the images confidentially by securing sensitive information from an intruder. The present research work proposes a Hybrid Hyper Chaotic Mapping that considers a3D face Mesh model for hiding the secret image. The model has a larger range of chaotic parameters which are helpful in the chaotification approaches. The proposed system provides excellent security for the secret image through the process of encryption and decryption. The encryption of the secret image is performed by using chaos encryption with hyper hybrid mapping. The hyper hybrid mapping includes enhanced logistic and henon mapping to improve the computation efficiency for security to enhance embedding capacity. In the experiment Fingerprint and satellite image is used as secret image. The secret image is encrypted using a Least Significant Bit (LSB) for embedding an image. The results obtained by the proposed method showed better enhancements in terms of SNR for the 3D Mesh model dataset as 77.85 dB better compared to the existing models that achieved Reversible data hiding in the encrypted domain (RDH-ED) of 33.89 dB and Multiple Most Significant Bit (Multi-MSB) 40 dB. Also, the results obtained by the proposed Hybrid Hyper chaotic mapping showed PSNR of 65.73 dB better when compared to the existing Permutation Substitution and Boolean Operation that obtained 21.19 dB and 21.27 dB for the Deoxyribonucleic Acid (DNA) level permutation-based logistic map.","PeriodicalId":43791,"journal":{"name":"Scalable Computing-Practice and Experience","volume":"38 1","pages":""},"PeriodicalIF":1.1,"publicationDate":"2022-12-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75087902","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cognitive Perception for Scholastic Purposes using Innovative Teaching Strategies
Pub Date: 2022-12-22 | DOI: 10.12694/scpe.v23i4.2011
S. Aruna, Kuchibhotla Swarna
The influence of emotion on attention is particularly strong, changing its selectivity and motivating behavior and action. The degree to which a student participates in class determines their level of conceptual knowledge. Various teaching techniques have been developed over time to improve not only a student's attention but also their engagement, and the level of engagement can help gauge how much understanding a student attains during a session. Although these techniques have evolved, their effectiveness has mainly been verified with assessment-based methods. Research in neuroscience suggests that a person's emotions can help determine their level of participation, and the affective circumplex model describes the correlation between emotions and engagement. Taking this into account, we developed an attentiveness model that combines an emotion recognition model (built with the VGG-16 CNN architecture) and an eye-tracking system to analyze the engagement displayed by students in class. Applying this model to students under various teaching models helps determine the effectiveness of different teaching methodologies relative to traditional methods of teaching.
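A hedged sketch of the emotion-recognition component is given below: a VGG-16 convolutional base with a small classification head, as one plausible reading of "VGG-16 architecture in CNN". The number of emotion classes, input size, and layer-freezing strategy are assumptions, and the eye-tracking component is not shown.

```python
import tensorflow as tf

NUM_EMOTIONS = 7  # e.g. FER-style classes; the exact label set is an assumption

# VGG-16 convolutional base with a small dense head for emotion classification.
base = tf.keras.applications.VGG16(include_top=False, weights="imagenet",
                                   input_shape=(224, 224, 3))
base.trainable = False  # freezing the backbone is an assumption

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(NUM_EMOTIONS, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(face_images, emotion_labels, ...)  # training data not shown here
```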
{"title":"Cognitive Perception for Scholastic Purposes using Innovative Teaching Strategies","authors":"S. Aruna, Kuchibhotla Swarna","doi":"10.12694/scpe.v23i4.2011","DOIUrl":"https://doi.org/10.12694/scpe.v23i4.2011","url":null,"abstract":"The influence of emotion on attention is particularly strong, changing its selectivity in particular and motivating behavior and action. The degree to which a student participates in class determines their level of conceptual knowledge. Various teaching techniques have been developed over time to improve not only the attention of a student but also their engagement of a student. The level of engagement of a student can help us decide the amount of understanding a student can attain throughout the session. Though these techniques have been developed over time, the basic tests to determine the authenticity of these activities have been done mainly by the use of assessment-based methods. According to research in the field of neuroscience, a person's emotions can assist us to determine a student's level of participation. We also have the affective circumplex model to show us the correlation between emotions and the level of engagement of a person. Taking this into account, we developed an attentivity model with the help of an emotion recognition model (made with the help of VGG-16 architecture in CNN) and the eye tracking system to analyze the amount of engagement being displayed by the student in the class. This model applied to the students on the various teaching models helps us in deciding the effectiveness of various teaching methodologies for the primitive methods of teaching.","PeriodicalId":43791,"journal":{"name":"Scalable Computing-Practice and Experience","volume":"97 1","pages":""},"PeriodicalIF":1.1,"publicationDate":"2022-12-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84615580","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Map-Reduce based Distance Weighted k-Nearest Neighbor Machine Learning Algorithm for Big Data Applications
Pub Date: 2022-12-22 | DOI: 10.12694/scpe.v23i4.1987
E. Gothai, V. Muthukumaran, K. Valarmathi, Sathishkumar V E, N. Thillaiarasu, P. Karthikeyan
With the evolution of Internet standards and advances in Internet and mobile technologies, especially since Web 4.0, more and more web and mobile applications have emerged, such as e-commerce, social networks, online gaming, and Internet of Things applications. Due to the deployment and concurrent access of these applications on the Internet and mobile devices, both the volume and the variety of generated data grow exponentially, and the new era of Big Data has come into existence. Presently available data structures and data analysis algorithms cannot handle such Big Data, so there is a need for scalable, flexible, parallel, and intelligent algorithms to handle and analyze complex massive data. In this article, we propose a novel distributed supervised machine learning algorithm, MR-DWkNN, based on the MapReduce programming model and the Distance-Weighted k-Nearest Neighbor algorithm, to process and analyze Big Data in a Hadoop cluster environment. The proposed distributed algorithm performs both regression and classification tasks on large-volume Big Data applications. Three performance metrics are used to evaluate MR-DWkNN: Root Mean Squared Error (RMSE) and the coefficient of determination (R2) for regression, and accuracy for classification. Extensive experimental results show an average improvement of 3% to 4.5% in prediction and classification performance compared with a standard distributed k-NN algorithm, along with a considerable decrease in RMSE and good parallelism characteristics of scalability and speedup, which proves its effectiveness in Big Data prediction and classification applications.
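The core distance-weighted kNN step that MR-DWkNN distributes can be sketched in a single process as below: each training point in a partition votes with weight 1/distance, which is what a mapper would compute per split before a reducer merges the best candidates. This is an illustrative, non-Hadoop sketch on synthetic data, not the authors' implementation.

```python
import numpy as np

def dwknn_predict(X_train, y_train, x_query, k=5, eps=1e-9):
    """Distance-weighted kNN: the k nearest neighbours vote with weight 1/distance."""
    d = np.linalg.norm(X_train - x_query, axis=1)
    idx = np.argsort(d)[:k]
    weights = 1.0 / (d[idx] + eps)
    votes = {}
    for label, w in zip(y_train[idx], weights):
        votes[label] = votes.get(label, 0.0) + w
    return max(votes, key=votes.get)

# Toy data standing in for one map-side partition of the training set.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
print(dwknn_predict(X, y, np.array([0.5, 0.2, -0.1]), k=7))
```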
{"title":"Map-Reduce based Distance Weighted k-Nearest Neighbor Machine Learning Algorithm for Big Data Applications","authors":"E. Gothai, V. Muthukumaran, K. Valarmathi, Sathishkumar V E, N. Thillaiarasu, P. Karthikeyan","doi":"10.12694/scpe.v23i4.1987","DOIUrl":"https://doi.org/10.12694/scpe.v23i4.1987","url":null,"abstract":"With the evolution of Internet standards and advancements in various Internet and mobile technologies, especially since web 4.0, more and more web and mobile applications emerge such as e-commerce, social networks, online gaming applications and Internet of Things based applications. Due to the deployment and concurrent access of these applications on the Internet and mobile devices, the amount of data and the kind of data generated increases exponentially and the new era of Big Data has come into existence. Presently available data structures and data analyzing algorithms are not capable to handle such Big Data. Hence, there is a need for scalable, flexible, parallel and intelligent data analyzing algorithms to handle and analyze the complex massive data. In this article, we have proposed a novel distributed supervised machine learning algorithm based on the MapReduce programming model and Distance Weighted k-Nearest Neighbor algorithm called MR-DWkNN to process and analyze the Big Data in the Hadoop cluster environment. The proposed distributed algorithm is based on supervised learning performs both regression tasks as well as classification tasks on large-volume of Big Data applications. Three performance metrics, such as Root Mean Squared Error (RMSE), Determination coefficient (R2) for regression task, and Accuracy for classification tasks are utilized for the performance measure of the proposed MR-DWkNN algorithm. The extensive experimental results shows that there is an average increase of 3% to 4.5% prediction and classification performances as compared to standard distributed k-NN algorithm and a considerable decrease of Root Mean Squared Error (RMSE) with good parallelism characteristics of scalability and speedup thus, proves its effectiveness in Big Data predictive and classification applications.","PeriodicalId":43791,"journal":{"name":"Scalable Computing-Practice and Experience","volume":"69 1","pages":""},"PeriodicalIF":1.1,"publicationDate":"2022-12-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84170064","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Integrating Collaborative Filtering Technique Using Rating Approach to Ascertain Similarity Between the Users
Pub Date: 2022-12-22 | DOI: 10.12694/scpe.v23i4.2015
C. Pavithra, M. Saradha
A recommender system handles a plethora of data by filtering the most crucial information based on the dataset provided by a user and other criteria that are taken into account (i.e., the user's choices and interests). It determines whether a user and an item are compatible and assumes similarity between them in order to make recommendations. The recommendation system uses the singular value decomposition method as its collaborative filtering technique. The objective of this paper is to propose a recommendation system able to recommend products to users based on ratings. We collect the essential information required for recommendation, such as the ratings users give on e-commerce platforms. Initially the gathered dataset is sparse, and cosine similarity is used to find the similarity between users. Subsequently, we collect non-sparse data and use the Euclidean and Manhattan distance methods to measure the distance between users, plotting the results as a graph; this confirms similar likings and preferences between them. This method of making recommendations is more reliable and attainable.
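The similarity measures named here can be illustrated directly on two hypothetical user rating vectors; the sketch below computes cosine similarity, Euclidean distance, and Manhattan distance with SciPy. The ratings are made up for illustration and are not the paper's data.

```python
import numpy as np
from scipy.spatial.distance import cosine, euclidean, cityblock

# Hypothetical rating vectors for two users over the same five items (0 = unrated).
user_a = np.array([5, 3, 0, 4, 2], dtype=float)
user_b = np.array([4, 2, 1, 5, 1], dtype=float)

print("cosine similarity :", 1 - cosine(user_a, user_b))   # 1.0 = identical taste
print("euclidean distance:", euclidean(user_a, user_b))    # smaller = more similar
print("manhattan distance:", cityblock(user_a, user_b))    # sum of absolute differences
```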
{"title":"Integrating Collaborative Filtering Technique Using Rating Approach to Ascertain Similarity Between the Users","authors":"C. Pavithra, M. Saradha","doi":"10.12694/scpe.v23i4.2015","DOIUrl":"https://doi.org/10.12694/scpe.v23i4.2015","url":null,"abstract":"The recommender system handles the plethora of data by filtering the most crucial information based on the dataset provided by a user and other criterion that are taken into account.(i.e., user's choice and interest). It determines whether a user and an item are compatible and then assumes that they are similar in order to make recommendations. Recommendation system uses Singular value decomposition method as collaborative filtering technique. The objective of this research paper is to propose the recommendation system that has an ability to recommend products to users based on ratings. We collect essential information like ratings given by the users from e-commerce that are required for recommendation, Initially the dataset that are gathered are sparse dataset, cosine similarity is used to find the similarity between the users. Subsequently, we collect non-sparse data and use Euclidian distance and Manhattan distance method to measure the distance between users and the graph is plotted, this ensures the similar liking and preferences between them. This method of making recommendations are more reliable and attainable.","PeriodicalId":43791,"journal":{"name":"Scalable Computing-Practice and Experience","volume":"26 1","pages":""},"PeriodicalIF":1.1,"publicationDate":"2022-12-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84988516","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Computer-aided Diagnosis applied to MRI images of Brain Tumor using Spatial Fuzzy Level Set and ANN Classifier
Pub Date: 2022-12-22 | DOI: 10.12694/scpe.v23i4.2024
S. Virupakshappa, Sachinkumar Veerashetty, N. Ambika
The most vital organs in the human body are the brain, heart, and lungs. Because the brain controls and coordinates the operations of all other organs, normal brain function is vital. A brain tumour is a mass of tissue that interrupts normal brain function and, if left untreated, leads to the death of the subject. This paper proposes the classification of multiclass brain tumours using spatial fuzzy level sets and artificial neural network (ANN) techniques. In the proposed method, images are preprocessed with median filtering, the tumour boundaries are obtained with a spatial fuzzy level set method, and features are extracted using Gabor wavelets and the Gray-Level Run Length Matrix (GLRLM). Finally, an ANN classifies each image as normal, benign tumour, or malignant tumour. The proposed method was implemented on the MATLAB platform and achieved a classification accuracy of 94%, which is significant compared with state-of-the-art classification techniques. The proposed method thus helps differentiate between benign and malignant brain tumours, enabling doctors to provide adequate treatment.
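A reduced sketch of the feature-extraction-plus-ANN stage is shown below: Gabor filter responses summarized as features and fed to a small multilayer perceptron. The median filtering, level-set segmentation, and GLRLM features are omitted, and the toy images and labels are assumptions rather than the paper's MRI data.

```python
import numpy as np
from skimage.filters import gabor
from sklearn.neural_network import MLPClassifier

def gabor_features(image: np.ndarray) -> np.ndarray:
    """Mean/std of Gabor responses at a few orientations (a reduced feature set)."""
    feats = []
    for theta in (0, np.pi / 4, np.pi / 2, 3 * np.pi / 4):
        real, _ = gabor(image, frequency=0.2, theta=theta)
        feats += [real.mean(), real.std()]
    return np.array(feats)

# Toy grayscale patches standing in for segmented tumour regions.
rng = np.random.default_rng(2)
images = rng.random((30, 32, 32))
labels = rng.integers(0, 3, 30)   # 0 = normal, 1 = benign, 2 = malignant (toy labels)

X = np.stack([gabor_features(img) for img in images])
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0).fit(X, labels)
print(clf.predict(X[:5]))
```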
{"title":"Computer-aided Diagnosis applied to MRI images of Brain Tumor using Spatial Fuzzy Level Set and ANN Classifier","authors":"S. Virupakshappa, Sachinkumar Veerashetty, N. Ambika","doi":"10.12694/scpe.v23i4.2024","DOIUrl":"https://doi.org/10.12694/scpe.v23i4.2024","url":null,"abstract":"The most vital organs in the human body are the brain, heart, and lungs. Because the brain controls and coordinates the operations of all other organs, normal brain function is vital. Brain tumour is a mass of tissues which interrupts the normal functioning of the brain, if left untreated will lead to the death of the subject. The classification of multiclass brain tumours using spatial fuzzy based level sets and artificial neural network (ANN) techniques is proposed in this paper. In the proposed method, images are preprocessed using Median Filtering technique, the boundaries of the Brain Tumor are obtained using Spatial Fuzzy based Level Set method, features are extracted using Gabor Wavelet and Gray-Level Run Length Matrix (GLRLM) methods. Finally ANN technique is used for the classification of the image into Normal or Benign Tumor or Malignant Tumor. The proposed method was implemented in the MATLAB working platform and achieved classification accuracy of 94%, which is significant compared to state-of-the-art classification techniques. Thus, the proposed method assist in differentiating between benign and malignant brain tumours, enabling doctors to provide adequate treatment.","PeriodicalId":43791,"journal":{"name":"Scalable Computing-Practice and Experience","volume":"133 1","pages":""},"PeriodicalIF":1.1,"publicationDate":"2022-12-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85253776","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multimodal Medical Image Fusion using Hybrid Domains
Pub Date: 2022-12-22 | DOI: 10.12694/scpe.v23i4.2022
A. Naidu, D. Bhavana
In a variety of clinical applications, image fusion is critical for merging data from multiple sources into a single, more interpretable result. Medical image fusion technologies can assist the physician in executing combined procedures across the diagnostic process, which includes preoperative planning, intraoperative supervision, and interventional treatment. In this work, an image fusion technique is proposed that uses a combined PCA and CNN model: a real-time fusion method that employs pre-trained neural networks to synthesize a single image from several sources. An innovative technique for merging the images is built from deep neural network feature maps and a convolutional network. Image fusion has become increasingly popular as a result of the large variety of capture techniques available. The proposed design is implemented using deep learning and achieves accuracy around 15% higher than the existing design. The proposed fusion algorithm is verified through simulation experiments on different multimodality images, and the experimental results are evaluated with a number of well-known performance evaluation metrics.
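One common way to realize the PCA side of a PCA+CNN fusion model is to weight each registered source image by the leading eigenvector of the pixel covariance; the sketch below shows that classic PCA fusion rule on toy patches. The CNN feature-map branch is not shown, and the inputs are synthetic stand-ins for registered multimodal scans.

```python
import numpy as np

def pca_fuse(img1: np.ndarray, img2: np.ndarray) -> np.ndarray:
    """Classic PCA fusion: weight each source by the leading eigenvector component."""
    data = np.stack([img1.ravel(), img2.ravel()])          # 2 x N matrix of pixels
    cov = np.cov(data)
    eigvals, eigvecs = np.linalg.eigh(cov)
    v = np.abs(eigvecs[:, np.argmax(eigvals)])             # leading eigenvector
    w = v / v.sum()                                        # normalised fusion weights
    return w[0] * img1 + w[1] * img2

# Toy multimodal pair (e.g. CT-like and MRI-like patches), illustrative only.
rng = np.random.default_rng(3)
ct_like, mri_like = rng.random((64, 64)), rng.random((64, 64))
fused = pca_fuse(ct_like, mri_like)
print(fused.shape, fused.min(), fused.max())
```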
{"title":"Multimodal Medical Image Fusion using Hybrid Domains","authors":"A. Naidu, D. Bhavana","doi":"10.12694/scpe.v23i4.2022","DOIUrl":"https://doi.org/10.12694/scpe.v23i4.2022","url":null,"abstract":"In a variety of clinical applications, image fusion is critical for merging data from multiple sources into a single, more understandable outcome. The use of medical image fusion technologies to assist the physician in executing combination procedures can be advantageous. The diagnostic process includes preoperative planning, intra operative supervision, an interventional treatment. In this thesis, a technique for image fusion was suggested that used a combination model of PCA and CNN. A method of real-time image fusion that employs pre-trained neural networks to synthesize a single image from several sources in real-time. A innovative technique for merging the images is created based on deep neural network feature maps and a convolution network. Picture fusion has become increasingly popular as a result of the large variety of capturing techniques available. The proposed design is implemented using deep learning technique. The accuracy of the proposed design is around 15% higher than the existing design. The proposed fusion algorithm is verified through a simulation experiment on different multimodality images. Experimental results are evaluated by the number of well-known performance evaluation metrics \u0000 ","PeriodicalId":43791,"journal":{"name":"Scalable Computing-Practice and Experience","volume":"9 1","pages":""},"PeriodicalIF":1.1,"publicationDate":"2022-12-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82045973","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An Efficient Novel Approach with Multi Class Label Classification through Machine Learning Models for Pancreatic Cancer
Pub Date: 2022-12-22 | DOI: 10.12694/scpe.v23i4.2019
P. Santosh, M. C. Sekhar
Pancreatic cancer is currently the fourth-leading cause of cancer-related deaths. Early diagnosis is one good solution for pancreatic cancer patients and reduces the mortality rate, but accurate early diagnosis of pancreatic tumours is demanding due to factors such as delayed diagnosis and the absence of early warning symptoms. Conventional distributed machine learning techniques such as SVM and logistic regression were not efficient at minimizing the error rate and improving the classification accuracy for pancreatic cancer. Therefore, a novel technique called the Distributed Hybrid Elitism gene Quadratic discriminant Reinforced Learning Classifier System (DHEGQDRLCS) is developed in this paper. First, data samples are collected from a repository dataset containing the files necessary for identifying prognostic biomarkers for pancreatic cancer. After data collection, the samples are separated into training and testing sets for accurate classification of pancreatic cancer samples. The training samples are then passed to the DHEGQDRLCS, which uses a kernel quadratic discriminant function to analyze them. Elitism gradient gene optimization is then applied to classify the samples into multiple classes such as non-cancerous pancreas, benign hepatobiliary disease, and pancreatic ductal adenocarcinoma (i.e., pancreatic cancer). A reinforced learning technique is applied to minimize the loss function based on the target and predicted classification results. Finally, the hybridized approach improves pancreatic cancer diagnostic accuracy. Experimental evaluation is carried out on a pancreatic cancer dataset with a Hadoop distributed system using quantitative metrics such as accuracy, balanced accuracy, F1-score, precision, recall, specificity, TN, TP, FN, FP, ROC_AUC, PRC_AUC, and PRC_APS. The performance analysis indicates that DHEGQDRLCS provides better diagnostic accuracy than existing methods.
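As a hedged stand-in for the kernel quadratic discriminant step of DHEGQDRLCS, the sketch below fits a plain quadratic discriminant analysis model to synthetic biomarker features for the three classes listed above; the elitism gene optimization and reinforced-learning stages are not reproduced, and the features are illustrative only.

```python
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

# Toy biomarker features for the three classes named above:
# 0 = non-cancerous pancreas, 1 = benign hepatobiliary disease, 2 = pancreatic ductal adenocarcinoma.
rng = np.random.default_rng(4)
X = np.vstack([rng.normal(loc=m, size=(50, 4)) for m in (0.0, 1.0, 2.0)])
y = np.repeat([0, 1, 2], 50)

qda = QuadraticDiscriminantAnalysis().fit(X, y)
print("training accuracy:", qda.score(X, y))
```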
{"title":"An Efficient Novel Approach with Multi Class Label Classification through Machine Learning Models for Pancreatic Cancer","authors":"P. Santosh, M. C. Sekhar","doi":"10.12694/scpe.v23i4.2019","DOIUrl":"https://doi.org/10.12694/scpe.v23i4.2019","url":null,"abstract":"Pancreatic cancer is right now the fourth largest cause of cancer-related deaths. Early diagnosis is one good solution for pancreatic cancer patients and reduces the mortality rate. Accurate and earlier diagnosis of the pancreatic tumor is a demanding task due to several factors such as delayed diagnosis and absence of early warning symptoms. The conventional distributed machine learning techniques such as SVM and logistic regression were not efficient to minimize the error rate and improve the classification of pancreatic cancer with higher accuracy. Therefore, a novel technique called Distributed Hybrid Elitism gene Quadratic discriminant Reinforced Learning Classifier System (DHEGQDRLCS) is developed in this paper. First, the number of data samples is collected from the repository dataset. This repository contains all the necessary files for the identification of prognostic biomarkers for pancreatic cancer. After the data collection, the separation of training and testing samples is performed for the accurate classification of pancreatic cancer samples. Then the training samples are considered and applied to Distributed Hybrid Elitism gene Quadratic discriminant Reinforced Learning Classifier System. The proposed hybrid classifier system uses the Kernel Quadratic Discriminant Function to analyze the training samples. After that, the Elitism gradient gene optimization is applied for classifying the samples into multiple classes such as non-cancerous pancreas, benign hepatobiliary disease i.e., pancreatic cancer, and Pancreatic ductal adenocarcinoma. Then the Reinforced Learning technique is applied to minimize the loss function based on target classification results and predicted classification results. Finally, the hybridized approach improves pancreatic cancer diagnosing accuracy. Experimental evaluation is carried out with pancreatic cancer dataset with Hadoop distributed system and different quantitative metrics such as Accuracy, balanced accuracy, F1-score, precision, recall, specificity, TN, TP, FN, FP, ROC_AUC, PRC_AUC, and PRC_APS. The performance analysis results indicate that the DHEGQDRLCS provides better diagnosing accuracy when compared to existing methods.","PeriodicalId":43791,"journal":{"name":"Scalable Computing-Practice and Experience","volume":"12 1","pages":""},"PeriodicalIF":1.1,"publicationDate":"2022-12-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86299056","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Prediction of NAC Response in Breast Cancer Patients Using Neural Network
Pub Date: 2022-12-22 | DOI: 10.12694/scpe.v23i4.2021
Susmitha Uddaraju, G. P. Saradhi Varma, M. R. Narasingarao
Breast cancer is now the most prominent female cancer in both developing and developed nations, and it is the largest risk factor for mortality worldwide. Notwithstanding the well-documented declines in breast cancer mortality during the last twenty years, incidence rates continue to rise, and do so more rapidly in nations where rates were previously low. This has highlighted the significance of survival concerns and the duration of treatment. Patient data collected from the hospital after the first chemotherapy cycle is analysed using a neural network. The proposed architecture indicates whether the patient is responding to chemotherapy and also gives the risk factor for surgery; early prediction of such outcomes gives a broader idea of how treatment should proceed. Once breast cancer is detected and chemotherapy has begun, it becomes very important to check whether the patient is responding to it. The proposed system architecture is therefore designed to detect whether the patient is responding to chemotherapy; if not, the patient should proceed to surgery. The proposed system is compared with existing machine learning and neural network techniques such as support vector machine (SVM) and decision tree (DT) algorithms. The proposed neural network architecture gives 99.19% accuracy, whereas SVM and DT give 89.15% and 74.82%. Breast cancer is known to have asymptomatic stages, which are detected only by mammography; around 10% of patients undergoing mammography are recalled for further assessment, and among them 8 to 10% require breast biopsy. Careful reading of a mammogram by a radiologist generally takes 30 to 60 seconds per image, and the sensitivity and specificity of human radiologists' mammography reading have been reported at 77-87% and 89-97%, respectively. Recently, double reads have been adopted in most screening programs, but this further increases the time burden on human radiologists. Lately, advances in artificial intelligence (AI) have made it possible to detect disease automatically in clinical images in radiology, pathology, and even gastroenterology. For breast cancer screening, deeper studies have also been conducted, reporting sensitivities of 86.1 to 9.0% and specificities of 79.0 to 90.0%. Nevertheless, there are few publications on cancer detection in mammography for Asian populations, who have higher breast density than white individuals, and breast density can affect the cancer detection rate of mammography images. Hence, the purpose of this study was to develop and validate a deep learning model that automatically detects malignant breast lesions in Asian digital mammograms and to examine the performance of the model by breast density level. We have introduced our own pret
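A minimal sketch of the reported comparison, a neural network against SVM and decision tree classifiers on post-chemotherapy features, is given below using synthetic data in place of the hospital cohort; the feature set, labels, and resulting accuracies are illustrative assumptions, not the study's results.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic post-first-cycle features standing in for the hospital data (not the real cohort).
rng = np.random.default_rng(5)
X = rng.normal(size=(300, 8))
y = (X[:, 0] - 0.5 * X[:, 3] + rng.normal(scale=0.5, size=300) > 0).astype(int)  # 1 = responding

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
models = [
    ("Neural network", MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)),
    ("SVM", SVC()),
    ("Decision tree", DecisionTreeClassifier(random_state=0)),
]
for name, clf in models:
    clf.fit(X_tr, y_tr)
    print(f"{name:15s} accuracy: {clf.score(X_te, y_te):.3f}")
```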
{"title":"Prediction of NAC Response in Breast Cancer Patients Using Neural Network","authors":"Susmitha Uddaraju, G. P. Saradhi Varma, M. R. Narasingarao","doi":"10.12694/scpe.v23i4.2021","DOIUrl":"https://doi.org/10.12694/scpe.v23i4.2021","url":null,"abstract":"Breast cancer is now the most prominent female cancer in both developing and developed nations, and that it is the largest risk factor for mortality worldwide. Notwithstanding the well-documented declines in breast cancer mortality during the last twenty years, occurrence rates continue to rise, and do so more rapidly in nations where rates were previously low. This has highlighted the significance of survival concerns and illness duration treatment. Patient data after first chemotherapy is collected from the hospital and this data is then analysed using neural network. Proposed architecture gives result as the patient is responding to the chemotherapy or not. Moreover, it also gives the risk factor in surgery. Early prediction of such things gives broader idea about how treatment should go. Once the Breast cancer is detected and if chemotherapy is done, then it becomes very important to check whether patient is responding to the chemotherapy or not. So, the proposed system architecture is designed in such a way that it detects if the patient is responding to the chemotherapy or not. And if patient is not responding to the chemotherapy, then patient should go to the surgery. The proposed system is also compared with the existing algorithms machine learning and neural network techniques like support vector machine (SVM) and Decision Tree(DT) algorithms. The proposed neural network architecture gives 99.19% accuracy where SVM and DT gives 89.15% and 74.82%. Bosom disease is known to have asymptomatic stages, which is distinguished simply by mammography and around 10% of patients getting mammography recovers further assessments, and among them 8 to 10% require bosom biopsy. Alert the cautious consideration of the radiologist to peruse mammograms to perceive mammograms is generally 30 to 60 seconds for every picture. In any case, the weakness and explicitness of human radiologist's mammography was controlled by 77-87% and 89-97%, individually. As of late, twofold peruses are allowed with most screening programs, yet this will additionally disintegrate the time heap of human radiologists. As of late, the headway of man-made brainpower (AI) has made it conceivable to recognize programmed infection on clinical pictures in radiology, pathology, and even gastrointestinalities. For bosom malignant growth screening, all the more profound examinations have additionally been led, 86.1 to 9.0% responsiveness and 79.0 to 90.0% exceptional elements. By and by, there are a couple of distributions for built up disease location of mammography under Asian with higher bosom thickness contrasted with white individuals. Bosom thickness can influence the malignant growth pace of mammography pictures. Hence, the motivation behind this study was to create and approve a profound learning model that consequently recognizes threatening bosom sores in Asian advanced mammograms and to inspect the exhibition of the model by bosom thickness level. 
We have acquainted our own pret","PeriodicalId":43791,"journal":{"name":"Scalable Computing-Practice and Experience","volume":"43 1","pages":""},"PeriodicalIF":1.1,"publicationDate":"2022-12-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79251008","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}