Pub Date : 2021-04-06DOI: 10.1109/CAIDA51941.2021.9425088
Shaik Abdul Waheed, S. Revathi, Mohammed Abdul Matheen, Amairullah Khan Lodhi, Mohammed Ashrafuddin, G.S. Maboobatcha
Emotions are distinct biological states brought on by neurophysiological changes in the nervous system, and they give rise to thoughts, feelings, behavioral responses, and a degree of pleasure or displeasure. These emotions play a vital role in understanding the human response to any event a person experiences. The system uses an IoT-based EEG headband device and various machine learning algorithms to understand a person's emotional reaction to any incident they undergo. In this paper, we propose a method to detect and recognize the emotional changes in a human who is exposed to various images.
Title: Processing of Human Motions using Cost Effective EEG Sensor and Machine Learning Approach
Published in: 2021 1st International Conference on Artificial Intelligence and Data Analytics (CAIDA)
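The abstract does not name the exact features or classifiers, so the following is only a plausible sketch of the usual EEG pipeline for emotion work: compute band power (here, the 8-13 Hz alpha band) from one headband channel with an FFT, and feed such features to a classifier. The sampling rate, band limits, and signal are assumptions, not the paper's method.

```python
import numpy as np

def band_power(signal, fs, lo, hi):
    """Sum of FFT power within the frequency band [lo, hi) Hz."""
    freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= lo) & (freqs < hi)
    return float(psd[mask].sum())

fs = 256                                   # assumed headband sampling rate
t = np.arange(0, 2, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t)           # synthetic 10 Hz (alpha-band) signal
alpha = band_power(eeg, fs, 8, 13)
beta = band_power(eeg, fs, 13, 30)
print(alpha > beta)  # True: energy is concentrated in the alpha band
```

In a real pipeline, one such power value per band and per channel would form the feature vector handed to the machine learning stage.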
Pub Date : 2021-04-06DOI: 10.1109/CAIDA51941.2021.9425249
Shabana Habib, I. Khan, Muhammad Islam, Waleed Albattah, Saleh Mohammed Alyahya, Sheroz Khan, Md. Kamrul Hassan
Saudi Arabia is experiencing depleting water levels, which ultimately lead to reduced weed and crop cover on farms. The current practice of watering all kinds of weeds on farms is manual, which is laborious and slow, and the unregulated use of water is wasteful. There is therefore a need for automated water control, making an automated watering system the viable option for a precision weed control system. This paper presents the development of a real-time automated water-sprinkling system for the target weeding area. A wavelet-frequency technique is developed as a software interface in MATLAB that detects the need for water sprinkling from images of the leaves. The software-based results are applied to hardware for real-time grass detection and classification based on shape and density, since leaves may be wide open, shrunken, or curved. The real-time system can thus decide the proportionate amount of water to be sprinkled over the weeds using purpose-built hardware. The system detects areas where more or less water is needed through a high-accuracy connected camera.
Title: Wavelet Frequency Transformation for Specific Weeds Recognition
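As a rough illustration of wavelet-frequency analysis of a leaf image (a generic sketch, not the paper's MATLAB implementation), a single-level 2D Haar transform splits the image into approximation and detail sub-bands whose energies can serve as texture features:

```python
import numpy as np

def haar_dwt_2d(img):
    """One-level 2D Haar wavelet transform (assumes even dimensions)."""
    a = (img[0::2] + img[1::2]) / 2   # row-wise approximation
    d = (img[0::2] - img[1::2]) / 2   # row-wise detail
    LL = (a[:, 0::2] + a[:, 1::2]) / 2
    LH = (a[:, 0::2] - a[:, 1::2]) / 2
    HL = (d[:, 0::2] + d[:, 1::2]) / 2
    HH = (d[:, 0::2] - d[:, 1::2]) / 2
    return LL, LH, HL, HH

def band_energies(img):
    """Energy of each sub-band, usable as a 4-value texture descriptor."""
    return [float(np.sum(b ** 2)) for b in haar_dwt_2d(np.asarray(img, float))]
```

For a flat (featureless) image the three detail-band energies are zero; leaf edges and texture raise them, which is what makes such features usable for shape/density classification.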
Pub Date : 2021-04-06DOI: 10.1109/CAIDA51941.2021.9425233
Ayidh Alqahtani, Ryiad Alshmmari, Mohammed Alzunitan, Amjad M Ahmed, A. Mukhtar, Nasser Alqahtani
Congestive heart failure (CHF) is one of the diseases with a high burden on healthcare systems. Patient visits and follow-ups at out-patient clinics are associated with high direct and indirect costs and affect patient treatment outcomes. In this study, we test machine learning models to predict the risk level and class of CHF patients in order to confidently extend the time to the next out-patient cardiac clinic visit. Records for 700 patients were statistically analyzed with the Waikato Environment for Knowledge Analysis version 3.9.4 (Weka) using eight different machine learning models. Among the eight models tested, Random Forest and Logistic Regression were found to be the best. Overall performance was promising, with excellent precision, recall, F-measure, and ROC for the Random Forest and Logistic Regression models and accuracy of around 0.89. Future work with more balanced datasets and records is needed to test such models, which could save healthcare systems substantial direct and indirect costs and improve patient outcomes.
Title: Predicting Congestive Heart Failure Risk Factors in King Abdulaziz Medical City: A Machine Learning Approach
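The paper's models were built in Weka; as a hedged Python analog (synthetic "patient" data, not the hospital records), a tiny logistic regression trained by gradient descent shows how the reported metrics (accuracy, precision, recall) are computed:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(700, 4))                      # 700 records, 4 features
w_true = np.array([1.5, -2.0, 1.0, 0.5])
y = (X @ w_true + rng.normal(scale=0.3, size=700) > 0).astype(float)

w = np.zeros(4)
for _ in range(500):                               # gradient descent on log-loss
    p = 1 / (1 + np.exp(-(X @ w)))
    w -= 0.1 * (X.T @ (p - y)) / len(y)

pred = (1 / (1 + np.exp(-(X @ w))) > 0.5).astype(float)
tp = np.sum((pred == 1) & (y == 1))
accuracy = np.mean(pred == y)                      # fraction correct
precision = tp / pred.sum()                        # TP / predicted positives
recall = tp / y.sum()                              # TP / actual positives
print(accuracy)  # high on this nearly separable synthetic data
```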
Pub Date : 2021-04-06DOI: 10.1109/CAIDA51941.2021.9425236
Vajeeha Mir Khatian, Qasim Ali Arain, Mamdouh Alenezi, Muhammad Owais Raza, Fariha Shaikh, Isma Farah
Functional requirements (FRs) and non-functional requirements (NFRs) are two important aspects of the requirements gathering phase (RGP) in any system development life cycle (SDLC) model. FRs are much simpler to understand and are easily extracted from user stories during the RGP. NFRs, on the other hand, are critical: they play a significant role in improving product quality and are used in determining the acceptance of a designed system. Within the NFRs, several quality factors focus on specific quality attributes of a system, such as security, performance, and reliability. Classifying NFRs into these categories is a challenging task. This paper focuses on predicting the classification of NFRs using supervised machine learning (ML), followed by a comparative analysis of five ML algorithms: decision tree, k-nearest neighbor (KNN), random forest classifier (RFC), naïve Bayes, and logistic regression (LR). The study was conducted in two phases. In the first phase, a model was designed that accepts a dataset of textual data in which 11 quality attributes are targeted for prediction, with evaluation based on 15% test data and 85% training data. In the second phase, the performance of each algorithm was evaluated using four metrics: precision, recall, accuracy, and the confusion matrix. The results of the comparative analysis show that LR performs best, with high prediction rates and 75% accuracy; naïve Bayes came second with 66% accuracy, the decision tree third with 60%, RFC fourth with 53%, and KNN lowest with 50%. The LR algorithm should therefore be preferred for predicting the classification of NFRs.
Title: Comparative Analysis for Predicting Non-Functional Requirements using Supervised Machine Learning
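Since the paper does not publish code, here is a minimal illustration of NFR-style text classification with one of the compared algorithms, multinomial naïve Bayes over bag-of-words counts. The example sentences and the labels "security" and "performance" are invented for the sketch:

```python
import numpy as np

class TinyMultinomialNB:
    """Bag-of-words multinomial naive Bayes with Laplace smoothing."""
    def fit(self, docs, labels):
        self.vocab = sorted({w for d in docs for w in d.split()})
        self.classes = sorted(set(labels))
        self.idx = {w: i for i, w in enumerate(self.vocab)}
        counts = np.ones((len(self.classes), len(self.vocab)))  # Laplace prior
        prior = np.zeros(len(self.classes))
        for d, y in zip(docs, labels):
            c = self.classes.index(y)
            prior[c] += 1
            for w in d.split():
                counts[c, self.idx[w]] += 1
        self.log_prior = np.log(prior / prior.sum())
        self.log_lik = np.log(counts / counts.sum(axis=1, keepdims=True))
        return self

    def predict(self, doc):
        score = self.log_prior.copy()
        for w in doc.split():
            if w in self.idx:                     # ignore unseen words
                score += self.log_lik[:, self.idx[w]]
        return self.classes[int(np.argmax(score))]

docs = ["the system must encrypt all passwords",
        "login must respond within two seconds",
        "data must be encrypted at rest",
        "pages must load within one second"]
labels = ["security", "performance", "security", "performance"]
clf = TinyMultinomialNB().fit(docs, labels)
print(clf.predict("passwords must be encrypted"))  # security
```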
Pub Date : 2021-04-06DOI: 10.1109/CAIDA51941.2021.9425031
Ismam Hussain Khan, Md Habib Ullah Khan, M. K. Howlader
In recent times there have been many studies and systems dealing with restaurant rating or individual food rating, but rating a recipe using Artificial Intelligence is rare. This study aims to rate recipes based on different attributes using different Machine Learning algorithms, and compares the performance of different classifiers in rating a recipe according to different performance criteria. This can be economically beneficial to restaurants by helping them improve their recipes and attract more customers. It can also be used at a more personal level to improve household recipes, and by restaurant customers to decide which restaurant is better for a specific dish based on how good its recipe is.
Title: An Intelligent Approach for Food Recipe Rating Prediction Using Machine Learning
Pub Date : 2021-04-06DOI: 10.1109/CAIDA51941.2021.9425129
Ali Tariq Nagi, Mazhar Javed Awan, R. Javed, N. Ayesha
Diabetic retinopathy (DR) is a disease of the human eye that causes retinal damage in diabetic patients and can ultimately lead to complete blindness. Machine learning techniques play an important role in predicting DR early, avoiding intensive manual labor. In this paper we use a novel technique, the Two-Stage Classifier, an ensemble technique that combines various machine learning algorithms for classification, and apply it to predict DR. The problem lies in the fact that detecting this disease is time-consuming, yet early detection is essential to avoid complete blindness. We apply machine learning algorithms to determine the existence of DR and compare the accuracies of the applied techniques. The Two-Stage Classifier turns out to be better not only in terms of parallelism but also in terms of accuracy.
Title: A Comparison of Two-Stage Classifier Algorithm with Ensemble Techniques On Detection of Diabetic Retinopathy
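The abstract does not specify how the two stages are wired, so the following is one common reading, offered only as an assumption: stage one runs several base classifiers in parallel (which would explain the parallelism claim), and stage two combines their votes. The threshold-rule base classifiers and the data are placeholders:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def make_threshold_clf(feature, threshold):
    """Trivial stand-in base classifier: 1 if one feature exceeds a threshold."""
    return lambda X: (X[:, feature] > threshold).astype(int)

def two_stage_predict(base_clfs, X):
    with ThreadPoolExecutor() as pool:               # stage 1: base models in parallel
        votes = list(pool.map(lambda c: c(X), base_clfs))
    votes = np.stack(votes)                          # shape: (n_models, n_samples)
    return (votes.mean(axis=0) >= 0.5).astype(int)   # stage 2: majority vote

X = np.array([[0.9, 0.2, 0.8],
              [0.1, 0.3, 0.2]])
clfs = [make_threshold_clf(0, 0.5),
        make_threshold_clf(1, 0.25),
        make_threshold_clf(2, 0.5)]
print(two_stage_predict(clfs, X))  # row 1: votes 1,0,1 -> 1; row 2: votes 0,1,0 -> 0
```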
Pub Date : 2021-04-06DOI: 10.1109/CAIDA51941.2021.9425308
Md. Golam Rabby Shuvo, Niger Sultana, Limon Motin, Mohammad Rezaul Islam
Electricity is a form of energy used around the world to power everything in our daily lives. The value of energy and its renewable nature make it a vital topic. Accurate approximation of the hourly energy produced by a power plant is crucial for cost-effective energy generation. In recent times, Machine Learning (ML) algorithms have been widely utilized in predictive analysis of power plants' estimated energy production. A Combined Cycle Power Plant (CCPP) is a distinctive electricity-generating station in which energy is produced by two types of turbines (gas and steam) merged into a single cycle. This study explores and evaluates four ML regression techniques for forecasting the total hourly energy output of a CCPP. Our dataset was collected from Rural Power Company Limited (RPCL), Mymensingh, Bangladesh, and contains 24 input variables, 8768 observations, and net hourly total energy (MW) as the target variable. Performance evaluation of the four regression techniques (Linear, Lasso, Decision Tree, and Random Forest) shows that Linear Regression performs best on our dataset, with an R2 of 0.99910896 (99.91%).
Title: Prediction of Hourly Total Energy in Combined Cycle Power Plant Using Machine Learning Techniques
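As a self-contained sketch of the winning model (on synthetic data, since the RPCL records are not public), ordinary least squares fitted to a near-linear target reproduces the kind of R2 computation reported above:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                 # stand-in for the 24 plant inputs
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + 3.0 + rng.normal(scale=0.01, size=200)  # near-linear target

Xb = np.hstack([X, np.ones((200, 1))])        # append bias column
w = np.linalg.lstsq(Xb, y, rcond=None)[0]     # least-squares fit

pred = Xb @ w
ss_res = np.sum((y - pred) ** 2)              # residual sum of squares
ss_tot = np.sum((y - y.mean()) ** 2)          # total sum of squares
r2 = 1 - ss_res / ss_tot                      # coefficient of determination
print(r2)  # close to 1 for a near-linear relationship
```

An R2 this close to 1 on real plant data, as the paper reports, suggests the target is almost a linear function of the inputs, which is why plain Linear Regression beats the tree-based models here.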
Pub Date : 2021-04-06DOI: 10.1109/CAIDA51941.2021.9425161
S. A. E. Hassan, Shahzad Akbar, Sahar Gull, A. Rehman, Hind Alaska
Central Serous Retinopathy (CSR), also known as Central Serous Chorioretinopathy (CSC), occurs due to the accumulation of fluid behind the retinal surface. The retina is composed of thin tissue that captures light and transforms it into visual signals for recognition in the brain. This critical organ may be damaged, causing vision loss and blindness. Early-stage detection of the syndrome may therefore prevent complete loss of vision and, in some cases, allow recovery to the normal state. Accurate and fast detection of CSR saves the macula from severe damage and provides a basis for detecting other retinal pathologies. Optical Coherence Tomography (OCT) images have been used to detect CSR, but the design of a computationally efficient and accurate system remains a challenge. This research develops a framework for accurate and automatic CSR detection from OCT images using pre-trained deep convolutional neural networks. Preprocessing of the OCT images enhances and filters them, improving contrast and eliminating noise, respectively. The pre-trained network architectures AlexNet, ResNet-18, and GoogLeNet have been employed for classification; preprocessing followed by the classification scheme enhances the foreground objects in the OCT images. The performance of the deep CNNs has been compared through statistical evaluation, which shows 99.64% classification accuracy for AlexNet on the Optical Coherence Tomography Image Database (OCTID). This demonstrates the suitability of the proposed framework in clinical applications to help doctors and clinicians diagnose retinal diseases.
Title: Deep Learning-Based Automatic Detection of Central Serous Retinopathy using Optical Coherence Tomographic Images
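The exact enhancement and filtering steps are not specified, so as an assumed stand-in, contrast stretching plus a 3x3 median filter illustrates the "improve contrast, eliminate noise" preprocessing the abstract describes (OCT images are typically speckle-noisy):

```python
import numpy as np

def contrast_stretch(img):
    """Rescale intensities to [0, 1] to improve contrast."""
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo + 1e-12)

def median3x3(img):
    """3x3 median filter with edge-replicated borders, to suppress noise."""
    padded = np.pad(img, 1, mode="edge")
    stack = [padded[r:r + img.shape[0], c:c + img.shape[1]]
             for r in range(3) for c in range(3)]
    return np.median(np.stack(stack), axis=0)

noisy = np.array([[10., 10., 200.],
                  [10., 10., 10.],
                  [10., 10., 10.]])
clean = median3x3(contrast_stretch(noisy))
print(clean)  # the isolated bright outlier is suppressed
```

The cleaned image would then be resized to the pre-trained network's input size before classification.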
Pub Date : 2021-04-06DOI: 10.1109/CAIDA51941.2021.9425269
A. Rehman, Muhammad Kashif, I. Abunadi, N. Ayesha
Lung cancer is one of the leading causes of death among humans globally, with approximately five million deaths annually, a mortality rate even higher than that of breast cancer and prostate cancer combined. However, early detection and diagnosis can improve the survival rate. Different modalities are used for lung cancer detection and diagnosis, and Computed Tomography (CT) scan images provide the most significant information on lung lesions. This research's main contribution is the detection and classification of different kinds of lung cancer, such as adenocarcinoma, large cell carcinoma, and squamous cell carcinoma. A novel lung cancer detection technique has been developed using machine learning. The technique comprises feature extraction and fusion using patch-based LBP (Local Binary Pattern) and the Discrete Cosine Transform (DCT). Support vector machine (SVM) and k-nearest neighbors (KNN) classifiers were evaluated on a chest CT scan image dataset for texture feature classification. The proposed technique achieves accuracies of 93% and 91% for the support vector machine and k-nearest neighbors, respectively, better than state-of-the-art techniques.
Title: Lung Cancer Detection and Classification from Chest CT Scans Using Machine Learning Techniques
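A minimal version of the LBP half of the feature extraction can be sketched as follows. The 8-neighbor encoding and normalized histogram are standard LBP; the paper's patch-based variant and its fusion with DCT coefficients are not specified, so this is only illustrative:

```python
import numpy as np

def lbp_image(img):
    """8-neighbor LBP codes for all interior pixels."""
    img = np.asarray(img, float)
    c = img[1:-1, 1:-1]                    # centers (skip 1-pixel border)
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros(c.shape, dtype=np.uint8)
    for bit, (dr, dc) in enumerate(shifts):
        neigh = img[1 + dr:img.shape[0] - 1 + dr,
                    1 + dc:img.shape[1] - 1 + dc]
        code |= ((neigh >= c).astype(np.uint8) << bit)  # set bit if neighbor >= center
    return code

def lbp_histogram(img, bins=256):
    """Normalized histogram of LBP codes: a texture descriptor for a patch."""
    hist, _ = np.histogram(lbp_image(img), bins=bins, range=(0, bins))
    return hist / hist.sum()
```

In a patch-based scheme, one such histogram per image patch would be concatenated (and fused with DCT features) before training the SVM or KNN classifier.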
Pub Date : 2021-04-06DOI: 10.1109/CAIDA51941.2021.9425195
Hilalah F. Al-Turkistani, Alaa AlFaadhel
Cyber resiliency in cloud computing is one of the most important capabilities of an enterprise network, providing the continued ability to withstand adverse conditions and recover quickly from them. This capability can be measured through cybersecurity risk assessment techniques. However, studies of cybersecurity risk management in cloud computing resiliency approaches are deficient. This paper proposes a resilient cloud cybersecurity risk assessment, tailored specifically to Dropbox, with two methods: a technical solution motivated by a cybersecurity risk assessment of cloud services, and a personnel-targeted solution guided by a cybersecurity survey among employees to identify the knowledge that qualifies them to withstand a cyberattack. The proposed work attempts to identify cloud vulnerabilities, assess threats, and detect high-risk components, and finally proposes appropriate safeguards, such as failure prediction and removal, redundancy, or load-balancing techniques, for quick recovery and return to the pre-attack state if a failure happens.
Title: Cyber Resiliency in the Context of Cloud Computing Through Cyber Risk Assessment