With enormous volumes of data being generated at an exponential rate, there is a pressing need to make concise and relevant information available to the masses. Traditionally, lengthy textual content is summarized manually by linguists or domain experts, a process that is highly time consuming and prone to bias. This creates a strong case for Automatic Text Summarization. Extractive Summarization is one such approach, in which the salient information or excerpts are identified in a source document and extracted to form a concise summary. TextRank is an unsupervised extractive summarization technique that applies graph-based ranking to the extracted texts in order to find the most relevant excerpts. In this paper, the prospects of a domain-agnostic algorithm like TextRank are explored across various domains of news article summarization, examining its efficiency on domain-specific tasks and drawing insights from the results. NLP-based pre-processing and Static Word Embeddings are leveraged with semantic cosine similarity for efficient ranking of the textual data, and performance is evaluated on several domains of the BBC News Articles Summarization dataset using ROUGE metrics, achieving commendable ROUGE scores.
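The graph-based ranking the abstract describes can be sketched in plain Python. This is an illustrative sketch, not the authors' implementation: bag-of-words count vectors stand in for the static word embeddings, and the damping factor 0.85 and iteration count are conventional PageRank-style defaults.

```python
import math
import re

def sentence_vectors(sentences):
    # Bag-of-words count vectors stand in here for the paper's
    # static word embeddings (an assumption for this sketch).
    vocab = sorted({w for s in sentences for w in re.findall(r"\w+", s.lower())})
    index = {w: i for i, w in enumerate(vocab)}
    vecs = []
    for s in sentences:
        v = [0.0] * len(vocab)
        for w in re.findall(r"\w+", s.lower()):
            v[index[w]] += 1.0
        vecs.append(v)
    return vecs

def cosine(a, b):
    # Semantic cosine similarity between two sentence vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def textrank(sentences, d=0.85, iters=50):
    # Build a weighted sentence graph and rank nodes by power iteration.
    vecs = sentence_vectors(sentences)
    n = len(sentences)
    sim = [[cosine(vecs[i], vecs[j]) if i != j else 0.0 for j in range(n)]
           for i in range(n)]
    out_weight = [sum(row) for row in sim]
    scores = [1.0] * n
    for _ in range(iters):
        scores = [(1 - d) + d * sum(sim[j][i] * scores[j] / out_weight[j]
                                    for j in range(n) if out_weight[j])
                  for i in range(n)]
    return scores

def summarize(sentences, k=2):
    # Keep the k best-ranked sentences, in their original order.
    scores = textrank(sentences)
    top = sorted(sorted(range(len(sentences)), key=lambda i: -scores[i])[:k])
    return [sentences[i] for i in top]
```

Swapping the count vectors for averaged pre-trained embeddings (e.g. GloVe or word2vec) would bring the sketch closer to the approach the abstract outlines.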
Title: Graph Based Extractive News Articles Summarization Approach leveraging Static Word Embeddings. Authors: Utpal Barman, Vishal Barman, Mustafizur Rahman, Nawaz Khan Choudhury. DOI: 10.1109/ComPE53109.2021.9752056. Published in the 2021 International Conference on Computational Performance Evaluation (ComPE), December 2021.
Pub Date: 2021-12-01 | DOI: 10.1109/ComPE53109.2021.9752429
Pawan Jindal, V. Khemchandani, Sushil Chandra, Vishal Pandey
Creating environments and dangerous scenarios for physical training is very difficult and has a very high cost in terms of money and manpower. Virtual Reality (VR) is a technology that simulates real-life experiences, allowing people to don their own cyber avatars in a virtual world and interact with it as they would in the real world. VR technology is applied in the defence paradigm to make trainees and officers better at using equipment, navigating modes of transport, gaining experience of potential combat situations, medical training, and more. One advantage of VR training in defence is that it immerses users in a virtual yet safe world. Our immersive system provides an intuitive way for users to interact with the VR or AR world by physically moving around the real world and aiming freely with tangible objects. This encourages physical interaction between the players as they compete or collaborate with other players. We present a new immersive multiplayer simulation game developed for defence training. We developed three game environments, Combat Situation, Bomb Defusal, and Hostage Rescue, and players can review their performance based on previously played games.
Title: A Multiplayer Shooting Game Based Simulation For Defence Training
Pub Date: 2021-12-01 | DOI: 10.1109/ComPE53109.2021.9752357
S. K. Verma, Aman Gupta, Ankita Jyoti
Weather forecasting has been a difficult problem for researchers for many years and continues to be so today. The development of new and fast algorithms aids researchers in the pursuit of better weather forecast approximations. The problem attracts researchers because of the changing behaviour of the environment, the rise in the earth's temperature, and drastic changes in ecosystems. Almost every part of the world is currently experiencing a slew of natural disasters, including storms on land and sea that destroy infrastructure and take many lives. Machine learning and deep learning algorithms have given researchers and the general public hope of building fast applications that predict weather alarms in real time. The combination of deep learning with the large amount of available weather data motivates researchers to investigate the hidden patterns of weather for forecasting. In this paper, the proposed model is used to analyze intermediate variables as well as variables associated with weather forecasting. Long Short-Term Memory (LSTM) accuracy is affected by the number of layers in the model, both in the stacked-layer LSTM and in the Bidirectional LSTM. Because of the inclusion of an intermediate signal in the memory block, the methods proposed in this paper are an extended version of the LSTM. The premise is that two strongly connected patterns in the input dataset can rectify the input patterns and make it easier for the model to search for and recognize patterns from the trained dataset by building a stronger connection between them. In every trial, it is necessary to learn a long-lasting model and to recognize the weather pattern. The model makes use of predicted information such as visibility, as well as intermediate information such as temperature, pressure, humidity, and saturation. With the bidirectional LSTM, the highest accuracy of 0.9355 and the lowest root mean square error of 0.0628 were achieved.
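The gating mechanism behind the stacked and bidirectional variants can be illustrated with a toy, scalar-state LSTM cell. This is a sketch only: real layers use weight matrices and vector states, and the paper's intermediate-variable signal is not modelled here.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, W):
    # One LSTM time step with scalar input and state. W maps each gate
    # name to (input weight, recurrent weight, bias).
    i = sigmoid(W["i"][0] * x + W["i"][1] * h_prev + W["i"][2])   # input gate
    f = sigmoid(W["f"][0] * x + W["f"][1] * h_prev + W["f"][2])   # forget gate
    o = sigmoid(W["o"][0] * x + W["o"][1] * h_prev + W["o"][2])   # output gate
    g = math.tanh(W["g"][0] * x + W["g"][1] * h_prev + W["g"][2]) # candidate
    c = f * c_prev + i * g       # memory cell carries long-term state
    h = o * math.tanh(c)         # hidden state exposed to the next layer
    return h, c

def bidirectional_pass(seq, W):
    # Run the cell forward and backward over the sequence and pair the
    # two hidden states per time step, as a bidirectional layer does.
    def run(xs):
        h = c = 0.0
        hs = []
        for x in xs:
            h, c = lstm_step(x, h, c, W)
            hs.append(h)
        return hs
    fwd = run(seq)
    bwd = list(reversed(run(list(reversed(seq)))))
    return list(zip(fwd, bwd))
```

Stacking layers amounts to feeding the hidden-state sequence of one such pass as the input sequence of the next.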
Title: Stack Layer & Bidirectional Layer Long Short-Term Memory (LSTM) Time Series Model with Intermediate Variable for Weather Prediction
Pub Date: 2021-12-01 | DOI: 10.1109/ComPE53109.2021.9752239
N. Mohan, R. Murugan, Tripti Goel, Parthapratim Roy
Diabetic retinopathy (DR) is a chronic disease and a leading cause of blindness. One of the primary symptoms of DR is exudates (EX), a condition in which proteins, lipids, and water leak into retinal areas and cause vision impairment. Based on their appearance and leakage consistency, EX are of two types: hard EX and soft EX. Early intervention in DR diminishes the likelihood of vision loss, so an automated technique is required. In this paper, we present a novel U-Net model that detects both soft and hard EX. The proposed model is implemented in two stages: the first is the preprocessing of fundus images, and the second is a network designed with custom residual blocks. The model is tested on two publicly available benchmark databases, IDRiD and e-Ophtha, and the results achieved with the proposed approach are better than those of other approaches.
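Exudate detectors of this kind are commonly scored by pixel overlap between the predicted mask and expert annotations. A minimal Dice-coefficient helper (an assumed evaluation metric for illustration; the abstract does not state which metric was used) looks like:

```python
def dice_coefficient(pred, truth):
    # Dice overlap between two flattened binary masks (lists of 0/1).
    assert len(pred) == len(truth)
    inter = sum(p * t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    # Two empty masks agree perfectly by convention.
    return 2.0 * inter / total if total else 1.0
```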
Title: Exudate Detection with Improved U-Net Using Fundus Images
Pub Date: 2021-12-01 | DOI: 10.1109/ComPE53109.2021.9752099
S. Khanday, Hoor Fatima, N. Rakesh
During the Covid-19 pandemic, the world witnessed a rise in cyber-attacks, especially during the lockdowns announced by countries throughout the world, when almost every aspect of life moved from offline to online routines. Protecting and securing information resources during the pandemic has been a top priority for the modern computing world, with databases, banking, e-commerce, mailing services, and the like being the most eye-catching targets for attackers. Apart from cryptography, machine learning and deep learning can offer an enormous amount of help in testing, training, and extracting otherwise negligible information from datasets. Deep learning and machine learning provide many methods and models for detecting and classifying the different variants of cyber-attacks found in datasets. Some of the most common deep learning methods inspired by neural networks are Recurrent Neural Networks, Convolutional Neural Networks, Deep Belief Networks, Deep Boltzmann Machines, Autoencoders, and Stacked Autoencoders. Taking machine learning algorithms into account as well, a vast variety of algorithms are available for classification and regression. This survey covers some of the most important deep learning and machine learning architectures used for cyber-security that can offer protection against cyber-attacks. The paper surveys various categories of cyber-attacks with a timeline of attacks that took place in India and in other countries. The final section of the report discusses what deep learning methods can offer for developing and improving security policies and for examining the vulnerabilities of an information system.
Title: Deep learning offering resilience from trending cyber-attacks, a review
Pub Date: 2021-12-01 | DOI: 10.1109/ComPE53109.2021.9751799
Tarun Agrawal, P. Choudhary
COVID-19 was previously identified as 2019-nCoV; the virus was later renamed severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) by the International Committee on Taxonomy of Viruses (ICTV). It was first discovered in Wuhan, in China's Hubei Province, and has since spread all over the world. The scientific community is working to develop COVID-19 detection technologies that are both quick and accurate. Chest x-ray imaging can aid in the early diagnosis of COVID-19 patients: in COVID-19 individuals, chest x-rays can indicate a variety of lung abnormalities, including lung consolidation, ground-glass opacity, and others. The COVID-19 biomarkers, however, must be identified by qualified and experienced radiologists, and each report must be inspected individually, which is a time-consuming procedure. The medical infrastructure is currently overburdened by the huge volume of patients. In this study, we propose automatic COVID-19 identification in chest x-rays using a deep learning technique. The dataset for the experiments includes COVID-19, pneumonia, and healthy x-rays. The proposed model achieved an average accuracy and sensitivity of 97 percent.
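For a three-class screening task like this one, per-class sensitivity falls out of the predictions directly. A minimal helper (illustrative only; the class names below are assumed from the abstract, not taken from the paper's code):

```python
def per_class_sensitivity(y_true, y_pred, classes):
    # Sensitivity (recall) per class: true positives over actual positives.
    result = {}
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        positives = sum(1 for t in y_true if t == c)
        result[c] = tp / positives if positives else 0.0
    return result
```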
Title: Automated COVID-19 detection using Deep Convolutional Neural Network and Chest X-ray Images
Pub Date: 2021-12-01 | DOI: 10.1109/ComPE53109.2021.9752133
Abhishek Singh, A. Payal
A low-cost Obstacle Detection and Collision Avoidance (ODCA) system inspired by Coulomb's inverse-square law has been proposed, deployed, and tested on a self-assembled multi-rotor system. The algorithm is designed to be inexpensive in terms of spatio-temporal complexity, cross-platform, and able to run on low-cost, easily available hardware. It aims to protect the drone from entering complex situations in both manual and autonomous flight modes. The ODCA system hardware is designed to be easily integrable with various flight controllers. The hardware and communication interfacing among the various modules required by the ODCA system are briefly explained. Since the proposed ODCA system is tested on a self-assembled drone, a short description of the drone hardware, assembly, and communication mechanism is also provided. Furthermore, the ODCA algorithm that processes sensor data in various stages, together with the resulting actions, is explained. Finally, the system is tested and evaluated in a multi-obstacle scenario through hardware-in-the-loop (HIL) simulation, and the findings are presented.
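The Coulomb's-law analogy can be pictured as each detected obstacle exerting a repulsive "charge" on the drone whose magnitude falls off with the square of the distance. The 2-D sketch below illustrates the idea; the gain, sensing range, and geometry are invented for illustration and are not the paper's parameters.

```python
import math

def repulsive_command(drone_xy, obstacles, k=1.0, sense_range=5.0):
    # Sum an inverse-square repulsion vector from every obstacle inside
    # the sensing range, pushing the drone away along the line joining
    # them -- the Coulomb's-law analogy in 2-D.
    fx = fy = 0.0
    for ox, oy in obstacles:
        dx = drone_xy[0] - ox
        dy = drone_xy[1] - oy
        r = math.hypot(dx, dy)
        if 0.0 < r < sense_range:
            f = k / (r * r)        # |F| proportional to 1/r^2
            fx += f * dx / r       # unit vector away from the obstacle
            fy += f * dy / r
    return fx, fy
```

The resulting vector can be blended with the pilot's or autopilot's velocity command so that nearby obstacles dominate while distant ones contribute nothing.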
Title: Development of a low-cost Collision Avoidance System based on Coulomb's inverse-square law for Multi-rotor Drones (UAVs)
Pub Date: 2021-12-01 | DOI: 10.1109/ComPE53109.2021.9751877
M. Kumar, S. Das
Two specific conditions, maximum torque per inverter ampere (MTPIA) and unity primary power factor (UPPF), are considered in the present work for a comparative performance analysis of the speed control of a brushless doubly-fed reluctance generator (BDFRG) using primary field-oriented control (PFOC). The study is based on the active power, reactive power, and power factor of both stator windings of the BDFRG in the super-synchronous, synchronous, and sub-synchronous speed zones. It also assesses the minimum inverter rating required under both conditions for successful operation. The relevant studies are carried out in MATLAB/Simulink. The prima facie objective of the present work is to affirm the candidature of the BDFRG for wind power generation.
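The quantities the study compares per winding (active power, reactive power, power factor) all follow from the complex power of that winding. A minimal phasor sketch, with example values invented for illustration:

```python
def winding_power(v_phasor, i_phasor):
    # Complex power S = V * conj(I); returns active power P (W),
    # reactive power Q (var), and power factor P / |S|.
    s = v_phasor * i_phasor.conjugate()
    p, q = s.real, s.imag
    mag = abs(s)
    pf = p / mag if mag else 0.0
    return p, q, pf
```

Under the UPPF condition the primary-winding result would ideally show Q near zero and a power factor near 1.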
Title: Speed Control of Brushless Doubly-fed Reluctance Generator under MTPIA and UPPF Conditions for Wind Power Application
Pub Date: 2021-12-01 | DOI: 10.1109/ComPE53109.2021.9752278
B. B, Jeyasakthi R, J. S., Rishwana M, Swathilakshmi P R K, Reshma K K
Deep learning is important in the medical profession and has a wide range of applications, including diagnosis and research. In imaging technology, classifying medical images automatically is onerous. The proposed work addresses ABO blood group identification using a novel deep learning approach to advance biomedical automation. An ABO blood group dataset is developed, and the blood group is classified automatically using a Convolutional Neural Network (CNN), which is capable of extracting and learning features from a medical image dataset. The proposed innovative CNN framework is thus used in the medical field to classify human blood classes. Our dataset is used to train the model and test samples in order to identify the blood group in the shortest possible time with 96.7 percent accuracy. The results of the proposed model are compared to those of existing CNN models such as AlexNet and LeNet-5, and the findings show that the proposed method is the most appropriate for classifying human blood groups in medical applications.
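The feature extraction a CNN performs on such image datasets rests on convolution followed by a non-linearity. A minimal pure-Python sketch of that core operation (illustrative only, not the paper's network):

```python
def conv2d_valid(image, kernel):
    # 'Valid' 2-D convolution (strictly cross-correlation, as in most
    # CNN libraries) over nested lists: slide the kernel, sum products.
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(ih - kh + 1):
        row = []
        for j in range(iw - kw + 1):
            row.append(sum(image[i + a][j + b] * kernel[a][b]
                           for a in range(kh) for b in range(kw)))
        out.append(row)
    return out

def relu(feature_map):
    # Non-linearity applied elementwise to the feature map.
    return [[max(0.0, v) for v in row] for row in feature_map]
```

Architectures such as AlexNet and LeNet-5 stack many such convolution-plus-activation layers (with learned kernels) before the final classification layers.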
Title: A novel approach of classifying ABO blood group image dataset using deep learning algorithm