Optimization of deep neural networks for multiclassification of dental X-rays using transfer learning
Pub Date: 2023-11-09 | DOI: 10.1080/21681163.2023.2272976
G. Divya Deepak, Subraya Krishna Bhat

ABSTRACT: In this work, segmented dental X-ray images obtained by dentists are classified into an ideal/minimally compromised edentulous area (no immediate clinical treatment needed), a partially/moderately compromised edentulous area (requiring bridges or a cast partial denture), and a substantially compromised edentulous area (requiring a complete denture prosthesis). A dataset of 116 dental X-ray images is used, of which 70% is used for training the convolutional neural network (CNN) while 30% is used for testing and validation. Three pretrained deep neural networks (DNNs; SqueezeNet, ResNet-50 and EfficientNet-b0) were implemented using the Deep Network Designer module of Matlab 2022. Each of these CNNs was trained, tested, and optimised for the best possible accuracy in identifying dental images that require appropriate clinical treatment. The highest classification accuracy, 98%, was obtained with EfficientNet-b0. This research enables automated identification and labelling of edentulous areas that require clinical treatment. In addition, the performance metrics accuracy, recall, precision, and F1 score were calculated for the best DNN from its confusion matrix.
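The metrics named in the abstract (accuracy, recall, precision, F1 score) all follow mechanically from a multiclass confusion matrix. A minimal sketch of that calculation, with an illustrative matrix rather than the paper's actual results:

```python
# Hypothetical sketch: per-class metrics derived from a 3-class
# confusion matrix. The counts below are made up for illustration,
# not taken from the paper.
import numpy as np

def metrics_from_confusion(cm):
    """cm[i, j] = number of samples with true class i predicted as class j."""
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)
    precision = tp / cm.sum(axis=0)   # per predicted class (column-wise)
    recall = tp / cm.sum(axis=1)      # per true class (row-wise)
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = tp.sum() / cm.sum()
    return accuracy, precision, recall, f1

# Rows/columns: ideal, partially compromised, substantially compromised
cm = [[10, 1, 0],
      [0, 12, 1],
      [0, 0, 11]]
acc, prec, rec, f1 = metrics_from_confusion(cm)
```

Precision is computed column-wise and recall row-wise, which is why the two can differ per class even when overall accuracy is high.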
A prototype smartphone jaw tracking application to quantitatively model tooth contact
Pub Date: 2023-11-08 | DOI: 10.1080/21681163.2023.2264402
Kieran Armstrong, Carolyn Kincade, Martin Osswald, Jana Rieger, Daniel Aalto

ABSTRACT: This study utilised a prototype system consisting of a person-specific 3D-printed jaw tracking harness, interfacing with the maxillary and mandibular teeth, and custom jaw tracking software implemented on a smartphone. The prototype demonstrated a static position accuracy of better than 1 mm and 5°. It successfully tracked 30 cycles of a protrusive excursion, a left lateral excursion, and 40 mm of jaw opening on a semi-adjustable articulator. The standard errors of the tracking accuracy were 0.1377 mm, 0.0449 mm, and 0.9196 mm, with corresponding r² values of 0.98, 1.00, and 1.00, respectively. Finally, occlusal contacts of left, right, and protrusive excursions were tracked with the prototype system, and their trajectories were used to demonstrate kinematic modelling (without occlusal forces) in a biomechanical simulation tool.

KEYWORDS: smartphone; dental occlusion; computer vision; jaw tracking; biomechanical simulation

Acknowledgments: The authors would like to thank the Institute for Reconstructive Science in Medicine at the Misericordia Community Hospital in Edmonton, Alberta, for their help with the design and 3D printing of the tracking harnesses.

Disclosure statement: No potential conflict of interest was reported by the author(s).

Notes on contributors: Kieran Armstrong holds a BEng in biomedical engineering from the University of Victoria and an MSc in rehabilitation science from the University of Alberta. His MSc research focused on computer modelling for dental prosthetic biomechanics in head and neck cancer treatment. Working in the wearable biometric sensing industry, his focus is on exploring how optical biometric sensing methods, such as photoplethysmography, can be used to make meaningful connections to biological signals and help people monitor their health and fitness. Carolyn Kincade is a seasoned healthcare professional with a strong background in quality management and patient care. As a traditionally trained dental technologist, she has enjoyed the transition from analogue casework to digital. She is currently furthering her studies with a Master of Technology Management through Memorial University of Newfoundland, building on her Diploma in Dental Technology and Bachelor of Technology from the Northern Alberta Institute of Technology. Carolyn also engages with the regulatory community in many ways, having served in various committee roles with the College of Dental Technologists of Alberta, and continues to make a meaningful impact in the healthcare field. Jana Rieger, PhD, is a global leader in functional outcomes assessment related to head and neck disorders. Over her 20-year career in this field, Jana has held roles as a professor, clinician, researcher, and most recently, entrepreneur. Jana and her team have developed, tested, and comme…
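The two validation statistics the abstract quotes (a standard error of the tracking accuracy and an r² against a reference motion) can be computed from paired tracked/reference trajectories along these lines. The study's exact statistical procedure is not spelled out here, so this is one plausible reading, and the trajectories below are synthetic:

```python
# Hypothetical sketch: standard error of the tracking error and r^2
# between a tracked trajectory and its reference. Synthetic data, for
# illustration only; not the study's measurements.
import numpy as np

def tracking_stats(reference, tracked):
    reference = np.asarray(reference, dtype=float)
    tracked = np.asarray(tracked, dtype=float)
    residuals = tracked - reference
    n = residuals.size
    se = residuals.std(ddof=1) / np.sqrt(n)        # standard error of the error
    ss_res = np.sum(residuals ** 2)
    ss_tot = np.sum((reference - reference.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot                     # coefficient of determination
    return se, r2

ref = np.linspace(0.0, 40.0, 200)                  # e.g. a 40 mm opening path
trk = ref + np.random.default_rng(0).normal(0.0, 0.1, ref.size)
se, r2 = tracking_stats(ref, trk)
```

With sub-millimetre noise on a 40 mm excursion, r² sits very close to 1, which matches the intuition behind the reported 0.98–1.00 values.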
Computer-aided diagnosis of Canine Hip Dysplasia using deep learning approach in a novel X-ray image dataset
Pub Date: 2023-11-02 | DOI: 10.1080/21681163.2023.2274947
Chaouki Boufenar, Tété Elom Mike Norbert Logovi, Djemai Samir, Imad Eddine Lassakeur

ABSTRACT: Canine Hip Dysplasia (CHD) is a congenital disease with a polygenic hereditary component, characterised by abnormal development of the coxo-femoral joint that results in poor coaptation of the femoral head in the acetabulum; the disease rapidly progresses to osteoarthritis of the hip. While dysplasia has been recognised in practically all canine breeds, it is far more common, and of greater concern, in medium and large breeds with rapid development. In predisposed breeds, particularly the German Shepherd, dysplasia is the object of screening based on systematic radiological control in some countries. Our collected dataset comprises 507 X-ray images of dogs affected by hip dysplasia (HD). These images were evaluated using six deep convolutional neural network (CNN) models. Among the top-performing models, VGG16 emerged as the leader, achieving accuracy, recall, and precision scores of 98.32%, 98.35%, and 98.44%, respectively. Leveraging deep learning (DL) techniques, this approach diagnoses CHD from hip X-rays with a high degree of accuracy.

KEYWORDS: Canine Hip Dysplasia diagnosis; deep learning; transfer learning; X-ray; image classification

Acknowledgement: Special thanks to Dr. Samir Djemai, a lecturer at the National Veterinary Institute of the University of Constantine, and the DHONDT NUNES veterinary clinic in France for providing the authors with dog hip radiographic images. This work would not have been possible without their invaluable assistance.

Disclosure statement: No potential conflict of interest was reported by the author(s).

Notes on contributors: Chaouki Boufenar is an Algerian scientist and researcher known for his work in artificial intelligence and data science. He is currently a lecturer in the Computer Science Department of the University of Algiers and received a Ph.D. in Computer Science from the University of Constantine 2 (Abdelhamid Mehri) in 2018. He has been affiliated with several academic and research institutions, including the University of Paris-Saclay (Laboratoire de Recherche en Informatique), the University of Constantine, and the University of Jijel in Algeria, and has published several research papers in computer science and artificial intelligence. His interests include data science, deep learning, and computer vision. Tété Elom Mike Norbert Logovi is a teaching assistant at Laval University, where he is also pursuing an M.Sc. in Computer Science with a thesis. He received his Bachelor's degree in Computer Systems from the Department of Computer Science at Benyoucef Benkhedda Algiers 1 University. His research areas include machine learning, deep learning, and computer vision. Djemai Samir is currently a lecturer and researcher at the Institute of Veterinary Sciences…
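The transfer-learning setup implied by the keywords (a pretrained CNN reused as a fixed feature extractor, with only a new classification head trained on the X-ray labels) can be sketched in miniature. Here random separable vectors stand in for real CNN activations, and a softmax-regression head is a deliberate simplification of fine-tuning a network such as VGG16:

```python
# Hypothetical sketch of the transfer-learning idea: freeze the
# pretrained backbone, train only a small linear (softmax) head on the
# dysplastic/normal labels. Random vectors stand in for CNN features.
import numpy as np

rng = np.random.default_rng(42)
n, d, classes = 200, 64, 2
X = rng.normal(size=(n, d))                  # stand-in "CNN features"
y = (X[:, 0] + X[:, 1] > 0).astype(int)      # stand-in labels (separable)

W = np.zeros((d, classes))
for _ in range(500):                         # gradient descent on the head only
    logits = X @ W
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)        # softmax probabilities
    onehot = np.eye(classes)[y]
    W -= 0.1 * X.T @ (p - onehot) / n        # cross-entropy gradient step

acc = (np.argmax(X @ W, axis=1) == y).mean()
```

Because the backbone is frozen, only the small head matrix `W` is learned, which is what lets such models train on a few hundred radiographs rather than millions of images.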
Decorrelation stretch for enhancing colour fundus photographs affected by cataracts
Pub Date: 2023-11-02 | DOI: 10.1080/21681163.2023.2274948
Preecha Vonghirandecha, Supaporn Kansomkeat, Patama Bhurayanontachai, Pannipa Sae-Ueng, Sathit Intajag

ABSTRACT: A method of enhancing colour fundus photographs is proposed to reduce the effect of cataracts. The enhancement method employs a decorrelation stretch (DS) technique in an LCC colour model. The initial technique embeds Hubbard's colouration model into the DS parameters to produce enhanced results in the standard form used by age-related macular degeneration (AMD) reading centres. The colouration model can be modified to enhance the colour of lesions observed in diabetic retinopathy (DR). The proposed algorithm reduced the effect of cataracts on fundus images and provided good results when the density of the cataract was less than grade 2. For images taken through cataracts of grade 2 or higher, some outputs could become unusable when the cataract was in line with the macula.

KEYWORDS: decorrelation stretch; retinal image enhancement; cataract

Disclosure statement: No potential conflict of interest was reported by the author(s).

Funding: This research received funding support from the NSRF via the Program Management Unit for Human Resources & Institutional Development, Research and Innovation [grant number B04G640070].

Notes on contributors: Preecha Vonghirandecha is an assistant professor at the Division of Computational Science, Faculty of Science, Prince of Songkla University, Songkhla, Thailand. His research interests include data science, image processing, and artificial intelligence applied to medical image analysis. He received a PhD in computer engineering from Prince of Songkla University, Thailand, in 2019. Supaporn Kansomkeat is an assistant professor at the same division; her research interests include software testing, test process improvement, and artificial intelligence applied to medical image analysis. She received a PhD in computer engineering from Chulalongkorn University, Thailand, in 2007. Patama Bhurayanontachai (MD) is an Associate Professor at the Department of Ophthalmology, Prince of Songkla University. She received a certificate of Clinical Fellowship in vitreoretinal surgery from Flinders Medical Centre, Australia, in 2005; her research interests involve the medical retina, surgical retina, and artificial intelligence applied to clinical diagnosis. Pannipa Sae-Ueng is a lecturer at the Division of Computational Science, Prince of Songkla University. She received her Ph.D. in Computer Science in 2022 from the Department of Mathematics and Informatics, Faculty of Sciences, University of Novi Sad, Serbia, and has recently focused on data science and artificial intelligence. Sathit Intajag received the M.Eng. and D.Eng. degrees in electrical engineering from King Mongkut's Institute of Tec…
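A decorrelation stretch is a standard operation: rotate the colour channels into their covariance eigenbasis, equalise the variances there, then rotate back, which removes inter-channel correlation and spreads the colours. The sketch below shows that core transform on flattened colour samples; the fixed `target_sigma` is an illustrative choice, not the paper's parameterisation (which embeds Hubbard's colouration model in these DS parameters):

```python
# Hypothetical sketch of a decorrelation stretch on 3-channel colour
# samples: whiten along the covariance eigenvectors, give every
# component the same target spread, rotate back, restore the mean.
import numpy as np

def decorrelation_stretch(pixels, target_sigma=50.0):
    """pixels: (N, 3) float array of colour samples (e.g. a flattened image)."""
    mean = pixels.mean(axis=0)
    centred = pixels - mean
    cov = np.cov(centred, rowvar=False)
    eigval, eigvec = np.linalg.eigh(cov)
    # per-component gain: 1/sqrt(eigenvalue) whitens, target_sigma rescales
    scale = target_sigma / np.sqrt(np.maximum(eigval, 1e-12))
    transform = eigvec @ np.diag(scale) @ eigvec.T
    return centred @ transform + mean

# Strongly correlated synthetic "channels" (as in a hazy, low-contrast image)
rng = np.random.default_rng(1)
base = rng.normal(size=(1000, 1))
img = np.hstack([base + 0.1 * rng.normal(size=(1000, 1)) for _ in range(3)])
out = decorrelation_stretch(img)
corr = np.corrcoef(out, rowvar=False)        # off-diagonals ~ 0 after DS
```

After the transform the sample covariance is exactly `target_sigma**2 * I`, i.e. the channels are uncorrelated with equal spread, which is what restores colour separation in a cataract-veiled photograph.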
Genetic algorithm for feature selection in mammograms for breast masses classification
Pub Date: 2023-10-19 | DOI: 10.1080/21681163.2023.2266031
G Vaira Suganthi, J Sutha, M Parvathy, N Muthamil Selvi

ABSTRACT: This paper introduces a Computer-Aided Detection (CAD) system for categorising breast masses in mammogram images from the DDSM database as benign, malignant, or normal. The CAD process involves pre-processing, segmentation, feature extraction, feature selection, and classification. Three feature selection methods are used: the Genetic Algorithm (GA), the t-test, and Particle Swarm Optimization (PSO). In the classification phase, three machine learning algorithms (kNN, multiSVM, and Naive Bayes) are explored. Evaluation metrics including accuracy, AUC, precision, recall, F1-score, MCC, Dice coefficient, and Jaccard coefficient are used for performance assessment, and training and testing accuracy are assessed for the three classes. The system is evaluated using nine algorithm combinations, producing the following AUC values: GA+kNN (0.93), GA+multiSVM (0.88), GA+NB (0.91), t-test+kNN (0.91), t-test+multiSVM (0.86), t-test+NB (0.89), PSO+kNN (0.89), PSO+multiSVM (0.85), and PSO+NB (0.86). The study shows that the GA and kNN combination outperforms the others.

KEYWORDS: mammograms; breast mass; feature selection; genetic algorithm

Disclosure statement: No potential conflict of interest was reported by the author(s).

Funding: No funding was used to complete this project.

Notes on contributors: Dr. Vaira Suganthi G has 20 years of teaching experience; her areas of interest include image processing and machine learning. Dr. Sutha J has more than 25 years of teaching experience; her areas of interest include image processing and machine learning. Dr. Parvathy M has more than 20 years of teaching experience; her areas of interest include image processing, data mining, and machine learning. Ms. Muthamil Selvi N has 1 year of teaching experience; her area of interest is machine learning.
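GA-based feature selection of the kind scored here with kNN follows a simple loop: encode each candidate feature subset as a binary mask, score it by classifier accuracy, and evolve the population with selection, crossover, and mutation. A compact sketch on synthetic data (the real system scores DDSM mammogram features; every name and parameter below is illustrative):

```python
# Hypothetical sketch of GA feature selection with a 1-NN fitness.
# Chromosome = boolean mask over features; fitness = held-out accuracy.
import numpy as np

rng = np.random.default_rng(0)
n, d = 120, 10
X = rng.normal(size=(n, d))
y = (X[:, 0] - X[:, 3] > 0).astype(int)      # only features 0 and 3 matter
Xtr, ytr, Xte, yte = X[:80], y[:80], X[80:], y[80:]

def knn_accuracy(mask):
    """Fitness: 1-nearest-neighbour accuracy using only selected features."""
    if not mask.any():
        return 0.0
    a, b = Xte[:, mask], Xtr[:, mask]
    dist = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    pred = ytr[dist.argmin(axis=1)]
    return float((pred == yte).mean())

pop = rng.integers(0, 2, size=(20, d)).astype(bool)
for _ in range(30):                          # generations
    fit = np.array([knn_accuracy(m) for m in pop])
    parents = pop[np.argsort(fit)[-10:]]     # truncation selection (elitist)
    cut = int(rng.integers(1, d))            # one-point crossover
    kids = np.concatenate([parents[:, :cut], parents[::-1, cut:]], axis=1)
    kids = kids ^ (rng.random(kids.shape) < 0.05)   # bit-flip mutation
    pop = np.vstack([parents, kids])

best = pop[np.argmax([knn_accuracy(m) for m in pop])]
```

Keeping the parents in each new population makes the best fitness monotone non-decreasing, a common (elitist) design choice for small populations like this.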
Pub Date : 2023-10-15DOI: 10.1080/21681163.2023.2266048
R. Bhuvaneswari, M. Diviya, M. Subramanian, Ramya Maranan, R Josphineleela
ABSTRACTOne of the common eye conditions affecting patients with diabetes is diabetic retinopathy (DR). It is characterised by the progressive impairment to the blood vessels with the increase of glucose level in the blood. The grading efficiency still finds challenging because of the existence of intra-class variations and imbalanced data distributions on the retinal images. Traditional machine learning techniques utilise hand-engineered features for classification of the affected retinal images. As convolutional neural network produces better image classification accuracy in many medical images, this work utilises the CNN-based feature extraction method. This feature has been used to build Gaussian mixture model (GMM) for each class that maps the CNN features to log-likelihood dimensional vector spaces. Since the Gaussian mixture model can be realised as a mixture of both parametric and nonparametric density models and has their flexibility in capturing different data distributions, probabilistic outputs, interpretability, efficient parameter estimation, and robustness to outliers, the proposed model aimed to obtain and provide a smooth approximation of the underlying distribution of features for training the model. Then these vector spaces are trained by the SVM classifier. Experimental results illustrate the efficacy of the proposed model with accuracy 86.3% and 89.1%, respectively.KEYWORDS: Retinal imagesCNN feature extractionsupport vector machineGaussian mixture model Disclosure statementNo potential conflict of interest was reported by the authors.Additional informationNotes on contributorsR. BhuvaneswariR. Bhuvaneswari (Member, IEEE) received the Ph.D. degree from Anna University. She is currently an Assistant Professor with the Amrita School of Computing, Amrita Vishwa Vidyapeetham, Chennai, India. She has 18 years of teaching experience in the field of engineering. 
She has authored over many publications on international journals and international conferences and co-authored a book on computer graphics. Her research interests include machine learning and deep learning for image processing applications.M. DiviyaM.Diviya received the M.E . degree from Anna University. Currently pursuing Ph.D in Vellore Institute of Technology, Chennai. She is currently an Assistant Professor with the Amrita School of Computing, Amrita Vishwa Vidyapeetham, Chennai, India. She has 7 years of teaching experience in the field of engineering. She has authored over many publications on international journals and international conferences and book chapters. Her research interests include machine learning and deep learning for image processing,text processing applications.M. SubramanianSubramanian M received a BE degree in Mechanical Engineering from 2008, and he obtained ME degrees in computer aided design and engineering design in 2011 and 2013, respectively. He is pursuing his PhD degree from Anna University, Chennai, Tamilnadu, India in the field of material
{"title":"Hybrid generative model for grading the severity of diabetic retinopathy images","authors":"R. Bhuvaneswari, M. Diviya, M. Subramanian, Ramya Maranan, R Josphineleela","doi":"10.1080/21681163.2023.2266048","DOIUrl":"https://doi.org/10.1080/21681163.2023.2266048","url":null,"abstract":"ABSTRACTOne of the common eye conditions affecting patients with diabetes is diabetic retinopathy (DR). It is characterised by the progressive impairment to the blood vessels with the increase of glucose level in the blood. The grading efficiency still finds challenging because of the existence of intra-class variations and imbalanced data distributions on the retinal images. Traditional machine learning techniques utilise hand-engineered features for classification of the affected retinal images. As convolutional neural network produces better image classification accuracy in many medical images, this work utilises the CNN-based feature extraction method. This feature has been used to build Gaussian mixture model (GMM) for each class that maps the CNN features to log-likelihood dimensional vector spaces. Since the Gaussian mixture model can be realised as a mixture of both parametric and nonparametric density models and has their flexibility in capturing different data distributions, probabilistic outputs, interpretability, efficient parameter estimation, and robustness to outliers, the proposed model aimed to obtain and provide a smooth approximation of the underlying distribution of features for training the model. Then these vector spaces are trained by the SVM classifier. Experimental results illustrate the efficacy of the proposed model with accuracy 86.3% and 89.1%, respectively.KEYWORDS: Retinal imagesCNN feature extractionsupport vector machineGaussian mixture model Disclosure statementNo potential conflict of interest was reported by the authors.Additional informationNotes on contributorsR. BhuvaneswariR. Bhuvaneswari (Member, IEEE) received the Ph.D. 
degree from Anna University. She is currently an Assistant Professor with the Amrita School of Computing, Amrita Vishwa Vidyapeetham, Chennai, India. She has 18 years of teaching experience in the field of engineering. She has authored over many publications on international journals and international conferences and co-authored a book on computer graphics. Her research interests include machine learning and deep learning for image processing applications.M. DiviyaM.Diviya received the M.E . degree from Anna University. Currently pursuing Ph.D in Vellore Institute of Technology, Chennai. She is currently an Assistant Professor with the Amrita School of Computing, Amrita Vishwa Vidyapeetham, Chennai, India. She has 7 years of teaching experience in the field of engineering. She has authored over many publications on international journals and international conferences and book chapters. Her research interests include machine learning and deep learning for image processing,text processing applications.M. SubramanianSubramanian M received a BE degree in Mechanical Engineering from 2008, and he obtained ME degrees in computer aided design and engineering design in 2011 and 2013, respectively. He is pursuing his PhD degree from Anna University, Chennai, Tamilnadu, India in the field of material","PeriodicalId":51800,"journal":{"name":"Computer Methods in Biomechanics and Biomedical Engineering-Imaging and Visualization","volume":"57 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136185046","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
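The GMM-over-CNN-features idea described in the abstract can be sketched end-to-end. This is a minimal illustration on synthetic 2-D stand-in features (not real CNN activations or retinal data); scikit-learn's `GaussianMixture` and `SVC`, the class means, component count, and dimensions are all assumptions made for the demo, not the authors' implementation.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Hypothetical stand-in for CNN features: 2-D points, one cluster per class.
X = np.vstack([rng.normal(c, 1.0, size=(50, 2)) for c in (0.0, 3.0, 6.0)])
y = np.repeat([0, 1, 2], 50)

# Fit one GMM per class on that class's features only.
gmms = [GaussianMixture(n_components=2, random_state=0).fit(X[y == k])
        for k in range(3)]

# Map each feature vector to a log-likelihood vector:
# one score per class-specific GMM.
ll = np.column_stack([g.score_samples(X) for g in gmms])

# Train an SVM on the log-likelihood representation.
clf = SVC().fit(ll, y)
print(round(clf.score(ll, y), 2))
```

The key point the sketch shows is the change of representation: the classifier never sees the raw features, only each sample's likelihood under each class model.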
Pub Date: 2023-10-11; DOI: 10.1080/21681163.2023.2258995
S. Nagendiran, S. Rohini, P. Jagadeesan, S. Shankari, R. Harini
ABSTRACT: Machine learning is a computational technique that learns automatically from experience and improves the precision of diabetes predictions. However, large, inclusive, high-quality datasets are needed to train machine learning networks. In this research work, attention-based approaches are designed for predicting diabetes in affected individuals. Initially, the collected diabetes data undergoes data cleaning to obtain noise-free data for the prediction task. Feature set 1 is extracted by an autoencoder, and feature set 2 by a 1-Dimensional Convolutional Neural Network (1D-CNN). These two feature sets are fused adaptively through weighted feature fusion, with the feature weights optimised by an Enhanced Path Finder Algorithm (EPFA) for more accurate results. The weighted fused features feed the diabetes prediction phase, in which the developed Attention-based Long Short Term Memory (ALSTM) network, with its architecture optimised by the improved PFA, predicts diabetes in affected individuals. In the result analysis, the designed method attains a 95% accuracy and a 92% precision rate. Finally, the proposed and existing prediction methods are compared to showcase the effective performance.

KEYWORDS: Diabetes prediction; autoencoder; 1-dimensional convolutional neural network; attention-based long short term memory component; enhanced path finder algorithm

Disclosure statement: No potential conflict of interest was reported by the author(s).
{"title":"DiabPrednet: development of attention-based long short-term memory-based diabetes prediction model with optimal weighted feature fusion mechanism","authors":"S. Nagendiran, S. Rohini, P. Jagadeesan, S. Shankari, R. Harini","doi":"10.1080/21681163.2023.2258995","DOIUrl":"https://doi.org/10.1080/21681163.2023.2258995","url":null,"abstract":"ABSTRACTMachine learning is a computer technique that automatically learns from experience and enhances the effectiveness of producing more precise diabetes predictions. However, large, inclusive, high-quality datasets are needed for training the machine learning networks. In this research work, attention-based approaches are designed for predicting diabetes in the affected individuals. Initially, the collected diabetes data is given into the data cleaning to get noise-free data for the prediction task. Here, extracted feature set 1 is extracted from the Auto encoder, and extracted feature set 2 is extracted from the 1-Dimensional Convolutional Neural Network (1D-CNN). These two sets of extracted features are fused in the adaptive way that is weighted feature fusion. Here, the weight of the selected features is optimized by an Enhanced Path Finder Algorithm (EPFA) to get more accurate results. The weighted fused features are employed for the diabetes prediction phase, in which the developed Attention-based Long Short Term Memory (ALSTM) with architecture optimization by improved PFA for predicting diabetes in affected one. Throughout the result analysis, the designed method attains 95% accuracy and 92%precision rate. 
Finally, the analysis is made by the proposed and existing prediction methods to showcase the effective performance.KEYWORDS: Diabetes predictionautoencoder1-dimensional convolutional neural networkattention-based long short term memory componentenhanced path finder algorithm Disclosure statementNo potential conflict of interest was reported by the author(s).","PeriodicalId":51800,"journal":{"name":"Computer Methods in Biomechanics and Biomedical Engineering-Imaging and Visualization","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136063908","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
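The weighted feature fusion step can be sketched as follows. This is a hedged illustration on invented tabular data: a coarse grid search over a single fusion weight stands in for the Enhanced Path Finder Algorithm, and `LogisticRegression` stands in for the ALSTM predictor; `f1`/`f2` are synthetic substitutes for the autoencoder and 1D-CNN feature sets.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 200
y = rng.integers(0, 2, n)
# Hypothetical stand-ins for the two extracted feature sets.
f1 = y[:, None] + rng.normal(0.0, 1.0, (n, 4))  # informative set
f2 = rng.normal(0.0, 1.0, (n, 4))               # mostly noise

def fuse(w):
    # Weighted feature fusion: scale each set by its weight, then concatenate.
    return np.hstack([w * f1, (1.0 - w) * f2])

# Stand-in for the metaheuristic weight search: score each candidate
# fusion weight by cross-validated accuracy and keep the best.
scores = {w: cross_val_score(LogisticRegression(), fuse(w), y, cv=3).mean()
          for w in (0.1, 0.3, 0.5, 0.7, 0.9)}
best_w = max(scores, key=scores.get)
print(best_w, round(scores[best_w], 2))
```

The design choice being illustrated: fusion weights are not fixed by hand but selected by an outer optimisation loop scored on downstream prediction quality.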
Pub Date: 2023-10-11; DOI: 10.1080/21681163.2023.2266008
Mina Esfandiarkhani, Amir Hossein Foruzan
ABSTRACT: Setting up a complex CNN requires a powerful platform, several hours of run-time, and a lot of training data. Here, we propose a generalised lightweight solution that exploits super-resolution and scalable vector graphics (SVG) and uses a small-scale UNet as the baseline framework to segment different organs in MR and CT data. We selected the UNet because many researchers use it as a baseline, modify it in their proposals, and perform an ablation study to show the effectiveness of the proposed modification. First, we downsample the input 2D CT slices by bicubic interpolation. Starting from the architecture of the conventional UNet, we reduce the size of the network's input and the number of layers and filters to construct a lightweight UNet. The network segments the low-resolution images and prepares the mask of an organ. Then, we upscale the boundary of the output mask with the Scalable Vector Graphics technique to obtain the final border. This design reduces the number of parameters and the run-time by a factor of two. We segmented several tissues to demonstrate that our method is stable across organ types. The experiments proved the feasibility of setting up complex deep neural networks on conventional platforms.

KEYWORDS: light-weight deep neural networks; scalable vector graphics; generalised segmentation frameworks; medical image segmentation

Disclosure statement: No potential conflict of interest was reported by the author(s).

Additional information. Notes on contributors: Mina Esfandiarkhani received a B.Sc. degree from the Azad University of Qazvin in 2013 and an M.Sc. degree in Biomedical Engineering from the Shahed University of Tehran in 2016. She is currently pursuing a Ph.D. degree in the Biomedical Engineering faculty of Shahed University. Her research interests include machine learning, computer vision, medical image processing, and artificial intelligence. Amir Hossein Foruzan received his B.S. from the Sharif University of Technology in Telecommunication Engineering. He received his M.S. and Ph.D. from Tehran University in Biomedical Engineering. Since 2011, he has been a faculty member of Shahed University. His research interest is medical image processing.
{"title":"Impact of a generalised SVG-based large-scale super-resolution algorithm on the design of light-weight medical image segmentation DNNs","authors":"Mina Esfandiarkhani, Amir Hossein Foruzan","doi":"10.1080/21681163.2023.2266008","DOIUrl":"https://doi.org/10.1080/21681163.2023.2266008","url":null,"abstract":"ABSTRACTSetting up a complex CNN requires a powerful platform, several hours of run-time, and a lot of data for training. Here, we propose a generalised lightweight solution that exploits super-resolution and scalable vector graphics and uses a small-scale UNet as the baseline framework to segment different organs in MR and CT data. We selected the UNet since many researchers use it as the baseline, modify it in their proposal, and perform an ablation study to show the effectiveness of the proposed modification. First, we downsample the input 2D CT slices by bicubic interpolation. Using the architecture of the conventional UNet, we reduce the size of the network’s input, and the number of layers and filters to construct a lightweight UNet. The network segments the low-resolution images and prepares the mask of an organ. Then, we upscale the boundary of the output mask by the Support Vector Graphics technique to obtain the final border. This design reduces the number of parameters and the run-time by a factor of two. We segmented several tissues to prove the stability of our method to the type of organ. The experiments proved the feasibility of setting up complex deep neural networks with conventional platforms.KEYWORDS: light-weight deep neural networksscalable vector graphicsgeneralised segmentation frameworksmedical image segmentation Disclosure statementNo potential conflict of interest was reported by the author(s).Additional informationNotes on contributorsMina EsfandiarkhaniMina Esfandiarkhani received a B.Sc. degree from the Azad University of Qazvin in 2013 and an M.Sc. 
degree in Biomedical Engineering from the Shahed University of Tehran in 2016. She is currently pursuing a Ph.D. degree in the Biomedical Engineering faculty of Shahed University. Her research interests include machine learning, computer vision, medical image processing, and artificial intelligence.Amir Hossein ForuzanAmir Hossein Foruzan received his B.S. from the Sharif University of Technology in Telecommunication Engineering. He received his M.S. and Ph.D. from Tehran University in Biomedical Engineering. Since 2011, he has been a faculty member of Shahed University. His research interest is medical image processing.","PeriodicalId":51800,"journal":{"name":"Computer Methods in Biomechanics and Biomedical Engineering-Imaging and Visualization","volume":"58 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136063901","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
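The downsample-segment-upscale pipeline can be sketched numerically. A minimal illustration, assuming a synthetic circular "organ": `scipy.ndimage.zoom` with `order=3` stands in for bicubic downsampling, a simple threshold stands in for the lightweight UNet, and pixel-level bilinear upscaling stands in for the paper's SVG contour step.

```python
import numpy as np
from scipy.ndimage import zoom

# Hypothetical 2-D "slice": a bright circular organ on a dark background.
yy, xx = np.mgrid[0:128, 0:128]
img = ((yy - 64) ** 2 + (xx - 64) ** 2 < 40 ** 2).astype(float)

# 1) Downsample the slice (cubic interpolation, order=3).
small = zoom(img, 0.25, order=3)

# 2) Stand-in for the lightweight UNet: threshold the low-resolution
#    slice to produce a coarse organ mask.
mask_low = small > 0.5

# 3) Upscale the mask back to full resolution; the paper uses
#    scalable-vector-graphics contours here rather than pixel zoom.
mask_full = zoom(mask_low.astype(float), 4.0, order=1) > 0.5

# Dice overlap between the recovered mask and the original organ.
inter = np.logical_and(mask_full, img > 0.5).sum()
dice = 2 * inter / (mask_full.sum() + (img > 0.5).sum())
print(round(dice, 3))
```

The sketch makes the trade-off concrete: the segmentation network only ever runs at one-quarter resolution, and accuracy at full resolution depends on how faithfully the boundary survives the final upscaling step.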
Pub Date: 2023-10-11; DOI: 10.1080/21681163.2023.2264408
Natalia de Sousa Freire, Pedro Paulo de Souza Leão, Leonardo Albuquerque Tiago, Alberto de Almeida Campos Gonçalves, Rafael Albuquerque Pinto, Eulanda Miranda dos Santos, Eduardo Souto
ABSTRACT: Machine learning methods have been extensively employed to predict COVID-19 from chest X-ray images in numerous studies. However, to be truly valuable, a machine learning model must exhibit robustness and provide reliable predictions for diverse populations beyond those represented in its training data. Unfortunately, the assessment of model generalisability is frequently overlooked in the current literature. In this study, we investigate the generalisability of three classification models – ResNet50v2, MobileNetv2, and Swin Transformer – for predicting COVID-19 from chest X-ray images. We adopt three concurrent approaches for evaluation: the internal-and-external validation procedure, lung region cropping, and image enhancement. The results show that the combined approaches allow deep models to achieve similar internal and external generalisation capability.

KEYWORDS: COVID-19; X-ray; machine learning

Disclosure statement: No potential conflict of interest was reported by the author(s).

Notes:
1. https://github.com/dirtmaxim/lungs-finder
2. https://keras.io/examples/vision/swin_transformers/
3. https://www.kaggle.com/c/rsna-pneumonia-detection-challenge
4. https://github.com/agchung/Actualmed-COVID-chestxray-dataset
5. https://github.com/agchung/Figure1-COVID-chestxray-dataset

Funding: The present work is the result of the Research and Development (R&D) project 001/2020, signed with the Federal University of Amazonas and FAEPI, Brazil, with funding from Samsung, using resources from the Informatics Law for the Western Amazon (Federal Law no. 8.387/1991); its disclosure is in accordance with article 39 of Decree No. 10.521/2020.

Additional information. Notes on contributors: Natalia de Sousa Freire is currently a Software Engineering student at the Federal University of Amazonas (UFAM). Her main research interests include machine learning and computer vision. Pedro Paulo de Souza Leão obtained his Bachelor's degree in Software Engineering from the Federal University of Amazonas (Brazil) in 2023. His main research interest is machine learning. Leonardo de Albuquerque Tiago is currently pursuing a Bachelor's degree in Software Engineering at the Federal University of Amazonas (Brazil). His main research interests are machine learning and software testing. Alberto de Almeida Campos Gonçalves received his B.S. degree in Computer Science from the Federal University of Amazonas in 2022. His research interests include machine learning and computer vision. Rafael Albuquerque Pinto received his B.S. degree in Computer Science from the Federal University of Roraima (UFRR) in 2017 and his M.Sc. degree in Informatics from the Federal University of Amazonas (UFAM) in 2022. He is currently pursuing a Ph.D. degree in Informatics at UFAM, focusing his research on biosignals using machine learning techniques.
Eulanda Miranda dos Santos is an Associate Professor at the Institute of Computing (IComp), Federal University of Amazonas. She received a Bachelor's degree in Informatics from the Federal University of Pará, Brazil, an M.Sc. in Informatics from the Federal University of Paraíba, Brazil, and a Ph.D. in Engineering from the École de Technologie Supérieure, University of Quebec, Canada, in 1999, 2002, and 2008, respectively. Her research interests include pattern recognition, machine learning, and computer vision. Eduardo Souto received his Ph.D. in Computer Science from the Federal University of Pernambuco (UFPE), Recife, Brazil, in 2007. He is currently an Associate Professor at the Institute of Computing, Federal University of Amazonas, where he also heads the Emerging Technologies and Systems Security (ETSS) research group. His research interests include applied machine learning, the Internet of Things, and network security.
{"title":"Analysis of generalizability on predicting COVID-19 from chest X-ray images using pre-trained deep models","authors":"Natalia de Sousa Freire, Pedro Paulo de Souza Leo, Leonardo Albuquerque Tiago, Alberto de Almeida Campos Gonalves, Rafael Albuquerque Pinto, Eulanda Miranda dos Santos, Eduardo Souto","doi":"10.1080/21681163.2023.2264408","DOIUrl":"https://doi.org/10.1080/21681163.2023.2264408","url":null,"abstract":"ABSTRACTMachine learning methods have been extensively employed to predict COVID-19 using chest X-ray images in numerous studies. However, a machine learning model must exhibit robustness and provide reliable predictions for diverse populations, beyond those used in its training data, to be truly valuable. Unfortunately, the assessment of model generalisability is frequently overlooked in current literature. In this study, we investigate the generalisability of three classification models – ResNet50v2, MobileNetv2, and Swin Transformer – for predicting COVID-19 using chest X-ray images. We adopt three concurrent approaches for evaluation: the internal-and-external validation procedure, lung region cropping, and image enhancement. The results show that the combined approaches allow deep models to achieve similar internal and external generalisation capability.KEYWORDS: COVID-19X-raymachine learning Disclosure statementNo potential conflict of interest was reported by the author(s).Notes1. https://github.com/dirtmaxim/lungs-finder2. https://keras.io/examples/vision/swin_transformers/3. https://www.kaggle.com/c/rsna-pneumonia-detection-challenge4. https://github.com/agchung/Actualmed-COVID-chestxray-dataset5. 
Figure 1-COVID-chestxray-datasethttps://github.com/agchung/Figure 1-COVID-chestxray-datasetAdditional informationFundingThe present work is the result of the Research and Development (R&D) project 001/2020, signed with Federal University of Amazonas and FAEPI, Brazil, which has funding from Samsung, using resources from the Informatics Law for the Western Amazon (Federal Law no 8.387/1991), and its disclosure is in accordance with article 39 of Decree No. 10.521/2020.Notes on contributorsNatalia de Sousa FreireNatalia de Sousa Freire is currently a Software Engineering student at the Federal University of Amazonas (UFAM). His main research interests include the areas of machine learning and computer vision.Pedro Paulo de Souza LeoPedro Paulo de Souza Leão obtained his Bachelor's degree in Software Engineering from the Federal University of Amazonas (Brazil) in 2023. His main research interest is machine learning.Leonardo Albuquerque TiagoLeonardo de Albuquerque Tiago is currently pursuing a Bachelor's degree in Software Engineering at Federal University of Amazonas (Brazil). His main research interests are machine learning and software testing.Alberto de Almeida Campos GonalvesAlberto de Almeida Campos Gonçalves received his B.S. degree in Computer Science from the Federal University of Amazonas in 2022. His research interests include the areas of machine learning and computer vision.Rafael Albuquerque PintoRafael Albuquerque Pinto received his B.S. degree in Computer Science from the Federal University of Roraima (UFRR) in 2017 and his M.Sc. degree in Informatics from the Federal University of Amazonas (UFAM) in 2022. He is currently pursuing a Ph.D. 
degree in Informatics at UFAM, focusing his research on biosignals using machine learning tech","PeriodicalId":51800,"journal":{"name":"Computer Methods in Biomechanics and Biomedical Engineering-Imaging and Visualization","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136063089","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
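The internal-and-external validation procedure can be sketched as follows. A minimal illustration on synthetic features, where a constant shift mimics site-specific acquisition differences between an internal and an external cohort; `LogisticRegression` is a generic stand-in for the pre-trained deep models, and all data here is invented.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)

def make_site(shift, n=300):
    # Hypothetical stand-in for one site's image features; `shift`
    # mimics site-specific acquisition differences.
    y = rng.integers(0, 2, n)
    X = y[:, None] + rng.normal(shift, 1.0, (n, 5))
    return X, y

X_a, y_a = make_site(0.0)   # internal cohort
X_b, y_b = make_site(0.8)   # external cohort with distribution shift

# Train only on the internal cohort's training split.
X_tr, X_te, y_tr, y_te = train_test_split(X_a, y_a, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)

internal = model.score(X_te, y_te)  # internal validation
external = model.score(X_b, y_b)    # external validation
print(round(internal, 2), round(external, 2))
```

Comparing the two scores is exactly the point of the procedure: a model that looks strong internally may degrade on the shifted external cohort, which internal validation alone would never reveal.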
Pub Date: 2023-10-05; DOI: 10.1080/21681163.2023.2264937
Mohammad Ehsan Farnoodian, Mohammad Karimi Moridani, Hanieh Mokhber
ABSTRACT: Diabetes is a prevalent and costly condition, and early diagnosis is pivotal in mitigating its progression and complications. The diagnostic process often contends with data ambiguity and decision uncertainty, adding complexity to achieving definitive outcomes. This study addresses the diabetes diagnostic challenge through data mining and machine learning techniques. It involves training various machine learning algorithms and conducting statistical analysis on a dataset comprising 520 patients, encompassing both normal and diabetic cases, to discern influential features. Incorporating 17 features as classifier inputs, this research evaluates the diagnostic performance of four established techniques: support vector machine (SVM), random forest (RF), multi-layer perceptron (MLP), and k-nearest neighbour (kNN). The outcomes underscore the SVM model's superior performance, with accuracy, specificity, and sensitivity of 98.78±1.96%, 99.28±1.63%, and 97.32±2.45%, respectively, across 50 iterations. The findings establish SVM as the preferred method for diabetes diagnosis. This study highlights the efficacy of data mining and machine learning models in diabetes diagnosis. While these methods exhibit respectable predictive accuracy, their integration with a physician's assessment promises even better patient outcomes.

KEYWORDS: Data mining; diabetes; SVM; detection; prediction

Abbreviations: ANN = Artificial Neural Network; AUC = Area Under Curve; CDC = Centers for Disease Control; CPCSSN = Canadian Primary Care Sentinel Surveillance Network; DT = Decision Tree; FN = False Negative; FP = False Positive; kNN = k-Nearest Neighbour; LDA = Linear Discriminant Analysis; LR = Logistic Regression; ML = Machine Learning; MLP = Multi-Layer Perceptron; NB = Naive Bayesian; PIDD = Pima Indians Diabetes Dataset; RF = Random Forest; ROC = Receiver Operating Characteristic; SVM = Support Vector Machine; TN = True Negative; TP = True Positive; UKPDS = UK Prospective Diabetes Study

Disclosure statement: No potential conflict of interest was reported by the author(s).

Authors' contributions: All authors contributed equally to the whole work. All authors read and approved the final manuscript.

Availability of data and materials: The data used in this paper is cited throughout the paper.

Ethical approval: This article does not contain any studies with human participants performed by any of the authors.

Funding: No source of funding for this work.

Additional information. Notes on contributors: Mohammad Ehsan Farnoodian received a B.S. degree in biomedical engineering (bioelectric) from Tehran Medical Sciences, Islamic Azad University, Tehran, Iran, and earned his M.S. degree in biomedical engineering (bioelectric) from the Science and Research Branch, Islamic Azad University, Tehran, Iran, in 2023. He is passionately dedicated to the examination and interpretation of biomedical data, particularly in the context of disease prediction and detection. His academic pursuits involve in-depth exploration of biomedical data analysis.
{"title":"Detection and prediction of diabetes using effective biomarkers","authors":"Mohammad Ehsan Farnoodian, Mohammad Karimi Moridani, Hanieh Mokhber","doi":"10.1080/21681163.2023.2264937","DOIUrl":"https://doi.org/10.1080/21681163.2023.2264937","url":null,"abstract":"ABSTRACTDiabetes is a prevalent and costly condition, with early diagnosis pivotal in mitigating its progression and complications. The diagnostic process often contends with data ambiguity and decision uncertainty, adding complexity to achieving definitive outcomes. This study addresses the diabetes diagnostic challenge through data mining and machine learning techniques. It involves training various machine learning algorithms and conducting statistical analysis on a dataset comprising 520 patients, encompassing both normal and diabetic cases, to discern influential features. Incorporating 17 features as classifier inputs, this research evaluates the diagnostic performance using four reputable techniques: support vector machine (SVM), random forest (RF), multi-layer perceptron (MLP), and k-nearest neighbor (kNN). The outcomes underscore the SVM model's superior performance, boasting accuracy, specificity, and sensitivity values of 98.78±1.96%, 99.28±1.63%, and 97.32±2.45%, respectively, across 50 iterations. The findings establish SVM as the preferred method for diabetes diagnosis. This study highlights the efficacy of data mining and machine learning models in diabetes diagnosis. 
While these methods exhibit respectable predictive accuracy, their integration with a physician's assessment promises even better patient outcomes.KEYWORDS: Data miningdiabetesSVMdetectionprediction Abbreviations ANN=Artificial Neural NetworkAUC=Area under CurveCDC=Centers for Disease ControlCPCSSN=Canadian Primary Care Sentinel Surveillance NetworkDT=Decision TreeFN=False NegativeFP=False PositivekNN=k Nearest NeighborLDA=Linear Discrimination AnalysisLR=Logistic RegressionML=Machine LearningMLP=Multi-Layer PerceptronNB=Naive BayesianPIDD=Pima Indians Diabetes DatasetRF=Random ForestROC=Receiver Operating CharacteristicSVM=Support Vector MachineTN=True NegativeTP=True PositiveUKPDS=UK Prospective Diabetes StudyDisclosure statementNo potential conflict of interest was reported by the author(s)Authors’ contributionsAll authors evenly contributed to the whole work. All authors read and approved the final manuscript.Availability of data and materialsThe data used in this paper is cited throughout the paper.Ethical approvalThis article does not contain any studies with human participants performed by any of the authors.Additional informationFundingNo source of funding for this work.Notes on contributorsMohammad Ehsan FarnoodianMohammad Ehsan Farnoodian received a B.S. degree in biomedical engineering-bioelectric from Tehran Medical Science, Islamic Azad University, Tehran, Iran, and earned his M.S. degree in biomedical engineering-bioelectric from Science and Research branch, Islamic Azad University, Tehran, Iran, in 2023. He is passionately dedicated to the examination and interpretation of biomedical data, particularly in the context of disease prediction and detection. 
His academic pursuits involve in-depth exploration of biomedical data analysi","PeriodicalId":51800,"journal":{"name":"Computer Methods in Biomechanics and Biomedical Engineering-Imaging and Visualization","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135482832","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
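The repeated-iteration accuracy/sensitivity/specificity protocol can be sketched as follows. A hedged illustration: `make_classification` generates an invented 520-by-17 dataset (not the study's patient data), 10 repetitions stand in for the paper's 50 iterations, and default scikit-learn classifiers stand in for the tuned models (MLP omitted for brevity).

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import confusion_matrix

# Hypothetical stand-in for the 520-patient, 17-feature dataset.
X, y = make_classification(n_samples=520, n_features=17, random_state=0)

models = {"SVM": SVC(),
          "RF": RandomForestClassifier(random_state=0),
          "kNN": KNeighborsClassifier()}
results = {}
for name, model in models.items():
    accs, sens, specs = [], [], []
    for seed in range(10):  # the paper averages over 50 iterations
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, random_state=seed, stratify=y)
        pred = model.fit(X_tr, y_tr).predict(X_te)
        tn, fp, fn, tp = confusion_matrix(y_te, pred).ravel()
        accs.append((tp + tn) / (tp + tn + fp + fn))
        sens.append(tp / (tp + fn))      # sensitivity = recall
        specs.append(tn / (tn + fp))     # specificity
    results[name] = (np.mean(accs), np.mean(sens), np.mean(specs))

for name, (a, s, sp) in results.items():
    print(f"{name}: acc={a:.3f} sens={s:.3f} spec={sp:.3f}")
```

Averaging each metric over many random splits, as here, is what produces mean-and-deviation figures of the "98.78±1.96%" form reported in the abstract.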