
Computer Methods in Biomechanics and Biomedical Engineering-Imaging and Visualization: latest publications

Optimization of deep neural networks for multiclassification of dental X-rays using transfer learning
Q2 Engineering | Pub Date: 2023-11-09 | DOI: 10.1080/21681163.2023.2272976
G. Divya Deepak, Subraya Krishna Bhat
In this work, segmented dental X-ray images obtained by dentists have been classified into ideal/minimally compromised edentulous areas (no immediate clinical treatment needed), partially/moderately compromised edentulous areas (requiring bridges or a cast partial denture) and substantially compromised edentulous areas (requiring a complete denture prosthesis). A dataset of 116 dental X-ray images is used, of which 70% is used for training the convolutional neural network (CNN) while 30% is used for testing and validation. Three pretrained deep neural networks (DNNs), namely SqueezeNet, ResNet-50 and EfficientNet-b0, have been implemented using the Deep Network Designer module of Matlab 2022. Each of these CNNs was trained, tested and optimised for the best possible accuracy in identifying dental images that require appropriate clinical treatment. The highest classification accuracy, 98%, was obtained with EfficientNet-b0. This research enables automated identification and labelling of edentulous areas that require clinical treatment. In addition, the performance metrics accuracy, recall, precision and F1 score have been calculated for the best DNN from its confusion matrix.
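The abstract reports accuracy, recall, precision and F1 score derived from the best network's confusion matrix. As a minimal, library-free illustration of how those metrics fall out of a multiclass confusion matrix (the three-class layout mirrors the paper's setting, but the counts below are invented examples, not the paper's data):

```python
def metrics_from_confusion(cm):
    """Overall accuracy plus per-class precision/recall/F1 from a confusion
    matrix, where cm[i][j] = samples of true class i predicted as class j."""
    k = len(cm)
    total = sum(sum(row) for row in cm)
    accuracy = sum(cm[i][i] for i in range(k)) / total
    per_class = []
    for i in range(k):
        tp = cm[i][i]
        fn = sum(cm[i]) - tp                        # row: true class i missed
        fp = sum(cm[r][i] for r in range(k)) - tp   # column: wrongly called i
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        per_class.append({"precision": precision, "recall": recall, "f1": f1})
    return accuracy, per_class
```

For example, with `cm = [[10, 0, 0], [1, 8, 1], [0, 0, 10]]`, the accuracy is 28/30 and class 1 has recall 0.8.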
Citations: 0
A prototype smartphone jaw tracking application to quantitatively model tooth contact
Q2 Engineering | Pub Date: 2023-11-08 | DOI: 10.1080/21681163.2023.2264402
Kieran Armstrong, Carolyn Kincade, Martin Osswald, Jana Rieger, Daniel Aalto
ABSTRACT: This study utilised a prototype system consisting of a person-specific 3D-printed jaw tracking harness interfacing with the maxillary and mandibular teeth and custom jaw tracking software implemented on a smartphone. The prototype achieved acceptable results, demonstrating a static position accuracy of less than 1 mm and 5°. It successfully tracked 30 cycles of a protrusive excursion, a left lateral excursion, and 40 mm of jaw opening on a semi-adjustable articulator. The standard errors of the tracking accuracy were 0.1377 mm, 0.0449 mm, and 0.9196 mm, with corresponding r² values of 0.98, 1.00, and 1.00, respectively. Finally, occlusal contacts during left, right, and protrusive excursions were tracked with the prototype system, and their trajectories were used to demonstrate kinematic modelling (no occlusal forces) with a biomechanical simulation tool.

KEYWORDS: Smartphone; dental occlusion; computer vision; jaw tracking; biomechanical simulation

Acknowledgments: The authors would like to thank the Institute for Reconstructive Sciences in Medicine at the Misericordia Community Hospital in Edmonton, Alberta, for their help with the design and 3D printing of the tracking harnesses.

Disclosure statement: No potential conflict of interest was reported by the author(s).

Notes on contributors:

Kieran Armstrong holds a BEng in biomedical engineering from the University of Victoria and an MSc in rehabilitation science from the University of Alberta. His MSc research focused on computer modelling for dental prosthetic biomechanics in head and neck cancer treatment. Working in the wearable biometric sensing industry, his focus is on exploring how optical biometric sensing methods, such as photoplethysmography, can be used to make meaningful connections to biological signals and help people monitor their health and fitness.

Carolyn Kincade is a seasoned healthcare professional with a strong background in quality management and patient care. As a traditionally trained dental technologist she has enjoyed the transition from analog case work to digital. She is currently furthering her studies with a Master of Technology Management through Memorial University of Newfoundland, building on her Diploma in Dental Technology and Bachelor of Technology from the Northern Alberta Institute of Technology. Carolyn also engages with the regulatory community in many ways, having served in various committee roles with the College of Dental Technologists of Alberta. She continues to make a meaningful impact in the healthcare field, bringing her expertise to the forefront of quality healthcare delivery.

Jana Rieger, PhD, is a global leader in functional outcomes assessment related to head and neck disorders. Over her 20-year career in this field, Jana has held roles as a professor, clinician, researcher, and most recently, entrepreneur. Jana and her team have developed, tested, and commercialised Mobili-T, a novel software-based mobile health "smart" device for people living with dysphagia (swallowing disorders). During her academic career, Jana also built and deployed an innovative health outcomes assessment programme that is internationally renowned and regarded as the gold standard in the field. An expert in international team building, she brought thought leaders from four different countries together in an innovative research network, the Head and Neck Research Network (HNRN). As the network's first director, she laid a solid foundation for its governance, establishing policies and procedures, a database, privacy impact assessments, and ethics approvals. Jana excels in thought leadership and has served at the director level in a healthcare organisation, bringing together diverse groups of clinicians, researchers, and decision-makers.

Daniel Aalto is an associate professor in Communication Sciences and Disorders, Faculty of Rehabilitation Medicine, University of Alberta. He serves as a research scientist and joint researcher at the Institute for Reconstructive Sciences in Medicine (iRSM). He received his MSc and DSc in engineering physics and mathematics from Aalto University, Finland. His research interests centre on computer modelling of head and neck function, including tongue movement, speech, articulation, hearing, swallowing, and chewing. He also actively explores new design and simulation techniques to support the virtual planning and surgical execution of head and neck reconstructive surgery.
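The standard errors and r² values quoted above come from comparing tracked positions against articulator reference positions. A minimal sketch of those two statistics in pure Python, under the assumption (not stated in the abstract) that r² is the coefficient of determination of tracked against reference positions and the standard error is that of the mean tracking residual:

```python
def r_squared(tracked, reference):
    """Coefficient of determination of tracked positions vs. the reference."""
    mean_ref = sum(reference) / len(reference)
    ss_res = sum((t - r) ** 2 for t, r in zip(tracked, reference))
    ss_tot = sum((r - mean_ref) ** 2 for r in reference)
    return 1.0 - ss_res / ss_tot

def standard_error(tracked, reference):
    """Standard error of the mean tracking residual (same units as input)."""
    n = len(tracked)
    residuals = [t - r for t, r in zip(tracked, reference)]
    mean_res = sum(residuals) / n
    # sample variance of the residuals, then SE of their mean
    var = sum((x - mean_res) ** 2 for x in residuals) / (n - 1)
    return (var / n) ** 0.5
```

Perfect tracking gives r² = 1.0 and a standard error of 0; a constant offset leaves the standard error at 0 while slightly lowering r².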
Citations: 0
Computer-aided diagnosis of Canine Hip Dysplasia using deep learning approach in a novel X-ray image dataset
Q2 Engineering | Pub Date: 2023-11-02 | DOI: 10.1080/21681163.2023.2274947
Chaouki Boufenar, Tété Elom Mike Norbert Logovi, Djemai Samir, Imad Eddine Lassakeur
ABSTRACT: Canine Hip Dysplasia (CHD) is a congenital disease with a polygenic hereditary component, characterised by abnormal development of the coxo-femoral joint that results in poor coaptation of the femoral head in the acetabulum; the disease rapidly progresses to osteoarthritis of the hip. While dysplasia has been recognised in practically all canine breeds, it is much more common, and of greater concern, in rapidly growing medium and large breeds. In some countries, dysplasia in predisposed breeds, particularly the German Shepherd, is screened for through systematic radiological control. Our collected dataset comprises 507 X-ray images of dogs affected by hip dysplasia (HD). These images were meticulously evaluated using six Deep Convolutional Neural Network (CNN) models. Following an extensive analysis of the top-performing models, VGG16 emerged as the leader, achieving remarkable accuracy, recall, and precision scores of 98.32%, 98.35%, and 98.44%, respectively. Leveraging deep learning (DL) techniques, this approach diagnoses CHD from hip X-rays with a high degree of accuracy.

KEYWORDS: Canine Hip Dysplasia diagnosis; deep learning; transfer learning; X-ray; image classification

Acknowledgement: Special thanks to Dr. Samir Djemai, a lecturer at the National Veterinary Institute of the University of Constantine, and the DHONDT NUNES veterinary clinic in France for providing the authors with dog hip radiographic images. This work would not have been possible without their invaluable assistance.

Disclosure statement: No potential conflict of interest was reported by the author(s).

Notes on contributors:

Chaouki Boufenar is an Algerian scientist and researcher known for his work in artificial intelligence and data science. He is currently a lecturer in the Computer Science Department of the University of Algiers. He received a PhD in Computer Science from the University of Constantine 2 (Abdelhamid Mehri) in 2018. He has been affiliated with several academic and research institutions, including the University of Paris-Saclay (Laboratoire de Recherche en Informatique), the University of Constantine, and the University of Jijel in Algeria. He has published several research papers and articles in computer science and artificial intelligence. His areas of interest include data science, deep learning, and computer vision.

Tété Elom Mike Norbert Logovi is currently a teaching assistant at Laval University, where he is also pursuing an MSc degree in Computer Science with a thesis. He received his Bachelor's degree in Computer Systems from the Department of Computer Science at Benyoucef Benkhedda Algiers 1 University. His research areas include machine learning, deep learning, and computer vision.

Djemai Samir is currently a lecturer and researcher at the Institute of Veterinary Sciences of the University of Constantine, Algeria. He received a doctorate in veterinary medicine from the Constantine Institute of Veterinary Sciences in 2005, a master's degree in veterinary medicine from the National Veterinary School of Algiers in 2008, and a PhD in veterinary medicine from the Constantine Institute of Veterinary Sciences in 2017. From 2007 to 2014 he also practised in a private veterinary clinic. His scientific interests span many areas of veterinary medicine, including veterinary parasitology, carnivore pathology, and avian pathology. He has published several papers in international scientific journals and presented at several international conferences.

Imad Eddine Lassakeur is an Algerian computer science researcher currently pursuing an MSc in Computer Science at Laval University, Quebec, Canada. With a background in computer science and intelligent computer systems engineering, he has taken part in a wide variety of research projects, honing his expertise in key areas. His interests include artificial intelligence, computer vision, and natural language processing (NLP). Beyond his academic and research pursuits, Imad maintains a deep curiosity about emerging technologies and their potential to transform industries, reflecting his commitment to technological advancement and his passion for computer science.
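The evaluation described above amounts to scoring several candidate CNNs on the same validation data and keeping the top performer. A tiny illustrative helper for that model-comparison step; the VGG16 scores are the values reported in the abstract, while the other model names and scores are hypothetical placeholders (the abstract does not list them):

```python
def best_model(results, metric="accuracy"):
    """Return the (name, scores) pair with the highest value of `metric`."""
    return max(results.items(), key=lambda kv: kv[1][metric])

results = {
    # VGG16 scores as reported in the abstract; the rest are made up
    "VGG16":       {"accuracy": 0.9832, "recall": 0.9835, "precision": 0.9844},
    "CandidateB":  {"accuracy": 0.9610, "recall": 0.9580, "precision": 0.9625},
    "CandidateC":  {"accuracy": 0.9475, "recall": 0.9440, "precision": 0.9502},
}

name, scores = best_model(results)
```

With these numbers, `best_model` selects VGG16 regardless of whether accuracy, recall, or precision is used as the ranking metric.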
Citations: 0
Decorrelation stretch for enhancing colour fundus photographs affected by cataracts
Q2 Engineering | Pub Date: 2023-11-02 | DOI: 10.1080/21681163.2023.2274948
Preecha Vonghirandecha, Supaporn Kansomkeat, Patama Bhurayanontachai, Pannipa Sae-Ueng, Sathit Intajag
ABSTRACT: A method of enhancing colour fundus photographs is proposed to reduce the effect of cataracts. The enhancement method employs a decorrelation stretch (DS) technique in an LCC colour model. The initially designed technique embeds Hubbard's colouration model into the DS parameters to produce enhanced results in the standard form used by age-related macular degeneration (AMD) reading centres. The colouration model can be modified to enhance the colour of lesions observed in diabetic retinopathy (DR). The proposed algorithm reduces the effect of cataracts on fundus images and provides good results when the density of the cataract is less than grade 2. For images taken through cataracts of grade 2 or higher, some outputs can become unusable when the cataract is in line with the macula.

KEYWORDS: Decorrelation stretch; retinal image enhancement; cataract

Disclosure statement: No potential conflict of interest was reported by the author(s).

Funding: This research received funding support from the NSRF via the Program Management Unit for Human Resources & Institutional Development, Research and Innovation [grant number B04G640070].

Notes on contributors:

Preecha Vonghirandecha is an assistant professor at the Division of Computational Science, Faculty of Science, Prince of Songkla University, Songkhla, Thailand. His current research interests include data science, image processing, and artificial intelligence applied to medical image analysis. He received a PhD in computer engineering from Prince of Songkla University, Thailand, in 2019.

Supaporn Kansomkeat is an assistant professor at the Division of Computational Science, Faculty of Science, Prince of Songkla University, Songkhla, Thailand. Her current research interests include software testing, test process improvement, and artificial intelligence applied to medical image analysis. She received a PhD in computer engineering from Chulalongkorn University, Thailand, in 2007.

Patama Bhurayanontachai (MD) is an associate professor at the Department of Ophthalmology, Prince of Songkla University, Songkhla, Thailand. She received a certificate of clinical fellowship in vitreoretinal surgery from Flinders Medical Centre, Australia, in 2005. Her current research interests involve the medical retina, the surgical retina, and artificial intelligence applied to clinical diagnosis.

Pannipa Sae-Ueng is a lecturer at the Division of Computational Science, Faculty of Science, Prince of Songkla University, Songkhla, Thailand. She received her PhD in Computer Science in 2022 from the Department of Mathematics and Informatics, Faculty of Sciences, University of Novi Sad, Serbia. Her recent research focuses on data science and artificial intelligence.

Sathit Intajag received the M.Eng. and D.Eng. degrees in electrical engineering from the King Mongkut's Institute of Technology Ladkrabang (KMITL), Thailand, in 1998 and 2005, respectively. He is an associate professor at the Division of Computational Science, Faculty of Science, Prince of Songkla University, Thailand. His main research interests include signal processing, statistical analysis, and artificial intelligence.
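At its core, a decorrelation stretch is a linear operation: rotate the colour channels into their principal (decorrelated) axes, equalise the variances, and rotate back. The sketch below is a generic two-channel version in pure Python using the closed-form eigen-decomposition of a 2x2 covariance matrix; the paper itself works on three channels in an LCC colour model with Hubbard's colouration targets, and the target standard deviation here is an arbitrary illustrative choice:

```python
import math

def decorrelation_stretch(ch1, ch2, target_sigma=50.0):
    """Decorrelation stretch of two correlated channels (lists of floats):
    rotate into principal axes, equalise the variances, rotate back."""
    n = len(ch1)
    m1 = sum(ch1) / n
    m2 = sum(ch2) / n
    # channel covariance matrix [[a, b], [b, c]]
    a = sum((x - m1) ** 2 for x in ch1) / n
    c = sum((y - m2) ** 2 for y in ch2) / n
    b = sum((x - m1) * (y - m2) for x, y in zip(ch1, ch2)) / n
    # closed-form eigen-decomposition of the 2x2 symmetric matrix
    theta = 0.5 * math.atan2(2 * b, a - c)
    ct, st = math.cos(theta), math.sin(theta)
    lam1 = a * ct * ct + 2 * b * ct * st + c * st * st
    lam2 = a * st * st - 2 * b * ct * st + c * ct * ct
    s1 = target_sigma / math.sqrt(lam1) if lam1 > 1e-12 else 0.0
    s2 = target_sigma / math.sqrt(lam2) if lam2 > 1e-12 else 0.0
    out1, out2 = [], []
    for x, y in zip(ch1, ch2):
        u = (ct * (x - m1) + st * (y - m2)) * s1   # stretched principal axis 1
        v = (-st * (x - m1) + ct * (y - m2)) * s2  # stretched principal axis 2
        out1.append(ct * u - st * v + m1)
        out2.append(st * u + ct * v + m2)
    return out1, out2
```

After the stretch, the two output channels are uncorrelated and each has standard deviation `target_sigma`, which is what spreads subdued, cataract-hazed colours apart.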
Citations: 0
Genetic algorithm for feature selection in mammograms for breast masses classification
Q2 Engineering | Pub Date: 2023-10-19 | DOI: 10.1080/21681163.2023.2266031
G Vaira Suganthi, J Sutha, M Parvathy, N Muthamil Selvi
ABSTRACT: This paper introduces a Computer-Aided Detection (CAD) system for categorizing breast masses in mammogram images from the DDSM database as benign, malignant, or normal. The CAD process involves pre-processing, segmentation, feature extraction, feature selection, and classification. Three feature selection methods are used: the Genetic Algorithm (GA), the t-test, and Particle Swarm Optimization (PSO). In the classification phase, three machine learning algorithms (kNN, multiSVM, and Naive Bayes) are explored. Evaluation metrics such as accuracy, AUC, precision, recall, F1-score, MCC, the Dice coefficient, and the Jaccard coefficient are used for performance assessment. Training and testing accuracy are assessed for the three classes. The system is evaluated using nine algorithm combinations, producing the following AUC values: GA+kNN (0.93), GA+multiSVM (0.88), GA+NB (0.91), t-test+kNN (0.91), t-test+multiSVM (0.86), t-test+NB (0.89), PSO+kNN (0.89), PSO+multiSVM (0.85), and PSO+NB (0.86). The study shows that the GA and kNN combination outperforms the others.

KEYWORDS: Mammograms; breast mass; feature selection; genetic algorithm

Disclosure statement: No potential conflict of interest was reported by the author(s).

Funding: No funding was used to complete this project.

Notes on contributors:

Dr. Vaira Suganthi G has 20 years of teaching experience. Her areas of interest include image processing and machine learning.

Dr. Sutha J has more than 25 years of teaching experience. Her areas of interest include image processing and machine learning.

Dr. Parvathy M has more than 20 years of teaching experience. Her areas of interest include image processing, data mining, and machine learning.

Ms. Muthamil Selvi N has 1 year of teaching experience. Her area of interest is machine learning.
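GA-based feature selection of the kind the abstract describes is typically set up as a binary chromosome, one bit per feature, whose fitness is the classifier's score on the selected subset. A self-contained sketch with a toy fitness function standing in for the classifier; the population size, rates, and fitness are illustrative choices, not the paper's settings:

```python
import random

def ga_feature_select(n_features, fitness, pop_size=20, generations=40,
                      crossover_rate=0.8, mutation_rate=0.02, seed=0):
    """Binary-encoded GA: each chromosome is a 0/1 mask over the features."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_features)]
           for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(pop, key=fitness, reverse=True)
        next_pop = [ranked[0][:], ranked[1][:]]   # elitism: keep the two best
        while len(next_pop) < pop_size:
            # tournament selection of two parents
            p1 = max(rng.sample(pop, 3), key=fitness)
            p2 = max(rng.sample(pop, 3), key=fitness)
            if rng.random() < crossover_rate:     # single-point crossover
                cut = rng.randrange(1, n_features)
                child = p1[:cut] + p2[cut:]
            else:
                child = p1[:]
            # bit-flip mutation
            child = [1 - g if rng.random() < mutation_rate else g
                     for g in child]
            next_pop.append(child)
        pop = next_pop
    return max(pop, key=fitness)

# Toy stand-in for "classifier accuracy on the selected features":
# features 0, 3 and 7 are informative, every extra feature costs a little.
INFORMATIVE = {0, 3, 7}

def toy_fitness(mask):
    hits = sum(mask[i] for i in INFORMATIVE)
    extras = sum(mask) - hits
    return hits - 0.1 * extras
```

In the paper the fitness would instead wrap the kNN (or multiSVM/Naive Bayes) classifier evaluated on the masked feature set; the GA machinery itself is unchanged.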
Genetic algorithm for feature selection in mammograms for breast masses classification
G Vaira Suganthi, J Sutha, M Parvathy, N Muthamil Selvi
Pub Date : 2023-10-19 DOI: 10.1080/21681163.2023.2266031
Citations: 0
Hybrid generative model for grading the severity of diabetic retinopathy images
Q2 Engineering Pub Date : 2023-10-15 DOI: 10.1080/21681163.2023.2266048
R. Bhuvaneswari, M. Diviya, M. Subramanian, Ramya Maranan, R Josphineleela
ABSTRACT: One of the common eye conditions affecting patients with diabetes is diabetic retinopathy (DR). It is characterised by progressive impairment of the blood vessels as the glucose level in the blood increases. Grading remains challenging because of intra-class variations and imbalanced data distributions in retinal images. Traditional machine learning techniques utilise hand-engineered features to classify the affected retinal images. As convolutional neural networks produce better image classification accuracy on many medical images, this work utilises a CNN-based feature extraction method. These features are used to build a Gaussian mixture model (GMM) for each class, mapping the CNN features to log-likelihood dimensional vector spaces. Since the Gaussian mixture model can be realised as a mixture of both parametric and nonparametric density models, and offers flexibility in capturing different data distributions, probabilistic outputs, interpretability, efficient parameter estimation, and robustness to outliers, the proposed model aims to obtain a smooth approximation of the underlying distribution of features for training. These vector spaces are then trained with an SVM classifier. Experimental results illustrate the efficacy of the proposed model, with accuracies of 86.3% and 89.1%, respectively.
KEYWORDS: Retinal images; CNN feature extraction; support vector machine; Gaussian mixture model
Disclosure statement: No potential conflict of interest was reported by the authors.
Notes on contributors: R. Bhuvaneswari (Member, IEEE) received a Ph.D. degree from Anna University. She is currently an Assistant Professor with the Amrita School of Computing, Amrita Vishwa Vidyapeetham, Chennai, India, and has 18 years of teaching experience in the field of engineering. She has authored many publications in international journals and conferences and co-authored a book on computer graphics. Her research interests include machine learning and deep learning for image processing applications. M. Diviya received an M.E. degree from Anna University and is currently pursuing a Ph.D. at the Vellore Institute of Technology, Chennai. She is an Assistant Professor with the Amrita School of Computing, Amrita Vishwa Vidyapeetham, Chennai, India, and has 7 years of teaching experience in the field of engineering. She has authored many publications in international journals, international conferences, and book chapters. Her research interests include machine learning and deep learning for image processing and text processing applications. Subramanian M received a BE degree in Mechanical Engineering in 2008 and ME degrees in computer aided design and engineering design in 2011 and 2013, respectively. He is pursuing a PhD degree at Anna University, Chennai, Tamilnadu, India in the field of materials science and engineering, and is currently an Assistant Professor in the Department of Mechanical Engineering, St. Joseph's College of Engineering (affiliated to Anna University), Chennai, Tamilnadu, India; his research interests include materials science and metallurgy, machining science, machine learning, image processing, and optimisation techniques. Ramya Maranan is an accomplished researcher in the Department of Research and Innovation, Saveetha School of Engineering, SIMATS, Chennai, India, whose work centres on research and development activities: designing and executing experiments, collecting and analysing data, and disseminating findings through academic publications. R. Josphineleela received a Ph.D. in computer science engineering from Sathyabama University, India, in 2013, where she also obtained a master's degree in computer science and engineering. With more than 20 years of experience in computer science, she is currently a Professor in the Department of Information Technology, Panimalar Engineering College, and has published over 50 papers in national and international venues; her research interests include image processing, neural networks, artificial intelligence, biomedical imaging, and soft computing.
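The per-class GMM mapping described above can be sketched as follows. This is a hedged illustration, not the authors' implementation: synthetic vectors from `make_classification` stand in for the CNN features of the three DR grades, and the component count, diagonal covariance, and RBF kernel are arbitrary illustrative choices.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score
from sklearn.mixture import GaussianMixture
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for CNN feature vectors of retinal images (3 DR grades).
X, y = make_classification(n_samples=900, n_features=12, n_informative=8,
                           n_classes=3, n_clusters_per_class=2, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          stratify=y, random_state=0)

# One GMM per class, fitted only on that class's training features.
gmms = {c: GaussianMixture(n_components=2, covariance_type="diag",
                           random_state=0).fit(X_tr[y_tr == c])
        for c in np.unique(y_tr)}

def to_loglik(F):
    # Map each feature vector to its vector of per-class log-likelihoods,
    # i.e. the low-dimensional space the SVM is trained on.
    return np.column_stack([gmms[c].score_samples(F) for c in sorted(gmms)])

clf = SVC(kernel="rbf").fit(to_loglik(X_tr), y_tr)
acc = accuracy_score(y_te, clf.predict(to_loglik(X_te)))
print(f"test accuracy on GMM log-likelihood features: {acc:.3f}")
```

The design choice worth noting is that the SVM never sees the raw features: each sample is reduced to one log-likelihood per class, which is what lets the generative GMM stage smooth over intra-class variation before the discriminative stage.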
Citations: 0
DiabPrednet: development of attention-based long short-term memory-based diabetes prediction model with optimal weighted feature fusion mechanism
Q2 Engineering Pub Date : 2023-10-11 DOI: 10.1080/21681163.2023.2258995
S. Nagendiran, S. Rohini, P. Jagadeesan, S. Shankari, R. Harini
ABSTRACT: Machine learning is a computational technique that automatically learns from experience and enhances the effectiveness of producing more precise diabetes predictions. However, large, inclusive, high-quality datasets are needed to train machine learning networks. In this research work, attention-based approaches are designed for predicting diabetes in affected individuals. Initially, the collected diabetes data undergoes data cleaning to obtain noise-free data for the prediction task. Feature set 1 is extracted with an autoencoder, and feature set 2 with a 1-Dimensional Convolutional Neural Network (1D-CNN). These two sets of extracted features are fused adaptively via weighted feature fusion, where the weights of the selected features are optimised by an Enhanced Path Finder Algorithm (EPFA) to obtain more accurate results. The weighted fused features are employed in the diabetes prediction phase, in which the developed Attention-based Long Short-Term Memory (ALSTM) network, with its architecture optimised by the improved PFA, predicts diabetes in affected individuals. In the result analysis, the designed method attains 95% accuracy and a 92% precision rate. Finally, the proposed and existing prediction methods are compared to showcase the effective performance.
KEYWORDS: Diabetes prediction; autoencoder; 1-dimensional convolutional neural network; attention-based long short-term memory; enhanced path finder algorithm
Disclosure statement: No potential conflict of interest was reported by the author(s).
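The weighted feature fusion step can be illustrated with a minimal sketch, under loudly stated assumptions: the two "views" below are simulated random projections rather than real autoencoder/1D-CNN embeddings, a kNN classifier stands in for the ALSTM network, and plain random search stands in for the EPFA weight optimiser; `fused_score` is a name invented for this sketch.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)

# 520 synthetic "patient records" with a binary diabetes label.
X, y = make_classification(n_samples=520, n_features=16, n_informative=8,
                           random_state=1)

# Stand-ins for the two learned views: the "autoencoder" (F1) and "1D-CNN"
# (F2) embeddings are simulated as random linear projections of the data.
F1 = X @ rng.normal(size=(16, 8))
F2 = X @ rng.normal(size=(16, 8))

def fused_score(w1, w2):
    # CV accuracy of a kNN classifier on the weighted, concatenated views;
    # the weights matter to kNN because they rescale the distance metric.
    fused = np.hstack([w1 * F1, w2 * F2])
    return cross_val_score(KNeighborsClassifier(5), fused, y, cv=3).mean()

# Plain random search stands in for the Enhanced Path Finder Algorithm (EPFA).
candidates = rng.random((30, 2))
w1, w2 = max(candidates, key=lambda w: fused_score(*w))
best_acc = fused_score(w1, w2)
print(f"fusion weights ({w1:.2f}, {w2:.2f}) -> CV accuracy {best_acc:.3f}")
```

A distance-based classifier is used deliberately: for scale-invariant models such as logistic regression, rescaling whole feature blocks would change nothing, so the fusion weights would have no effect to optimise.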
Citations: 0
Impact of a generalised SVG-based large-scale super-resolution algorithm on the design of light-weight medical image segmentation DNNs
Q2 Engineering Pub Date : 2023-10-11 DOI: 10.1080/21681163.2023.2266008
Mina Esfandiarkhani, Amir Hossein Foruzan
ABSTRACT: Setting up a complex CNN requires a powerful platform, several hours of run-time, and a lot of training data. Here, we propose a generalised lightweight solution that exploits super-resolution and scalable vector graphics and uses a small-scale UNet as the baseline framework to segment different organs in MR and CT data. We selected the UNet since many researchers use it as a baseline, modify it in their proposals, and perform ablation studies to show the effectiveness of the proposed modification. First, we downsample the input 2D CT slices by bicubic interpolation. Starting from the architecture of the conventional UNet, we reduce the size of the network's input and the number of layers and filters to construct a lightweight UNet. The network segments the low-resolution images and prepares the mask of an organ. Then, we upscale the boundary of the output mask with the Scalable Vector Graphics technique to obtain the final border. This design reduces the number of parameters and the run-time by a factor of two. We segmented several tissues to prove the stability of our method across organ types. The experiments proved the feasibility of setting up complex deep neural networks on conventional platforms.
KEYWORDS: Light-weight deep neural networks; scalable vector graphics; generalised segmentation frameworks; medical image segmentation
Disclosure statement: No potential conflict of interest was reported by the author(s).
Notes on contributors: Mina Esfandiarkhani received a B.Sc. degree from the Azad University of Qazvin in 2013 and an M.Sc. degree in Biomedical Engineering from the Shahed University of Tehran in 2016. She is currently pursuing a Ph.D. degree in the Biomedical Engineering faculty of Shahed University. Her research interests include machine learning, computer vision, medical image processing, and artificial intelligence. Amir Hossein Foruzan received his B.S. from the Sharif University of Technology in Telecommunication Engineering, and his M.S. and Ph.D. from Tehran University in Biomedical Engineering. Since 2011, he has been a faculty member of Shahed University. His research interest is medical image processing.
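The downsample, segment, upscale pipeline can be illustrated on a toy image. Everything here is a stand-in: a synthetic disk replaces the CT slice, a threshold replaces the lightweight UNet, and nearest-neighbour block expansion replaces the SVG boundary vectorisation; the point is only to show where each stage sits and how the recovered mask can be scored (a Dice overlap is used for illustration).

```python
import numpy as np

def disk(n, r):
    # Binary mask of a centred disk of radius r in an n-by-n grid.
    yy, xx = np.mgrid[:n, :n]
    c = (n - 1) / 2
    return (yy - c) ** 2 + (xx - c) ** 2 <= r ** 2

# "CT slice": a synthetic 128x128 organ (bright disk on dark background).
hi = disk(128, 40).astype(float)

# 1) Downsample 4x by block averaging (stand-in for bicubic interpolation).
lo = hi.reshape(32, 4, 32, 4).mean(axis=(1, 3))

# 2) "Lightweight UNet": a simple threshold stands in for the small-scale
#    network that segments the low-resolution slice.
lo_mask = lo > 0.5

# 3) Upscale the mask back to full resolution. The paper traces the contour
#    as scalable vector graphics; nearest-neighbour block expansion is used
#    here as a minimal stand-in for that vector upscaling step.
up_mask = np.kron(lo_mask, np.ones((4, 4), dtype=bool))

gt = disk(128, 40)
dice = 2 * (up_mask & gt).sum() / (up_mask.sum() + gt.sum())
print(f"Dice between upscaled mask and ground truth: {dice:.3f}")
```

Even with this crude upscaling the overlap stays high because the error is confined to a thin band along the boundary, which is exactly the band a vector-contour upscaling is designed to refine.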
Citations: 0
Analysis of generalizability on predicting COVID-19 from chest X-ray images using pre-trained deep models
Q2 Engineering Pub Date : 2023-10-11 DOI: 10.1080/21681163.2023.2264408
Natalia de Sousa Freire, Pedro Paulo de Souza Leão, Leonardo Albuquerque Tiago, Alberto de Almeida Campos Gonçalves, Rafael Albuquerque Pinto, Eulanda Miranda dos Santos, Eduardo Souto
ABSTRACT: Machine learning methods have been extensively employed to predict COVID-19 from chest X-ray images in numerous studies. However, to be truly valuable, a machine learning model must exhibit robustness and provide reliable predictions for diverse populations beyond those represented in its training data. Unfortunately, the assessment of model generalisability is frequently overlooked in the current literature. In this study, we investigate the generalisability of three classification models (ResNet50v2, MobileNetv2, and Swin Transformer) for predicting COVID-19 from chest X-ray images. We adopt three concurrent approaches for evaluation: an internal-and-external validation procedure, lung region cropping, and image enhancement. The results show that the combined approaches allow deep models to achieve similar internal and external generalisation capability.
KEYWORDS: COVID-19; X-ray; machine learning
Disclosure statement: No potential conflict of interest was reported by the author(s).
Notes:
1. https://github.com/dirtmaxim/lungs-finder
2. https://keras.io/examples/vision/swin_transformers/
3. https://www.kaggle.com/c/rsna-pneumonia-detection-challenge
4. https://github.com/agchung/Actualmed-COVID-chestxray-dataset
5. https://github.com/agchung/Figure1-COVID-chestxray-dataset
Funding: The present work is the result of the Research and Development (R&D) project 001/2020, signed with the Federal University of Amazonas and FAEPI, Brazil, with funding from Samsung, using resources from the Informatics Law for the Western Amazon (Federal Law no. 8.387/1991); its disclosure is in accordance with article 39 of Decree No. 10.521/2020.
Notes on contributors: Natalia de Sousa Freire is currently a Software Engineering student at the Federal University of Amazonas (UFAM); her main research interests include machine learning and computer vision. Pedro Paulo de Souza Leão obtained his Bachelor's degree in Software Engineering from the Federal University of Amazonas (Brazil) in 2023; his main research interest is machine learning. Leonardo de Albuquerque Tiago is currently pursuing a Bachelor's degree in Software Engineering at the Federal University of Amazonas (Brazil); his main research interests are machine learning and software testing. Alberto de Almeida Campos Gonçalves received his B.S. degree in Computer Science from the Federal University of Amazonas in 2022; his research interests include machine learning and computer vision. Rafael Albuquerque Pinto received his B.S. degree in Computer Science from the Federal University of Roraima (UFRR) in 2017 and his M.Sc. degree in Informatics from the Federal University of Amazonas (UFAM) in 2022; he is currently pursuing a Ph.D. degree in Informatics at UFAM, focusing his research on biosignals using machine learning techniques. Eulanda Miranda dos Santos is an Associate Professor at the Institute of Computing (IComp), Federal University of Amazonas. She received a B.Sc. in Informatics from the Federal University of Pará in 1999, an M.Sc. in Informatics from the Federal University of Paraíba in 2002, and a Ph.D. in Engineering from the École de Technologie Supérieure, Université du Québec, Canada, in 2008; her research interests include pattern recognition, machine learning, and computer vision. Eduardo Souto received a Ph.D. in Computer Science from the Federal University of Pernambuco (UFPE), Recife, Brazil, in 2007. He is currently an Associate Professor at the Institute of Computing, Federal University of Amazonas, and heads the Emerging Technologies and Systems Security (ETSS) research group; his research interests include applied machine learning, the Internet of Things, and network security.
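The internal-and-external validation procedure (train on one cohort, then test both on its held-out split and on a wholly unseen cohort) can be sketched with synthetic data. This is a hedged toy, not the study's setup: logistic regression stands in for the deep models, and added Gaussian noise fakes a domain shift between two "hospitals".

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# One underlying task, split into two "hospitals"; site B gets a simulated
# acquisition shift so it behaves like an external, unseen population.
X, y = make_classification(n_samples=1200, n_features=12, n_informative=6,
                           random_state=0)
X_a, X_b, y_a, y_b = train_test_split(X, y, test_size=400,
                                      stratify=y, random_state=0)
X_b = X_b + rng.normal(0.0, 0.3, X_b.shape)

# Internal validation: a held-out split of site A only.
X_tr, X_te, y_tr, y_te = train_test_split(X_a, y_a, test_size=0.25,
                                          stratify=y_a, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
internal = accuracy_score(y_te, clf.predict(X_te))

# External validation: the whole of site B, never seen during training.
external = accuracy_score(y_b, clf.predict(X_b))
print(f"internal accuracy {internal:.3f}, external accuracy {external:.3f}")
```

A large gap between the two numbers is the signal the study warns about: good internal accuracy alone says nothing about how the model behaves on a population it never saw.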
Citations: 0
Detection and prediction of diabetes using effective biomarkers
Q2 Engineering Pub Date : 2023-10-05 DOI: 10.1080/21681163.2023.2264937
Mohammad Ehsan Farnoodian, Mohammad Karimi Moridani, Hanieh Mokhber
ABSTRACTDiabetes is a prevalent and costly condition, with early diagnosis pivotal in mitigating ‎its progression and complications. The diagnostic process often contends with data ‎ambiguity and decision uncertainty, adding complexity to achieving definitive ‎outcomes. This study addresses the diabetes diagnostic challenge through data mining ‎and machine learning techniques. It involves training various machine learning ‎algorithms and conducting statistical analysis on a dataset comprising 520 patients, ‎encompassing both normal and diabetic cases, to discern influential features.‎ Incorporating 17 features as classifier inputs, this research evaluates the diagnostic ‎performance using four reputable techniques: support vector machine (SVM), random ‎forest (RF), multi-layer perceptron (MLP), and k-nearest neighbor (kNN). The outcomes ‎underscore the SVM model's superior performance, boasting accuracy, specificity, and ‎sensitivity values of 98.78±1.96%, 99.28±1.63%, and 97.32±2.45%, ‎respectively, across 50 iterations. The findings establish SVM as the preferred method ‎for diabetes diagnosis.‎ This study highlights the efficacy of data mining and machine learning models in ‎diabetes diagnosis. 
While these methods exhibit respectable predictive accuracy, their integration with a physician's assessment promises even better patient outcomes.

KEYWORDS: Data mining; diabetes; SVM; detection; prediction

Abbreviations: ANN = Artificial Neural Network; AUC = Area under Curve; CDC = Centers for Disease Control; CPCSSN = Canadian Primary Care Sentinel Surveillance Network; DT = Decision Tree; FN = False Negative; FP = False Positive; kNN = k-Nearest Neighbor; LDA = Linear Discrimination Analysis; LR = Logistic Regression; ML = Machine Learning; MLP = Multi-Layer Perceptron; NB = Naive Bayesian; PIDD = Pima Indians Diabetes Dataset; RF = Random Forest; ROC = Receiver Operating Characteristic; SVM = Support Vector Machine; TN = True Negative; TP = True Positive; UKPDS = UK Prospective Diabetes Study

Disclosure statement: No potential conflict of interest was reported by the author(s).

Authors' contributions: All authors evenly contributed to the whole work. All authors read and approved the final manuscript.

Availability of data and materials: The data used in this paper is cited throughout the paper.

Ethical approval: This article does not contain any studies with human participants performed by any of the authors.

Funding: No source of funding for this work.

Notes on contributors: Mohammad Ehsan Farnoodian received a B.S. degree in biomedical engineering-bioelectric from Tehran Medical Science, Islamic Azad University, Tehran, Iran, and earned his M.S. degree in biomedical engineering-bioelectric from the Science and Research branch, Islamic Azad University, Tehran, Iran, in 2023. He is passionately dedicated to the examination and interpretation of biomedical data, particularly in the context of disease prediction and detection. His academic pursuits involve in-depth exploration of the complexities of biomedical data analysis, with a particular focus on data-driven approaches to disease prediction and identification.

Mohammad Karimi Moridani received a B.S. in electrical engineering-electronics in 2006 and M.S. and Ph.D. degrees in biomedical engineering-bioelectric in 2008 and 2015, respectively. He is currently an assistant professor in the Department of Biomedical Engineering, Tehran Medical Science, Islamic Azad University, Tehran, Iran. His research focuses on biomedical signal and image processing, nonlinear time-series analysis, and cognitive science, with applications ranging from ECG, HRV, and EEG signal processing for disease detection and prediction to seizure prediction, pattern recognition, image processing for face and beauty recognition, and watermarking.

Hanieh Mokhber received a B.S. degree in biomedical engineering-bioelectric from Tehran Medical Science, Islamic Azad University. Her academic efforts involve a meticulous exploration of the complexities of biomedical data analysis, with particular emphasis on data-driven methods for predicting and identifying various diseases.
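The evaluation protocol the abstract describes — four classifiers on 17 features and 520 cases, with accuracy, sensitivity (recall), and specificity derived from the confusion matrix and averaged over repeated random splits — can be sketched as below. The real patient data is not available here, so scikit-learn's `make_classification` stands in for it, 10 iterations replace the paper's 50, and every hyperparameter shown is an assumption rather than the authors' setting.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import confusion_matrix

# Stand-in for the 520-patient, 17-feature dataset described in the abstract.
X, y = make_classification(n_samples=520, n_features=17, random_state=0)

classifiers = {
    "SVM": SVC(),
    "RF": RandomForestClassifier(),
    "MLP": MLPClassifier(max_iter=1000),
    "kNN": KNeighborsClassifier(),
}

n_iters = 10  # the paper reports means over 50 random iterations
results = {}
for name, clf in classifiers.items():
    acc, sens, spec = [], [], []
    for seed in range(n_iters):
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, test_size=0.3, stratify=y, random_state=seed
        )
        y_hat = clf.fit(X_tr, y_tr).predict(X_te)
        tn, fp, fn, tp = confusion_matrix(y_te, y_hat).ravel()
        acc.append((tp + tn) / (tp + tn + fp + fn))
        sens.append(tp / (tp + fn))  # sensitivity = recall
        spec.append(tn / (tn + fp))  # specificity
    results[name] = (np.mean(acc), np.mean(sens), np.mean(spec))
    print(f"{name}: acc={results[name][0]:.3f} "
          f"sens={results[name][1]:.3f} spec={results[name][2]:.3f}")
```

Averaging over many stratified splits, as here, is what lets the paper quote mean ± standard-deviation metrics rather than a single split's score.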
Citations: 0
Journal: Computer Methods in Biomechanics and Biomedical Engineering-Imaging and Visualization