Pub Date: 2023-10-11. DOI: 10.1080/21681163.2023.2264408
Natalia de Sousa Freire, Pedro Paulo de Souza Leão, Leonardo Albuquerque Tiago, Alberto de Almeida Campos Gonçalves, Rafael Albuquerque Pinto, Eulanda Miranda dos Santos, Eduardo Souto
ABSTRACT: Machine learning methods have been extensively employed to predict COVID-19 from chest X-ray images in numerous studies. However, to be truly valuable, a machine learning model must exhibit robustness and provide reliable predictions for diverse populations beyond those represented in its training data. Unfortunately, the assessment of model generalisability is frequently overlooked in the current literature. In this study, we investigate the generalisability of three classification models – ResNet50v2, MobileNetv2, and Swin Transformer – for predicting COVID-19 from chest X-ray images. We adopt three concurrent approaches for evaluation: the internal-and-external validation procedure, lung region cropping, and image enhancement. The results show that the combined approaches allow deep models to achieve similar internal and external generalisation capability.

KEYWORDS: COVID-19; X-ray; machine learning

Disclosure statement: No potential conflict of interest was reported by the author(s).

Notes:
1. https://github.com/dirtmaxim/lungs-finder
2. https://keras.io/examples/vision/swin_transformers/
3. https://www.kaggle.com/c/rsna-pneumonia-detection-challenge
4. https://github.com/agchung/Actualmed-COVID-chestxray-dataset
5. https://github.com/agchung/Figure1-COVID-chestxray-dataset

Funding: The present work is the result of Research and Development (R&D) project 001/2020, signed with the Federal University of Amazonas and FAEPI, Brazil, with funding from Samsung using resources from the Informatics Law for the Western Amazon (Federal Law no. 8.387/1991); its disclosure is in accordance with article 39 of Decree No. 10.521/2020.

Notes on contributors: Natalia de Sousa Freire is currently a Software Engineering student at the Federal University of Amazonas (UFAM).
Her main research interests include machine learning and computer vision.

Pedro Paulo de Souza Leão obtained his Bachelor's degree in Software Engineering from the Federal University of Amazonas (Brazil) in 2023. His main research interest is machine learning.

Leonardo de Albuquerque Tiago is currently pursuing a Bachelor's degree in Software Engineering at the Federal University of Amazonas (Brazil). His main research interests are machine learning and software testing.

Alberto de Almeida Campos Gonçalves received his B.S. degree in Computer Science from the Federal University of Amazonas in 2022. His research interests include machine learning and computer vision.

Rafael Albuquerque Pinto received his B.S. degree in Computer Science from the Federal University of Roraima (UFRR) in 2017 and his M.Sc. degree in Informatics from the Federal University of Amazonas (UFAM) in 2022. He is currently pursuing a Ph.D. degree in Informatics at UFAM, focusing his research on biosignals using machine learning techniques.

Eulanda Miranda dos Santos is an associate professor at the Institute of Computing (IComp), Federal University of Amazonas. She received her B.S. in Informatics from the Federal University of Pará (Brazil), her M.Sc. in Informatics from the Federal University of Paraíba (Brazil), and her Ph.D. in Engineering from the École de Technologie Supérieure, Québec, Canada, in 1999, 2002, and 2008, respectively. Her research interests include pattern recognition, machine learning, and computer vision.

Eduardo Souto received his Ph.D. in Computer Science from the Federal University of Pernambuco (UFPE), Recife, Brazil, in 2007. He is currently an associate professor at the Institute of Computing, Federal University of Amazonas, and leads the Emerging Technologies and Systems Security (ETSS) research group. His research interests include applied machine learning, the Internet of Things, and network security.
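The internal-and-external validation protocol at the core of the study can be sketched as follows. This is a minimal illustration only: `nearest_mean_classifier`, the synthetic arrays, and the split fraction are hypothetical stand-ins for the deep models and chest X-ray datasets used by the authors.

```python
import numpy as np

def nearest_mean_classifier(X_train, y_train):
    """Toy stand-in for a deep model: classify by nearest class mean."""
    mu0 = X_train[y_train == 0].mean(axis=0)
    mu1 = X_train[y_train == 1].mean(axis=0)
    def predict(X):
        d0 = np.linalg.norm(X - mu0, axis=1)
        d1 = np.linalg.norm(X - mu1, axis=1)
        return (d1 < d0).astype(int)
    return predict

def internal_external_eval(fit_fn, X_int, y_int, X_ext, y_ext,
                           test_frac=0.3, seed=0):
    """Train on part of the internal set; report accuracy on the held-out
    internal test split AND on a fully external dataset."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X_int))
    n_test = int(len(X_int) * test_frac)
    test_idx, train_idx = idx[:n_test], idx[n_test:]
    predict = fit_fn(X_int[train_idx], y_int[train_idx])
    acc_int = float((predict(X_int[test_idx]) == y_int[test_idx]).mean())
    acc_ext = float((predict(X_ext) == y_ext).mean())
    return acc_int, acc_ext
```

A model generalises well when `acc_ext` stays close to `acc_int`; the paper's claim is that lung cropping and image enhancement close that gap for the deep models.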
Title: "Analysis of generalizability on predicting COVID-19 from chest X-ray images using pre-trained deep models". Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization.
Pub Date: 2023-10-05. DOI: 10.1080/21681163.2023.2264937
Mohammad Ehsan Farnoodian, Mohammad Karimi Moridani, Hanieh Mokhber
ABSTRACT: Diabetes is a prevalent and costly condition, with early diagnosis pivotal in mitigating its progression and complications. The diagnostic process often contends with data ambiguity and decision uncertainty, adding complexity to achieving definitive outcomes. This study addresses the diabetes diagnostic challenge through data mining and machine learning techniques. It involves training various machine learning algorithms and conducting statistical analysis on a dataset comprising 520 patients, encompassing both normal and diabetic cases, to discern influential features. Incorporating 17 features as classifier inputs, this research evaluates the diagnostic performance of four reputable techniques: support vector machine (SVM), random forest (RF), multi-layer perceptron (MLP), and k-nearest neighbour (kNN). The outcomes underscore the SVM model's superior performance, with accuracy, specificity, and sensitivity values of 98.78±1.96%, 99.28±1.63%, and 97.32±2.45%, respectively, across 50 iterations. The findings establish SVM as the preferred method for diabetes diagnosis. This study highlights the efficacy of data mining and machine learning models in diabetes diagnosis. While these methods exhibit respectable predictive accuracy, their integration with a physician's assessment promises even better patient outcomes.

KEYWORDS: Data mining; diabetes; SVM; detection; prediction

Abbreviations: ANN = Artificial Neural Network; AUC = Area Under Curve; CDC = Centers for Disease Control; CPCSSN = Canadian Primary Care Sentinel Surveillance Network; DT = Decision Tree; FN = False Negative; FP = False Positive; kNN = k-Nearest Neighbour; LDA = Linear Discriminant Analysis; LR = Logistic Regression; ML = Machine Learning; MLP = Multi-Layer Perceptron; NB = Naive Bayesian; PIDD = Pima Indians Diabetes Dataset; RF = Random Forest; ROC = Receiver Operating Characteristic; SVM = Support Vector Machine; TN = True Negative; TP = True Positive; UKPDS = UK Prospective Diabetes Study

Disclosure statement: No potential conflict of interest was reported by the author(s).

Authors' contributions: All authors contributed evenly to the whole work. All authors read and approved the final manuscript.

Availability of data and materials: The data used in this paper are cited throughout the paper.

Ethical approval: This article does not contain any studies with human participants performed by any of the authors.

Funding: No source of funding for this work.

Notes on contributors: Mohammad Ehsan Farnoodian received a B.S. degree in biomedical engineering (bioelectric) from Tehran Medical Sciences, Islamic Azad University, Tehran, Iran, and earned his M.S. degree in biomedical engineering (bioelectric) from the Science and Research Branch, Islamic Azad University, Tehran, Iran, in 2023. He is passionately dedicated to the examination and interpretation of biomedical data, particularly in the context of disease prediction and detection. His academic pursuits involve in-depth exploration of biomedical data analysis.
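The reported accuracy, specificity, and sensitivity values (mean ± std over 50 iterations) rest on standard confusion-matrix definitions, sketched below. The `repeated_holdout` helper and its parameters are illustrative assumptions, not the authors' code.

```python
import numpy as np

def sens_spec_acc(y_true, y_pred):
    """Confusion-matrix summary for a binary classifier (1 = diabetic)."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    sensitivity = tp / (tp + fn)   # recall on the diabetic class
    specificity = tn / (tn + fp)   # recall on the normal class
    accuracy = (tp + tn) / len(y_true)
    return sensitivity, specificity, accuracy

def repeated_holdout(fit_fn, X, y, n_iter=50, test_frac=0.2, seed=0):
    """Mean and std of accuracy over repeated random train/test splits,
    as in the paper's 50-iteration evaluation."""
    rng = np.random.default_rng(seed)
    accs = []
    for _ in range(n_iter):
        idx = rng.permutation(len(X))
        n_test = int(len(X) * test_frac)
        test, train = idx[:n_test], idx[n_test:]
        predict = fit_fn(X[train], y[train])
        accs.append(np.mean(predict(X[test]) == y[test]))
    return float(np.mean(accs)), float(np.std(accs))
```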
Title: "Detection and prediction of diabetes using effective biomarkers".
Pub Date: 2023-10-04. DOI: 10.1080/21681163.2023.2264410
Roberto Mena, Enrique Pelaez, Francis Loayza, Alex Macas, Heydy Franco-Maldonado
ABSTRACT: Brain injuries caused by strokes are one of the leading causes of disability worldwide. Current procedures require a specialised physician to analyse MRI images before diagnosing and deciding on the specific treatment. However, the procedure can be costly and time-consuming. Artificial intelligence techniques are becoming a game-changer for analysing MRI images. This work proposes an end-to-end approach in three stages: pre-processing techniques for normalising the images to the standard MNI space, together with inhomogeneity and bias corrections; lesion segmentation using a CNN trained on cerebrovascular accidents, with feature extraction; and classification to determine the vascular territory within which the lesion occurred. A CLCI-Net was used for stroke segmentation. Four Deep Learning (DL) and four Shallow Machine Learning (ML) architectures were evaluated for localising the strokes' territory. All model architectures were designed, analysed, and compared based on their performance scores, reaching an accuracy of 84% with the DL models and 95% with the Shallow ML models. The proposed methodology may be helpful for rapid and accurate stroke assessment in acute treatment, minimising patient complications.

KEYWORDS: Artificial intelligence; lesion segmentation; MRI preprocessing; stroke assessment

Acknowledgement: We would like to thank Carlos Jimenez, Alisson Constantine and Edwin Valarezo for their helpful contribution in perfecting the text and debugging the scripts.

Disclosure statement: All authors have seen and agreed with the content of the manuscript; there is no financial interest to report or conflict of interest to declare, nor are there funding sources involved. We certify that the submission is original work and is not under review at any other publication.

Notes on contributors: Roberto Alejandro Mena is a graduate student in Computer Science Engineering at Escuela Superior Politécnica del Litoral (ESPOL). Throughout his career, he has played a leading role as a data analyst in various research projects, mainly centred on system development for magnetic resonance imaging (MRI) processing and visualisation.

Dr. Enrique Peláez earned his Ph.D. in Computer Engineering from the University of South Carolina, USA, in 1994. Currently, he is a Professor at ESPOL, where he leads AI research in Computational Intelligence. Over recent years, Dr. Peláez has been engaged in applied research on Parkinson's disease, leveraging machine and deep learning techniques. His academic contributions are showcased in leading publications and forums, with papers presented at several conferences and symposia. Dr. Peláez's work has been published in journals including IEEE titles and Nature Communications. His research topics encompass EEG signal classification, deep learning for medical imaging, and behavioural signal processing using AI.

Dr. Francis Loayza is a full-time professor in the Faculty of Mechanical Engineering (FIMCP) at ESPOL. He received his Ph.D. in Neuroscience from the University of Navarra, Spain, in 2010. Dr. Loayza has deep expertise in image data analysis, using statistical methods such as functional MRI and voxel-based morphometry. In addition, his application of machine and deep learning methods has contributed to the growth of knowledge on neurodegenerative diseases.

Alex Macas Alcocer is a graduate student in Computer Science Engineering at Escuela Superior Politécnica del Litoral (ESPOL). He has worked as a data scientist, analysing magnetic resonance images with artificial intelligence techniques, as well as in web development.

Dr. Heydy Franco Maldonado is a distinguished specialist in radiology (Imagenología), trained at the University of Cuenca. She pursued a specialisation in magnetic resonance at the National Autonomous University of Mexico (UNAM) and then obtained a diploma in breast pathology imaging from the University of Barcelona. She currently works as a medical radiologist at Luis Vernaza Hospital in Guayaquil and at SOLCA, Guayaquil, Ecuador. Beyond her clinical role, Dr. Maldonado is an active member of the AI research group at ESPOL and, together with Luis Vernaza Hospital, coordinates the postgraduate programme in Imagenología at Espíritu Santo University. She is also a recognised speaker for Bayer.
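The final stage, assigning a segmented lesion to a vascular territory, could in principle be approximated by a nearest-centroid rule over an atlas, as in this hypothetical sketch. The `TERRITORY_CENTROIDS` coordinates are invented for illustration, and the paper's actual DL and shallow ML classifiers are far richer than this rule.

```python
import numpy as np

# Hypothetical atlas: territory name -> centroid in voxel coordinates
# (made-up values; a real atlas would be defined in MNI space).
TERRITORY_CENTROIDS = {
    "MCA": np.array([40.0, 20.0, 30.0]),
    "ACA": np.array([10.0, 60.0, 40.0]),
    "PCA": np.array([30.0, 5.0, 10.0]),
}

def lesion_centroid(mask):
    """Centroid of a binary 3-D lesion mask, in voxel coordinates."""
    coords = np.argwhere(mask)
    return coords.mean(axis=0)

def classify_territory(mask):
    """Assign the lesion to the territory with the nearest centroid."""
    c = lesion_centroid(mask)
    return min(TERRITORY_CENTROIDS,
               key=lambda t: np.linalg.norm(TERRITORY_CENTROIDS[t] - c))
```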
Title: "An artificial intelligence approach for segmenting and classifying brain lesions caused by stroke".
Pub Date: 2023-09-30. DOI: 10.1080/21681163.2023.2261575
Amani Al-Ghraibah, Muneera Altayeb, Feras A. Alnaimat
ABSTRACT: Recently, huge concerns have been raised about diagnosing chest diseases, especially after the COVID-19 pandemic. Regular diagnostic processes sometimes fail to distinguish between Corona and Viral Pneumonia through Polymerase Chain Reaction (PCR) tests, which are time-consuming and require convoluted manual procedures. Artificial Intelligence (AI) techniques have achieved high performance in aiding medical diagnostic processes. The innovation of this work lies in using a new diagnostic technique to distinguish between COVID-19 and Viral Pneumonia using advanced AI technologies. This is done by extracting novel features from chest X-ray images based on Wavelet analysis, Scale Invariant Feature Transformation (SIFT), and the Mel Frequency Cepstral Coefficient (MFCC). Support vector machines (SVM) and artificial neural networks (ANN) were used to build classification algorithms on 1200 chest X-ray images for each case. Using Wavelet features, the SVM and ANN models were about 97% accurate, and with SIFT features, they were closer to 99%. The proposed models were very effective at identifying COVID-19 and Viral Pneumonia, so physicians can determine the best treatment course for patients with the support of this high accuracy. Moreover, this model can be used in hospitals and emergency rooms when a massive number of patients are waiting, as it is faster and more accurate than the regular diagnostic processes, with each step taking a few seconds on average to complete.

KEYWORDS: Chest X-ray images; feature extraction; SVM; image classification

Disclosure statement: No potential conflict of interest was reported by the author(s).
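A minimal sketch of Haar wavelet feature extraction of the kind the abstract describes, assuming simple mean/std statistics of the detail sub-bands as the feature vector; the authors' exact feature set is not specified here.

```python
import numpy as np

def haar_level(img):
    """One level of a 2-D Haar wavelet transform (image sides must be even).
    Returns approximation (LL) and detail (LH, HL, HH) sub-bands."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0  # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0  # row details
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def wavelet_features(img, levels=2):
    """Mean and std of each detail sub-band across levels, as a feature vector
    to feed an SVM or ANN classifier."""
    feats = []
    cur = img.astype(float)
    for _ in range(levels):
        cur, lh, hl, hh = haar_level(cur)
        for band in (lh, hl, hh):
            feats += [band.mean(), band.std()]
    return np.array(feats)
```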
Title: "An automated system to distinguish between Corona and Viral Pneumonia chest diseases based on image processing techniques".
It is necessary to obtain gene expression values to identify gene biomarkers involved in all types of cancer, and microarray data are among the best sources for this purpose. However, extracting gene expression values from microarray images poses several challenges. This article presents a completely automatic and comprehensive method that can deal with the various challenges in these images and obtain gene expression values with high accuracy. A pre-processing approach is proposed for contrast enhancement using a genetic algorithm, and for removing noise and artefacts in microarray cells using a wavelet transform based on a complex Gaussian scaling model. For each spot, the coordinate centre is determined using Self-Organising Maps. Then, using a new hybrid model based on the Fuzzy Local Information Gaussian Mixture Model (FLIGMM), the position of each spot is accurately determined. In this model, various features are obtained from local information about pixels, taking into account the pixel neighbourhood correlation coefficient. Finally, the gene expression values are obtained. The performance of the proposed algorithm was evaluated using real microarray images of cervical cancer from the GMRCL microarray dataset as well as simulated images. The results show that the proposed algorithm achieves 90.91% and 98% accuracy in segmenting microarray spots for noiseless and noisy spots, respectively.
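Once a spot has been segmented, a gene expression value is conventionally derived from background-corrected spot intensities. A minimal sketch under that assumption follows; the function names and the two-channel log-ratio convention are illustrative, not the paper's exact formulation.

```python
import numpy as np

def spot_expression(cell, spot_mask):
    """Expression of one microarray cell: mean foreground intensity inside the
    segmented spot minus the median of the surrounding background."""
    fg = cell[spot_mask]
    bg = cell[~spot_mask]
    return float(fg.mean() - np.median(bg))

def log_ratio(red_cell, green_cell, spot_mask):
    """log2 ratio of background-corrected red/green channel intensities,
    the usual summary for two-channel cDNA microarrays."""
    r = spot_expression(red_cell, spot_mask)
    g = spot_expression(green_cell, spot_mask)
    # clamp to a small positive value to avoid log of non-positive numbers
    return float(np.log2(max(r, 1e-6) / max(g, 1e-6)))
```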
Title: "Gene expression extraction in cervical cancer by segmentation of microarray images using a novel fuzzy method". Authors: Nayyer Mostaghim Bakhshayesh, Mousa Shamsi, Faegheh Golabi. Pub Date: 2023-09-30. DOI: 10.1080/21681163.2023.2261555
Pub Date : 2023-09-30DOI: 10.1080/21681163.2023.2258998
Shabnam Ghasemi, Shahin Akbarpour, Ali Farzan, Mohammad Ali Jamali
Lung cancer is a leading cause of cancer-related deaths. Computer-aided detection (CAD) has emerged as a valuable tool to assist radiologists in the automated detection and segmentation of pulmonary nodules in Computed Tomography (CT) scans, which can indicate early-stage lung cancer. However, detecting small nodules remains challenging. This paper proposes novel techniques to address this challenge, achieving high sensitivity and low false-positive nodule identification using the RePoint-Net detection network. Additionally, 3DSqU2 Net, a novel nodule segmentation approach incorporating full-scale skip connections and deep supervision, is introduced. A 3DCNN model is employed for nodule candidate classification, generating final classification results by combining the outputs of the previous steps. Extensive training and testing on the public LIDC/IDRI lung CT dataset validate the proposed model, demonstrating its superiority over human specialists with a remarkable 97.4% sensitivity in identifying nodule candidates. Moreover, CT texture analysis accurately differentiates between malignant and benign pulmonary nodules owing to its ability to capture subtle differences in tissue characteristics. This approach achieves a 95.8% sensitivity in nodule classification, promising non-invasive support for clinical decision-making in managing pulmonary nodules and improving patient outcomes.
{"title":"RePoint-Net detection and 3DSqU² Net segmentation for automatic identification of pulmonary nodules in computed tomography images","authors":"Shabnam Ghasemi, Shahin Akbarpour, Ali Farzan, Mohammad Ali Jamali","doi":"10.1080/21681163.2023.2258998","DOIUrl":"https://doi.org/10.1080/21681163.2023.2258998","url":null,"abstract":"Lung cancer is a leading cause of cancer-related deaths. Computer-aided detection (CAD) has emerged as a valuable tool to assist radiologists in the automated detection and segmentation of pulmonary nodules in Computed Tomography (CT) scans, which can indicate early-stage lung cancer. However, detecting small nodules remains challenging. This paper proposes novel techniques to address this challenge, achieving high sensitivity and low false-positive nodule identification using the RePoint-Net detection network. Additionally, 3DSqU2 Net, a novel nodule segmentation approach incorporating full-scale skip connections and deep supervision, is introduced. A 3DCNN model is employed for nodule candidate classification, generating final classification results by combining the outputs of the previous steps. Extensive training and testing on the public LIDC/IDRI lung CT dataset validate the proposed model, demonstrating its superiority over human specialists with a remarkable 97.4% sensitivity in identifying nodule candidates. Moreover, CT texture analysis accurately differentiates between malignant and benign pulmonary nodules owing to its ability to capture subtle differences in tissue characteristics. 
This approach achieves a 95.8% sensitivity in nodule classification, promising non-invasive support for clinical decision-making in managing pulmonary nodules and improving patient outcomes.","PeriodicalId":51800,"journal":{"name":"Computer Methods in Biomechanics and Biomedical Engineering-Imaging and Visualization","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136279919","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
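The CT texture analysis mentioned above typically relies on grey-level co-occurrence statistics. The sketch below computes a co-occurrence matrix and two classic features (contrast and homogeneity) on toy patches; the patch values, the quantisation to four grey levels, and the single horizontal offset are illustrative choices, not the paper's exact feature set.

```python
# Minimal grey-level co-occurrence matrix (GLCM) texture features of the
# kind used to separate malignant from benign nodules on CT. The 4x4 toy
# "patches" are illustrative; real pipelines quantise HU values and
# average the features over several pixel offsets.

def glcm(patch, dx=1, dy=0, levels=4):
    """Normalised co-occurrence counts for pixel pairs separated by (dx, dy)."""
    h, w = len(patch), len(patch[0])
    counts = [[0] * levels for _ in range(levels)]
    for y in range(h):
        for x in range(w):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                counts[patch[y][x]][patch[ny][nx]] += 1
    total = sum(map(sum, counts))
    return [[c / total for c in row] for row in counts]

def contrast(p):
    """Sum of p(i,j) * (i-j)^2: high for rough textures."""
    return sum(p[i][j] * (i - j) ** 2
               for i in range(len(p)) for j in range(len(p)))

def homogeneity(p):
    """Sum of p(i,j) / (1 + |i-j|): high for smooth textures."""
    return sum(p[i][j] / (1 + abs(i - j))
               for i in range(len(p)) for j in range(len(p)))

smooth = [[1, 1, 1, 1]] * 4          # uniform patch: low contrast
rough  = [[0, 3, 0, 3],
          [3, 0, 3, 0],
          [0, 3, 0, 3],
          [3, 0, 3, 0]]              # checkerboard: high contrast

p_smooth, p_rough = glcm(smooth), glcm(rough)
```

A uniform patch yields zero contrast and maximal homogeneity while the checkerboard does the opposite, which is the kind of difference a texture-based nodule classifier can exploit.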
Pub Date : 2023-09-23DOI: 10.1080/21681163.2023.2261554
Yeqiang Luo, Jing Liang, Shanghui Lin, Tianmo Bai, Lingchuang Kong, Yan Jin, Xin Zhang, Baofeng Li, Bei Chen
ABSTRACTDeep learning is a powerful branch of machine learning and presents a promising new approach for diagnosing diseases. However, existing deep learning work on the anterior cruciate ligament (ACL) is still limited to assessing whether an injury is present; model accuracy is not high, and the models have complex parameters. In this study, we developed a deep learning model based on ResNet-18 to detect ACL conditions. The results suggest that there is no significant difference between our proposed model and two orthopaedic surgeons and radiologists in diagnosing ACL conditions.KEYWORDS: Deep-learning; machine-learning; automated model; anterior cruciate ligament Disclosure statementThe authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.Data availability statementThis study used the MRNet dataset gathered from Stanford University Medical Center. The dataset is available online and may be used by anyone.
{"title":"The application of deep learning methods in knee joint sports injury diseases","authors":"Yeqiang Luo, Jing Liang, Shanghui Lin, Tianmo Bai, Lingchuang Kong, Yan Jin, Xin Zhang, Baofeng Li, Bei Chen","doi":"10.1080/21681163.2023.2261554","DOIUrl":"https://doi.org/10.1080/21681163.2023.2261554","url":null,"abstract":"ABSTRACTDeep learning is a powerful branch of machine learning and presents a promising new approach for diagnosing diseases. However, existing deep learning work on the anterior cruciate ligament (ACL) is still limited to assessing whether an injury is present; model accuracy is not high, and the models have complex parameters. In this study, we developed a deep learning model based on ResNet-18 to detect ACL conditions. The results suggest that there is no significant difference between our proposed model and two orthopaedic surgeons and radiologists in diagnosing ACL conditions.KEYWORDS: Deep-learning; machine-learning; automated model; anterior cruciate ligament Disclosure statementThe authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.Data availability statementThis study used the MRNet dataset gathered from Stanford University Medical Center. 
The dataset is available online and may be used by anyone.","PeriodicalId":51800,"journal":{"name":"Computer Methods in Biomechanics and Biomedical Engineering-Imaging and Visualization","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135966897","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2023-09-21DOI: 10.1080/21681163.2023.2261557
Haobo Yu, Adam Bartlett, Harvey Ho
The liver, the largest solid organ in the body, can be anatomically divided into segments. In this work we present a web-based, subject-specific human liver atlas based on the Couinaud segments simulated from portal venous (PV) perfusion zones, hepatic arterial (HA) and hepatic venous (HV) trees, as well as biliary drainage. The purpose of the atlas is to provide the modelling community with freely accessible 3D hepatic structures for in silico simulations, which are of tremendous value in yielding novel insights into hepatic circulation, drug transport and clearance.
{"title":"A web-based human liver atlas","authors":"Haobo Yu, Adam Bartlett, Harvey Ho","doi":"10.1080/21681163.2023.2261557","DOIUrl":"https://doi.org/10.1080/21681163.2023.2261557","url":null,"abstract":"The liver, the largest solid organ in the body, can be anatomically divided into segments. In this work we present a web-based, subject-specific human liver atlas based on the Couinaud segments simulated from portal venous (PV) perfusion zones, hepatic arterial (HA) and hepatic venous (HV) trees, as well as biliary drainage. The purpose of the atlas is to provide the modelling community with freely accessible 3D hepatic structures for in silico simulations, which are of tremendous value in yielding novel insights into hepatic circulation, drug transport and clearance.","PeriodicalId":51800,"journal":{"name":"Computer Methods in Biomechanics and Biomedical Engineering-Imaging and Visualization","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136130522","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2023-09-19DOI: 10.1080/21681163.2023.2255684
Jampani Ravi, R. Narmadha
ABSTRACTMultimodal Medical Image Fusion (MMIF) is affected by poor image quality, which leads to the extraction of ineffective features. The main intent of this work is to fuse the various planes of PET and MRI medical images efficiently using the MMIF approach. Initially, sample images containing the axial plane of PET and MRI images are aggregated from standard datasets. The collected images then undergo a decomposition process, which is accomplished via the Optimal Non-Subsampled Contourlet Transform (ONSCT). The parameters of the NSCT are optimised using the Modified Water Strider Algorithm (MWSA). Once the images are decomposed, each is separated into high-frequency and low-frequency sub-bands. The high-frequency sub-bands of the PET and MRI images are then fused using optimal weighted average fusion, in which the weight factor is obtained optimally by the MWSA. Similarly, the low-frequency sub-bands of both medical images are combined by a sparse fusion technique. Finally, the resultant fused images are subjected to the Inverse Non-Subsampled Contourlet Transform (INSCT) to obtain the desired fused images. The experimental findings suggest that the proposed model fuses the images effectively and also enhances the similarity score across axial planes.KEYWORDS: Medical image fusion; modified water strider algorithm; magnetic resonance imaging; optimal non-subsampled contourlet transform; optimal weighted average fusion Disclosure statementNo potential conflict of interest was reported by the author(s).
{"title":"Multimodality medical image fusion analysis with multi-plane features of PET and MRI images using ONSCT","authors":"Jampani Ravi, R. Narmadha","doi":"10.1080/21681163.2023.2255684","DOIUrl":"https://doi.org/10.1080/21681163.2023.2255684","url":null,"abstract":"ABSTRACTMultimodal Medical Image Fusion (MMIF) is affected by poor image quality, which leads to the extraction of ineffective features. The main intent of this work is to fuse the various planes of PET and MRI medical images efficiently using the MMIF approach. Initially, sample images containing the axial plane of PET and MRI images are aggregated from standard datasets. The collected images then undergo a decomposition process, which is accomplished via the Optimal Non-Subsampled Contourlet Transform (ONSCT). The parameters of the NSCT are optimised using the Modified Water Strider Algorithm (MWSA). Once the images are decomposed, each is separated into high-frequency and low-frequency sub-bands. The high-frequency sub-bands of the PET and MRI images are then fused using optimal weighted average fusion, in which the weight factor is obtained optimally by the MWSA. Similarly, the low-frequency sub-bands of both medical images are combined by a sparse fusion technique. Finally, the resultant fused images are subjected to the Inverse Non-Subsampled Contourlet Transform (INSCT) to obtain the desired fused images. 
The experimental findings suggest that the proposed model fuses the images effectively and also enhances the similarity score across axial planes.KEYWORDS: Medical image fusion; modified water strider algorithm; magnetic resonance imaging; optimal non-subsampled contourlet transform; optimal weighted average fusion Disclosure statementNo potential conflict of interest was reported by the author(s).","PeriodicalId":51800,"journal":{"name":"Computer Methods in Biomechanics and Biomedical Engineering-Imaging and Visualization","volume":"96 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135059109","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
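The core fusion rule for the high-frequency sub-bands is an element-wise weighted average. In the paper the weight is tuned by the MWSA; the sketch below simply applies a fixed illustrative weight to toy 2×2 coefficient maps, so the weight value, variable names, and data are assumptions.

```python
# Weighted-average fusion of two high-frequency sub-bands, the fusion rule
# applied after NSCT decomposition in the pipeline above. The weight `w`
# would be optimised (by MWSA in the paper); here it is a fixed example
# value, and the sub-bands are toy 2x2 coefficient maps.

def weighted_fusion(band_a, band_b, w=0.6):
    """Fuse coefficient maps element-wise: w*A + (1-w)*B."""
    return [[w * a + (1 - w) * b for a, b in zip(ra, rb)]
            for ra, rb in zip(band_a, band_b)]

pet_hf = [[4.0, 0.0], [2.0, 6.0]]   # high-frequency coefficients (PET)
mri_hf = [[1.0, 5.0], [2.0, 1.0]]   # high-frequency coefficients (MRI)
fused = weighted_fusion(pet_hf, mri_hf)
```

The low-frequency sub-bands would instead go through the sparse fusion step, and the inverse transform would then reconstruct the fused image from both fused sub-bands.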
Pub Date : 2023-09-14DOI: 10.1080/21681163.2023.2256896
Ahmet Taş, Yaren Alan, Ilke Kara, Abdullah Savas, Muhammed Ikbal Bayhan, Diren Ekici, Zeynep Atay, Fatih Sezer, Cagla Kitapli, Sabahattin Umman, Murat Sezer
ABSTRACTIndices derived from photoplethysmography (PPG) have shown promising results as non-invasive digital biomarkers for the detection of diabetes mellitus (DM). Considering the shared endothelial insult leading to similar undesirable peripheral haemodynamic perturbations, hypertension (HT) may blunt this classification performance. Second-derivative PPG (SD-PPG) indices were derived from the second derivative of the PPG signal. The variables of interest were the previously described peaks of the initial positive (a), early negative (b), re-increasing (c), late re-decreasing (d), diastolic positive (e) and negative (f) waves and the ratios between them. Patients were classified according to their type 2 DM and hypertension phenotypes. SD-PPG indices were compared between diseased subgroups and healthy controls, and dichotomous classification performance was also evaluated. Two SD-PPG indices, the b/a ratio and the vascular ageing index (VAI = (b-c-d-e)/a), responded to isolated type 2 DM (n = 29) amongst healthy subjects (n = 106) (area under the curve (AUC) = 0.629, p = 0.034 and AUC = 0.631, p = 0.031, respectively). However, the classification performance became non-significant with the inclusion of HT patients (n = 30) (p = 0.839 vs. p = 0.656). These results suggest that the coexistence of HT and DM may hinder the use of SD-PPG for non-invasive DM detection.KEYWORDS: Second-derivative photoplethysmography; diabetes; non-invasive cardiovascular screening; fingertip waveforms; hypertension Abbreviations: Body Mass Index = BMI; Diabetes Mellitus = DM; Diabetes Mellitus Type-2 = DM2; Diastolic Blood Pressure = DBP; Hypertension = HT; Photoplethysmography = PPG; Second derivative of Photoplethysmography = SD-PPG; Vascular Ageing Index = VAI; Systolic Blood Pressure = SBP Disclosure statementNo potential conflict of interest was reported by the authors.Author contributionsStudy conception and design were by Ahmet Tas. 
All authors (Ahmet Tas, Yaren Alan, Ilke Kara, Abdullah Savas, Muhammed Ikbal Bayhan, Diren Ekici, Zeynep Atay, Fatih Sezer, Cagla Kitapli, Sabahattin Umman, Murat Sezer) contributed to material preparation, data collection and analysis (signal and/or statistical and/or intellectual) and interpretation of results. The first draft of the manuscript was written by Ahmet Tas, and all authors commented on previous versions of the manuscript. All authors read and approved the final manuscript.Ethics approvalThe data were retrieved from an open dataset published in Nature Scientific Data (https://www.nature.com/articles/sdata201820#). The data collection had ethical approval, as noted in the data description article, and all participants gave written consent per the open-data description article.
All authors meet the ICMJE criteria for authorship. Funding: the authors report that there is no funding associated with the work featured in this article.
{"title":"Presence of hypertension might pose a potential pitfall in detection of diabetes mellitus non-invasively using the second derivative of photoplethysmography","authors":"Ahmet Taş, Yaren Alan, Ilke Kara, Abdullah Savas, Muhammed Ikbal Bayhan, Diren Ekici, Zeynep Atay, Fatih Sezer, Cagla Kitapli, Sabahattin Umman, Murat Sezer","doi":"10.1080/21681163.2023.2256896","DOIUrl":"https://doi.org/10.1080/21681163.2023.2256896","url":null,"abstract":"ABSTRACTIndices derived from photoplethysmography (PPG) have shown promising results as non-invasive digital biomarkers for the detection of diabetes mellitus (DM). Considering the shared endothelial insult leading to similar undesirable peripheral haemodynamic perturbations, hypertension (HT) may blunt this classification performance. Second-derivative PPG (SD-PPG) indices were derived from the second derivative of the PPG signal. The variables of interest were the previously described peaks of the initial positive (a), early negative (b), re-increasing (c), late re-decreasing (d), diastolic positive (e) and negative (f) waves and the ratios between them. Patients were classified according to their type 2 DM and hypertension phenotypes. SD-PPG indices were compared between diseased subgroups and healthy controls, and dichotomous classification performance was also evaluated. Two SD-PPG indices, the b/a ratio and the vascular ageing index (VAI = (b-c-d-e)/a), responded to isolated type 2 DM (n = 29) amongst healthy subjects (n = 106) (area under the curve (AUC) = 0.629, p = 0.034 and AUC = 0.631, p = 0.031, respectively). However, the classification performance became non-significant with the inclusion of HT patients (n = 30) (p = 0.839 vs. p = 0.656). 
These results suggest that the coexistence of HT and DM may hinder the use of SD-PPG for non-invasive DM detection.KEYWORDS: Second-derivative photoplethysmography; diabetes; non-invasive cardiovascular screening; fingertip waveforms; hypertension Abbreviations: Body Mass Index = BMI; Diabetes Mellitus = DM; Diabetes Mellitus Type-2 = DM2; Diastolic Blood Pressure = DBP; Hypertension = HT; Photoplethysmography = PPG; Second derivative of Photoplethysmography = SD-PPG; Vascular Ageing Index = VAI; Systolic Blood Pressure = SBP Disclosure statementNo potential conflict of interest was reported by the authors.Author contributionsStudy conception and design were by Ahmet Tas. All authors (Ahmet Tas, Yaren Alan, Ilke Kara, Abdullah Savas, Muhammed Ikbal Bayhan, Diren Ekici, Zeynep Atay, Fatih Sezer, Cagla Kitapli, Sabahattin Umman, Murat Sezer) contributed to material preparation, data collection and analysis (signal and/or statistical and/or intellectual) and interpretation of results. The first draft of the manuscript was written by Ahmet Tas, and all authors commented on previous versions of the manuscript. All authors read and approved the final manuscript.Ethics approvalThe data were retrieved from an open dataset published in Nature Scientific Data (https://www.nature.com/articles/sdata201820#). 
The data collection had ethical approval denoted in data descripting article and all participants have given written consent per open-data descripting a","PeriodicalId":51800,"journal":{"name":"Computer Methods in Biomechanics and Biomedical Engineering-Imaging and Visualization","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134969928","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
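The SD-PPG indices in the abstract above reduce to a second derivative of the waveform plus ratios of wave amplitudes. The sketch below uses a central-difference stencil and hand-picked a-e amplitudes; the synthetic signal and the amplitude values are illustrative assumptions, since real pipelines detect the a-f waves automatically from fingertip recordings.

```python
# Second derivative of a PPG-like waveform by central differences, plus the
# b/a ratio and vascular ageing index VAI = (b - c - d - e) / a computed
# from given peak amplitudes. Waveform and amplitudes are illustrative.

def second_derivative(signal, dt=1.0):
    """Central-difference second derivative; the two endpoints are dropped."""
    return [(signal[i - 1] - 2 * signal[i] + signal[i + 1]) / dt ** 2
            for i in range(1, len(signal) - 1)]

def vai(a, b, c, d, e):
    """Vascular ageing index from SD-PPG wave amplitudes."""
    return (b - c - d - e) / a

# A parabola has a constant second derivative, which checks the stencil:
samples = [t * t for t in range(6)]          # 0, 1, 4, 9, 16, 25
sdppg = second_derivative(samples)           # each element equals 2.0

ba_ratio = -0.8 / 1.0                        # b/a with illustrative amplitudes
ageing = vai(a=1.0, b=-0.8, c=0.1, d=-0.2, e=0.15)
```

In practice the b/a ratio is negative (the b wave points downward), and more negative VAI values are associated with stiffer vessels, which is why these indices respond to diabetic and hypertensive vascular change.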