
Latest publications in Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization

Analysis of generalizability on predicting COVID-19 from chest X-ray images using pre-trained deep models
Q4 ENGINEERING, BIOMEDICAL Pub Date: 2023-10-11 DOI: 10.1080/21681163.2023.2264408
Natalia de Sousa Freire, Pedro Paulo de Souza Leão, Leonardo Albuquerque Tiago, Alberto de Almeida Campos Gonçalves, Rafael Albuquerque Pinto, Eulanda Miranda dos Santos, Eduardo Souto
ABSTRACT: Machine learning methods have been extensively employed to predict COVID-19 from chest X-ray images in numerous studies. However, to be truly valuable, a machine learning model must be robust and provide reliable predictions for diverse populations beyond those represented in its training data. Unfortunately, the assessment of model generalisability is frequently overlooked in the current literature. In this study, we investigate the generalisability of three classification models – ResNet50v2, MobileNetv2, and Swin Transformer – for predicting COVID-19 from chest X-ray images. We adopt three concurrent approaches for evaluation: the internal-and-external validation procedure, lung region cropping, and image enhancement. The results show that the combined approaches allow the deep models to achieve similar internal and external generalisation capability.

KEYWORDS: COVID-19; X-ray; machine learning

Disclosure statement: No potential conflict of interest was reported by the author(s).

Notes:
1. https://github.com/dirtmaxim/lungs-finder
2. https://keras.io/examples/vision/swin_transformers/
3. https://www.kaggle.com/c/rsna-pneumonia-detection-challenge
4. https://github.com/agchung/Actualmed-COVID-chestxray-dataset
5. https://github.com/agchung/Figure1-COVID-chestxray-dataset

Funding: The present work is the result of the Research and Development (R&D) project 001/2020, signed with the Federal University of Amazonas and FAEPI, Brazil, with funding from Samsung, using resources from the Informatics Law for the Western Amazon (Federal Law No. 8.387/1991); its disclosure is in accordance with Article 39 of Decree No. 10.521/2020.

Notes on contributors:
Natalia de Sousa Freire is currently a Software Engineering student at the Federal University of Amazonas (UFAM). Her main research interests include machine learning and computer vision.
Pedro Paulo de Souza Leão obtained his Bachelor's degree in Software Engineering from the Federal University of Amazonas (Brazil) in 2023. His main research interest is machine learning.
Leonardo de Albuquerque Tiago is currently pursuing a Bachelor's degree in Software Engineering at the Federal University of Amazonas (Brazil). His main research interests are machine learning and software testing.
Alberto de Almeida Campos Gonçalves received his B.S. degree in Computer Science from the Federal University of Amazonas in 2022. His research interests include machine learning and computer vision.
Rafael Albuquerque Pinto received his B.S. degree in Computer Science from the Federal University of Roraima (UFRR) in 2017 and his M.Sc. degree in Informatics from the Federal University of Amazonas (UFAM) in 2022. He is currently pursuing a Ph.D. degree in Informatics at UFAM, focusing his research on biosignals using machine learning techniques.
Eulanda Miranda dos Santos is an associate professor at the Institute of Computing (IComp), Federal University of Amazonas. She received her B.S. in Informatics from the Federal University of Pará (Brazil), her M.Sc. in Informatics from the Federal University of Paraíba (Brazil), and her Ph.D. in Engineering from the École de Technologie Supérieure, Université du Québec (Canada), in 1999, 2002, and 2008, respectively. Her research interests include pattern recognition, machine learning, and computer vision.
Eduardo Souto received his Ph.D. in Computer Science from the Federal University of Pernambuco (UFPE), Recife, Brazil, in 2007. He is currently an associate professor at the Institute of Computing, Federal University of Amazonas, where he also leads the Emerging Technologies and Systems Security (ETSS) research group. His research interests include applied machine learning, the Internet of Things, and network security.
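The internal-and-external validation procedure described in the abstract can be illustrated with a small sketch: a classifier is trained on one data source, then evaluated on a held-out split of that same source (internal) and on data from a different source (external). The two synthetic "sites", the 0.3 covariate shift, and the logistic-regression stand-in for the pre-trained deep models are all illustrative assumptions, not the authors' code.

```python
# Illustrative internal-and-external validation: train on site A,
# evaluate on an A hold-out (internal) and on site B (external).
# Logistic regression stands in for the pre-trained deep models.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def make_site(n, shift):
    """Two-class synthetic features; `shift` mimics a site-specific distribution change."""
    y = rng.integers(0, 2, n)
    X = rng.normal(0.0, 1.0, (n, 10)) + 1.5 * y[:, None] + shift
    return X, y

X_a, y_a = make_site(400, shift=0.0)   # "internal" site
X_b, y_b = make_site(400, shift=0.3)   # "external" site with covariate shift

X_tr, X_int, y_tr, y_int = train_test_split(X_a, y_a, test_size=0.25, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

acc_internal = accuracy_score(y_int, clf.predict(X_int))  # same-site hold-out
acc_external = accuracy_score(y_b, clf.predict(X_b))      # unseen site
print(f"internal: {acc_internal:.3f}  external: {acc_external:.3f}")
```

A generalisation gap shows up as `acc_external` falling below `acc_internal`; the lung cropping and image enhancement steps in the paper aim to shrink that gap.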
{"title":"Analysis of generalizability on predicting COVID-19 from chest X-ray images using pre-trained deep models","authors":"Natalia de Sousa Freire, Pedro Paulo de Souza Leo, Leonardo Albuquerque Tiago, Alberto de Almeida Campos Gonalves, Rafael Albuquerque Pinto, Eulanda Miranda dos Santos, Eduardo Souto","doi":"10.1080/21681163.2023.2264408","DOIUrl":"https://doi.org/10.1080/21681163.2023.2264408","url":null,"abstract":"ABSTRACTMachine learning methods have been extensively employed to predict COVID-19 using chest X-ray images in numerous studies. However, a machine learning model must exhibit robustness and provide reliable predictions for diverse populations, beyond those used in its training data, to be truly valuable. Unfortunately, the assessment of model generalisability is frequently overlooked in current literature. In this study, we investigate the generalisability of three classification models – ResNet50v2, MobileNetv2, and Swin Transformer – for predicting COVID-19 using chest X-ray images. We adopt three concurrent approaches for evaluation: the internal-and-external validation procedure, lung region cropping, and image enhancement. The results show that the combined approaches allow deep models to achieve similar internal and external generalisation capability.KEYWORDS: COVID-19X-raymachine learning Disclosure statementNo potential conflict of interest was reported by the author(s).Notes1. https://github.com/dirtmaxim/lungs-finder2. https://keras.io/examples/vision/swin_transformers/3. https://www.kaggle.com/c/rsna-pneumonia-detection-challenge4. https://github.com/agchung/Actualmed-COVID-chestxray-dataset5. 
Figure 1-COVID-chestxray-datasethttps://github.com/agchung/Figure 1-COVID-chestxray-datasetAdditional informationFundingThe present work is the result of the Research and Development (R&D) project 001/2020, signed with Federal University of Amazonas and FAEPI, Brazil, which has funding from Samsung, using resources from the Informatics Law for the Western Amazon (Federal Law no 8.387/1991), and its disclosure is in accordance with article 39 of Decree No. 10.521/2020.Notes on contributorsNatalia de Sousa FreireNatalia de Sousa Freire is currently a Software Engineering student at the Federal University of Amazonas (UFAM). His main research interests include the areas of machine learning and computer vision.Pedro Paulo de Souza LeoPedro Paulo de Souza Leão obtained his Bachelor's degree in Software Engineering from the Federal University of Amazonas (Brazil) in 2023. His main research interest is machine learning.Leonardo Albuquerque TiagoLeonardo de Albuquerque Tiago is currently pursuing a Bachelor's degree in Software Engineering at Federal University of Amazonas (Brazil). His main research interests are machine learning and software testing.Alberto de Almeida Campos GonalvesAlberto de Almeida Campos Gonçalves received his B.S. degree in Computer Science from the Federal University of Amazonas in 2022. His research interests include the areas of machine learning and computer vision.Rafael Albuquerque PintoRafael Albuquerque Pinto received his B.S. degree in Computer Science from the Federal University of Roraima (UFRR) in 2017 and his M.Sc. degree in Informatics from the Federal University of Amazonas (UFAM) in 2022. He is currently pursuing a Ph.D. 
degree in Informatics at UFAM, focusing his research on biosignals using machine learning tech","PeriodicalId":51800,"journal":{"name":"Computer Methods in Biomechanics and Biomedical Engineering-Imaging and Visualization","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136063089","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Detection and prediction of diabetes using effective biomarkers
Q4 ENGINEERING, BIOMEDICAL Pub Date: 2023-10-05 DOI: 10.1080/21681163.2023.2264937
Mohammad Ehsan Farnoodian, Mohammad Karimi Moridani, Hanieh Mokhber
ABSTRACT: Diabetes is a prevalent and costly condition, and early diagnosis is pivotal in mitigating its progression and complications. The diagnostic process often contends with data ambiguity and decision uncertainty, adding complexity to achieving definitive outcomes. This study addresses the diabetes diagnostic challenge through data mining and machine learning techniques. It involves training various machine learning algorithms and conducting statistical analysis on a dataset comprising 520 patients, encompassing both normal and diabetic cases, to discern influential features. Incorporating 17 features as classifier inputs, this research evaluates the diagnostic performance of four established techniques: support vector machine (SVM), random forest (RF), multi-layer perceptron (MLP), and k-nearest neighbour (kNN). The outcomes underscore the SVM model's superior performance, with accuracy, specificity, and sensitivity values of 98.78±1.96%, 99.28±1.63%, and 97.32±2.45%, respectively, across 50 iterations. The findings establish SVM as the preferred method for diabetes diagnosis. This study highlights the efficacy of data mining and machine learning models in diabetes diagnosis. While these methods exhibit respectable predictive accuracy, their integration with a physician's assessment promises even better patient outcomes.

KEYWORDS: data mining; diabetes; SVM; detection; prediction

Abbreviations: ANN = Artificial Neural Network; AUC = Area Under Curve; CDC = Centers for Disease Control; CPCSSN = Canadian Primary Care Sentinel Surveillance Network; DT = Decision Tree; FN = False Negative; FP = False Positive; kNN = k-Nearest Neighbour; LDA = Linear Discriminant Analysis; LR = Logistic Regression; ML = Machine Learning; MLP = Multi-Layer Perceptron; NB = Naive Bayes; PIDD = Pima Indians Diabetes Dataset; RF = Random Forest; ROC = Receiver Operating Characteristic; SVM = Support Vector Machine; TN = True Negative; TP = True Positive; UKPDS = UK Prospective Diabetes Study

Disclosure statement: No potential conflict of interest was reported by the author(s).
Authors' contributions: All authors contributed evenly to the whole work. All authors read and approved the final manuscript.
Availability of data and materials: The data used in this paper is cited throughout the paper.
Ethical approval: This article does not contain any studies with human participants performed by any of the authors.
Funding: No source of funding for this work.

Notes on contributors:
Mohammad Ehsan Farnoodian received a B.S. degree in biomedical engineering (bioelectric) from Tehran Medical Sciences, Islamic Azad University, Tehran, Iran, and earned his M.S. degree in biomedical engineering (bioelectric) from the Science and Research Branch, Islamic Azad University, Tehran, Iran, in 2023. He is passionately dedicated to the examination and interpretation of biomedical data, particularly in the context of disease prediction and detection. His academic pursuits involve an in-depth exploration of biomedical data analysis, with a particular focus on data-driven approaches to disease prediction and identification.
Mohammad Karimi Moridani received a B.S. degree in electrical engineering (electronics) in 2006, and M.S. and Ph.D. degrees in biomedical engineering (bioelectric) in 2008 and 2015, respectively. He is currently an assistant professor in the Department of Biomedical Engineering, Tehran Medical Sciences, Islamic Azad University, Tehran, Iran. His research focuses on biomedical signal and image processing, nonlinear time-series analysis, and cognitive science, with applications ranging from ECG, HRV, and EEG signal processing for disease detection and prediction to seizure prediction, pattern recognition, image processing for face and beauty recognition, watermarking, and more. He is passionate about making meaningful contributions to the scientific community and applying data-driven approaches to key challenges in healthcare and related fields.
Hanieh Mokhber received a B.S. degree in biomedical engineering (bioelectric) from Tehran Medical Sciences, Islamic Azad University. Her academic endeavours involve a meticulous exploration of biomedical data analysis, with particular emphasis on data-driven approaches to predicting and identifying various diseases.
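The evaluation protocol described above — comparing SVM, RF, MLP, and kNN by accuracy, sensitivity, and specificity averaged over repeated random splits — can be sketched as below. The synthetic dataset merely mimics the shape of the study's data (520 patients, 17 features); the models' hyperparameters and the reduced iteration count are assumptions for the sketch, not the authors' configuration.

```python
# Sketch: compare SVM, RF, MLP and kNN with accuracy, sensitivity and
# specificity averaged over repeated random splits (synthetic stand-in
# for the 520-patient, 17-feature dataset).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=520, n_features=17, n_informative=8, random_state=1)
models = {
    "SVM": SVC(),
    "RF": RandomForestClassifier(n_estimators=100, random_state=1),
    "MLP": MLPClassifier(max_iter=1000, random_state=1),
    "kNN": KNeighborsClassifier(),
}

results = {}
for name, model in models.items():
    accs, sens, spec = [], [], []
    for it in range(10):  # the paper uses 50 iterations; 10 keeps the sketch fast
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=it)
        tn, fp, fn, tp = confusion_matrix(y_te, model.fit(X_tr, y_tr).predict(X_te)).ravel()
        accs.append((tp + tn) / (tp + tn + fp + fn))
        sens.append(tp / (tp + fn))   # sensitivity = TP / (TP + FN)
        spec.append(tn / (tn + fp))   # specificity = TN / (TN + FP)
    results[name] = (np.mean(accs), np.mean(sens), np.mean(spec))
    print(f"{name}: acc={results[name][0]:.3f} sens={results[name][1]:.3f} spec={results[name][2]:.3f}")
```

Reporting mean±std over many random splits, as the study does, guards against a single lucky train/test partition inflating the scores.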
{"title":"Detection and prediction of diabetes using effective biomarkers","authors":"Mohammad Ehsan Farnoodian, Mohammad Karimi Moridani, Hanieh Mokhber","doi":"10.1080/21681163.2023.2264937","DOIUrl":"https://doi.org/10.1080/21681163.2023.2264937","url":null,"abstract":"ABSTRACTDiabetes is a prevalent and costly condition, with early diagnosis pivotal in mitigating ‎its progression and complications. The diagnostic process often contends with data ‎ambiguity and decision uncertainty, adding complexity to achieving definitive ‎outcomes. This study addresses the diabetes diagnostic challenge through data mining ‎and machine learning techniques. It involves training various machine learning ‎algorithms and conducting statistical analysis on a dataset comprising 520 patients, ‎encompassing both normal and diabetic cases, to discern influential features.‎ Incorporating 17 features as classifier inputs, this research evaluates the diagnostic ‎performance using four reputable techniques: support vector machine (SVM), random ‎forest (RF), multi-layer perceptron (MLP), and k-nearest neighbor (kNN). The outcomes ‎underscore the SVM model's superior performance, boasting accuracy, specificity, and ‎sensitivity values of 98.78±1.96%, 99.28±1.63%, and 97.32±2.45%, ‎respectively, across 50 iterations. The findings establish SVM as the preferred method ‎for diabetes diagnosis.‎ This study highlights the efficacy of data mining and machine learning models in ‎diabetes diagnosis. 
While these methods exhibit respectable predictive accuracy, their ‎integration with a physician's assessment promises even better patient outcomes.‎KEYWORDS: Data miningdiabetesSVMdetectionprediction Abbreviations ANN=Artificial Neural NetworkAUC=Area under CurveCDC=Centers for Disease ControlCPCSSN=Canadian Primary Care Sentinel Surveillance NetworkDT=Decision TreeFN=False NegativeFP=False PositivekNN=k Nearest NeighborLDA=Linear Discrimination AnalysisLR=Logistic RegressionML=Machine LearningMLP=Multi-Layer PerceptronNB=Naive BayesianPIDD=Pima Indians Diabetes DatasetRF=Random ForestROC=Receiver Operating CharacteristicSVM=Support Vector MachineTN=True NegativeTP=True PositiveUKPDS=UK Prospective Diabetes StudyDisclosure statementNo potential conflict of interest was reported by the author(s)Authors’ contributionsAll authors evenly contributed to the whole work. All authors read and approved the final manuscript.Availability of data and materialsThe data used in this paper is cited throughout the paper.Ethical approvalThis article does not contain any studies with human participants performed by any of the authors.Additional informationFundingNo source of funding for this work.Notes on contributorsMohammad Ehsan FarnoodianMohammad Ehsan Farnoodian received a B.S. degree in biomedical engineering-‎‎bioelectric from Tehran Medical Science, Islamic Azad University, Tehran, Iran, ‎and earned his M.S. degree in biomedical engineering-bioelectric from Science and ‎Research branch, Islamic Azad University, Tehran, Iran, in 2023. He is passionately ‎dedicated to the examination and interpretation of biomedical data, particularly in ‎the context of disease prediction and detection. 
His academic pursuits involve in-‎depth exploration of biomedical data analysi","PeriodicalId":51800,"journal":{"name":"Computer Methods in Biomechanics and Biomedical Engineering-Imaging and Visualization","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135482832","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
An artificial intelligence approach for segmenting and classifying brain lesions caused by stroke
Q4 ENGINEERING, BIOMEDICAL Pub Date: 2023-10-04 DOI: 10.1080/21681163.2023.2264410
Roberto Mena, Enrique Pelaez, Francis Loayza, Alex Macas, Heydy Franco-Maldonado
ABSTRACT: Brain injuries caused by strokes are one of the leading causes of disability worldwide. Current procedures require a specialised physician to analyse MRI images before diagnosing and deciding on the specific treatment. However, the procedure can be costly and time-consuming. Artificial intelligence techniques are becoming a game-changer for analysing MRI images. This work proposes an end-to-end approach in three stages: pre-processing techniques for normalising the images to the standard MNI space, together with inhomogeneity and bias corrections; lesion segmentation using a CNN trained for cerebrovascular accidents, and feature extraction; and classification to determine the vascular territory within which the lesion occurred. A CLCI-Net was used for stroke segmentation. Four Deep Learning (DL) and four Shallow Machine Learning (ML) network architectures were evaluated for the strokes' territory localisation. All model architectures were designed, analysed, and compared based on their performance scores, reaching an accuracy of 84% with the DL models and 95% with the Shallow ML models. The proposed methodology may be helpful for rapid and accurate stroke assessment in acute treatment to minimise patient complications.

KEYWORDS: artificial intelligence; lesion segmentation; MRI preprocessing; stroke assessment

Acknowledgement: We would like to thank Carlos Jimenez, Alisson Constantine and Edwin Valarezo for their helpful contribution in perfecting the text and debugging the scripts.

Disclosure statement: All authors have seen and agreed with the content of the manuscript; there is no financial interest to report or conflicts of interest to declare, nor are there funding sources involved. We certify that the submission is original work and is not under review at any other publication.

Notes on contributors:
Roberto Alejandro Mena is a graduate student in Computer Science Engineering from Escuela Superior Politécnica del Litoral – ESPOL University. Throughout his career, he has played a leading role as a data analyst in various research projects, mainly centred on system development for magnetic resonance imaging (MRI) processing and visualisation.
Dr. Enrique Peláez earned his Ph.D. in Computer Engineering from the University of South Carolina, USA, in 1994. Currently, he is a Professor at ESPOL University, where he leads the AI research in Computational Intelligence. Over recent years, Dr. Peláez has been engaged in applied research on Parkinson's disease, leveraging machine and deep learning techniques. His academic contributions have been showcased in leading publications and forums, with papers presented at several conferences and symposia. Dr. Peláez's work has been published in journals including IEEE venues and Nature Communications. His research topics encompass EEG signal classification, deep learning for medical imaging, and behavioural signal processing using AI.
Dr. Francis Loayza is a full professor in the Faculty of Mechanical Engineering (FIMCP) at ESPOL University. He received his Ph.D. in Neuroscience from the University of Navarra, Spain, in 2010. Dr. Loayza has deep expertise in image data analysis, using statistical methods such as fMRI and voxel-based morphometry. In addition, his application of machine and deep learning methods has contributed to the growing knowledge of neurodegenerative diseases.
Alex Macas Alcocer is a graduate student in Computer Science Engineering at Escuela Superior Politécnica del Litoral – ESPOL University. He has worked as a data scientist, analysing magnetic resonance images with artificial intelligence techniques, as well as in web development.
Dr. Heydy Franco-Maldonado is a distinguished specialist in imaging, trained at the University of Cuenca. She specialised in magnetic resonance at the National Autonomous University of Mexico (UNAM) and obtained a diploma in breast pathology imaging from the University of Barcelona. She currently works as a radiologist at Luis Vernaza Hospital in Guayaquil and at SOLCA, Guayaquil, Ecuador. Beyond her clinical role, Dr. Franco-Maldonado is an active member of the AI research group at ESPOL and, together with Luis Vernaza Hospital, coordinates the graduate programme in imaging at Espíritu Santo University. She is also a recognised speaker for Bayer.
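The pipeline above has two quantitative steps that can be sketched briefly: scoring a lesion segmentation mask with the Dice coefficient, and assigning a lesion to a vascular territory with a shallow ML classifier. CLCI-Net and the real MRI features are not reproduced here; the masks, the three territory centres, and the centroid/area features are synthetic assumptions.

```python
# Illustrative versions of the two evaluation steps: Dice overlap for a
# lesion mask, and a shallow classifier mapping lesion position/size to
# one of three hypothetical vascular territories.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def dice(pred, truth):
    """Dice coefficient between two binary masks."""
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum())

truth = np.zeros((64, 64), bool); truth[20:40, 20:40] = True
pred = np.zeros((64, 64), bool);  pred[22:42, 22:42] = True
d = dice(pred, truth)
print(f"Dice: {d:.3f}")

# Territory classification: lesion centroid (row, col) plus area as features.
rng = np.random.default_rng(0)
centres = {0: (16, 16), 1: (16, 48), 2: (48, 32)}  # 3 hypothetical territories
labels = rng.integers(0, 3, 300)
X = np.array([list(rng.normal(centres[t], 4.0)) + [rng.uniform(50, 500)]
              for t in labels])
clf = RandomForestClassifier(random_state=0).fit(X[:200], labels[:200])
acc = (clf.predict(X[200:]) == labels[200:]).mean()
print(f"territory accuracy: {acc:.3f}")
```

The area feature is deliberately uninformative here: tree-based shallow models simply ignore it, which is one reason they work well on small, hand-crafted feature sets like the one the study extracts.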
{"title":"An artificial intelligence approach for segmenting and classifying brain lesions caused by stroke","authors":"Roberto Mena, Enrique Pelaez, Francis Loayza, Alex Macas, Heydy Franco-Maldonado","doi":"10.1080/21681163.2023.2264410","DOIUrl":"https://doi.org/10.1080/21681163.2023.2264410","url":null,"abstract":"ABSTRACTBrain injuries caused by strokes are one of the leading causes of disability worldwide. Current procedures require a specialised physician to analyse MRI images before diagnosing and deciding on the specific treatment. However, the procedure can be costly and time-consuming. Artificial intelligence techniques are becoming a game-changer for analysing MRI images. This work proposes an end-to-end approach in three stages: Pre-processing techniques for normalising the images to the standard MNI space, as well as inhomogeneities and bias corrections; lesion segmentation using a CNN network, trained for cerebrovascular accidents and feature extraction; and, classification for determining the vascular territory within which the lesion occurred. A CLCI-Net was used for stroke segmentation. Four Deep Learning (DL) and four Shallow Machine Learning (ML) network architectures were evaluated to assess the strokes’ territory localisation. All models’ architectures were designed, analysed, and compared based on their performance scores, reaching an accuracy of 84% with the DL models and 95% with the Shallow ML models. 
The proposed methodology may be helpful for rapid and accurate stroke assessment for an acute treatment to minimise patient complications.KEYWORDS: Artificial intelligencelesion segmentationMRI preprocessingstroke assessment AcknowledgementWe would like to thank Carlos Jimenez, Alisson Constantine and Edwin Valarezo for their helpful contribution in perfecting the text and debugging the scripts.Disclosure statementAll authors have seen and agreed with the content of the manuscript; there is no financial interest to report, or declare any conflicts of interest, neither there are funding sources involved. We certify that the submission is original work and is not under review at any other publication.Additional informationNotes on contributorsRoberto MenaRoberto Alejandro Mena is a graduate student in Computer Science Engineering from Escuela Superior Politécnica del Litoral – ESPOL University. Throughout his career, he has played a leading role as a data analyst in various research projects, mainly centered on system development for magnetic resonance imaging (MRI) processing and visualization.Enrique PelaezDr. Enrique Peláez earned his Ph.D. in Computer Engineering from the University of South Carolina, USA, in 1994. Currently, he is a Professor at ESPOL University where he leads the AI research in Computational Intelligence. Over recent years, Dr. Pelaez has been engaged in applied research on Parkinson's Disease, leveraging machine and deep learning techniques. His academic contributions showcased in leading publications and forums, with papers presented in several conferences and symposia. Dr. Pelaez's work has been published in journals, including the IEEE and Nature Communications. His research topics encompass EEG signal classification, deep learning for medical imaging, and behavioral signal processing using AI.Francis LoayzaDr. 
F","PeriodicalId":51800,"journal":{"name":"Computer Methods in Biomechanics and Biomedical Engineering-Imaging and Visualization","volume":"232 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135592077","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
An automated system to distinguish between Corona and Viral Pneumonia chest diseases based on image processing techniques
Q4 ENGINEERING, BIOMEDICAL Pub Date: 2023-09-30 DOI: 10.1080/21681163.2023.2261575
Amani Al-Ghraibah, Muneera Altayeb, Feras A. Alnaimat
ABSTRACT: Recently, huge concerns have been raised about diagnosing chest diseases, especially after the COVID-19 pandemic. Regular diagnostic processes sometimes fail to distinguish between Corona and Viral Pneumonia through Polymerase Chain Reaction (PCR) tests, which are time-consuming and require convoluted manual procedures. Artificial Intelligence (AI) techniques have achieved high performance in aiding medical diagnostic processes. The innovation of this work lies in using a new diagnostic technique to distinguish between COVID-19 and Viral Pneumonia using advanced AI technologies. This is done by extracting novel features from chest X-ray images based on wavelet analysis, Scale-Invariant Feature Transform (SIFT), and Mel-Frequency Cepstral Coefficients (MFCC). Support vector machines (SVM) and artificial neural networks (ANN) were utilised to build classification algorithms using 1200 chest X-ray images for each case. Using wavelet features, the SVM and ANN models reached 97% accuracy, and with SIFT features they were closer to 99%. The proposed models were very effective at identifying COVID-19 and Viral Pneumonia, so physicians can determine the best treatment course for patients with the support of this high accuracy. Moreover, this model can be used in hospitals and emergency rooms when a massive number of patients are waiting, as it is faster and more accurate than the regular diagnostic processes, with each step taking a few seconds on average to complete.

KEYWORDS: chest X-ray images; feature extraction; SVM; image classification

Disclosure statement: No potential conflict of interest was reported by the author(s).
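The wavelet-feature branch of the pipeline above can be sketched as follows: a one-level 2-D Haar transform yields four sub-bands whose mean energies feed an SVM. The synthetic smooth vs. textured patches stand in for the two X-ray classes; the paper's actual wavelet family, SIFT and MFCC steps, and training data are not assumed here.

```python
# Illustrative wavelet-energy features + SVM classifier on synthetic
# smooth vs. textured 32x32 patches (stand-ins for the two image classes).
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def haar2d(img):
    """One-level 2-D Haar transform: returns (LL, LH, HL, HH) sub-bands."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0  # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0  # row differences
    return ((a[:, 0::2] + a[:, 1::2]) / 2.0, (a[:, 0::2] - a[:, 1::2]) / 2.0,
            (d[:, 0::2] + d[:, 1::2]) / 2.0, (d[:, 0::2] - d[:, 1::2]) / 2.0)

def wavelet_energy(img):
    """Mean energy of each sub-band: a compact 4-value texture descriptor."""
    return [np.mean(band ** 2) for band in haar2d(img)]

rng = np.random.default_rng(0)
X, y = [], []
for _ in range(100):
    X.append(wavelet_energy(rng.normal(0.5, 0.02, (32, 32))))  # class 0: smooth
    X.append(wavelet_energy(rng.normal(0.5, 0.25, (32, 32))))  # class 1: textured
    y += [0, 1]
X, y = np.array(X), np.array(y)

clf = make_pipeline(StandardScaler(), SVC()).fit(X[:150], y[:150])
acc = (clf.predict(X[150:]) == y[150:]).mean()
print(f"wavelet+SVM accuracy: {acc:.3f}")
```

The detail sub-bands (LH, HL, HH) capture texture energy, which is what separates the two synthetic classes; standardising the features before the SVM keeps the low-magnitude detail energies from being swamped by the LL band.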
{"title":"An automated system to distinguish between Corona and Viral Pneumonia chest diseases based on image processing techniques","authors":"Amani Al-Ghraibah, Muneera Altayeb, Feras A. Alnaimat","doi":"10.1080/21681163.2023.2261575","DOIUrl":"https://doi.org/10.1080/21681163.2023.2261575","url":null,"abstract":"ABSTRACTRecently, huge concerns have been raised in diagnosing chest diseases, especially after the COVID-19 pandemic. Regular diagnosis processes of chest diseases sometimes fail to distinguish between Corona and Viral Pneumonia diseases through Polymerase Chain Reaction (PCR) tests which are a time-engrossing process that needs convoluted manual procedures. Artificial Intelligence (AI) techniques have achieved high performance in aiding medical diagnostic processes. The innovation of this work lies in using a new diagnostic technique to distinguish between COVID-19 and Viral Pneumonia diseases using advanced AI technologies. This is done by extracting novel features from chest X-ray images based on Wavelet analysis, Scale Invariant Feature Transformation (SIFT), and the Mel Frequency Cepstral Coefficient (MFCC). Support vector machines (SVM) and artificial neural networks (ANN) were utilized to build classification algorithms using 1200 chest X-ray mages for each case. Using Wavelet features, the results of evaluating the SVM and ANN models were 97% accurate, and with SIFT features, they were closer to 99%. The proposed models were very effective at identifying COVID-19 and Viral Pneumonitis, so physicians can determine the best treatment course for patients with the support of this high accuracy. 
Moreover, this model can be used in hospitals and emergency rooms when a massive number of patients are waiting, as it is faster and more accurate than the regular diagnosis processes as each step takes few seconds on average to complete.KEYWORDS: Chest X-ray imagesfeature extractionand SVMimage classifications Disclosure statementNo potential conflict of interest was reported by the author(s).","PeriodicalId":51800,"journal":{"name":"Computer Methods in Biomechanics and Biomedical Engineering-Imaging and Visualization","volume":"68 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136279931","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Gene expression extraction in cervical cancer by segmentation of microarray images using a novel fuzzy method
Q4 ENGINEERING, BIOMEDICAL Pub Date: 2023-09-30 DOI: 10.1080/21681163.2023.2261555
Nayyer Mostaghim Bakhshayesh, Mousa Shamsi, Faegheh Golabi
It is necessary to obtain gene expression values to identify gene biomarkers involved in all types of cancers, and microarray data are among the best data for this purpose. Extracting gene expression values from microarray images, however, poses several challenges. This article presents a completely automatic and comprehensive method that handles these challenges and obtains gene expression values with high accuracy. A pre-processing approach is proposed for contrast enhancement using a genetic algorithm, and for removing noise and artefacts in microarray cells using a wavelet transform based on a complex Gaussian scaling model. For each spot, the coordinate centre is determined using Self-Organising Maps. Then, using a new hybrid model based on the Fuzzy Local Information Gaussian Mixture Model (FLIGMM), the position of each spot is accurately determined. In this model, various features are obtained using local information about pixels, taking the pixel neighbourhood correlation coefficient into account. Finally, the gene expression values are obtained. The performance of the proposed algorithm was evaluated using real microarray images of cervical cancer from the GMRCL microarray dataset, as well as simulated images. The results show that the proposed algorithm achieves 90.91% and 98% accuracy in segmenting microarray spots for noiseless and noisy spots, respectively.
Citations: 0
RePoint-Net detection and 3DSqU² Net segmentation for automatic identification of pulmonary nodules in computed tomography images
Q4 ENGINEERING, BIOMEDICAL Pub Date : 2023-09-30 DOI: 10.1080/21681163.2023.2258998
Shabnam Ghasemi, Shahin Akbarpour, Ali Farzan, Mohammad Ali Jamali
Lung cancer is a leading cause of cancer-related deaths. Computer-aided detection (CAD) has emerged as a valuable tool to assist radiologists in the automated detection and segmentation of pulmonary nodules using Computed Tomography (CT) scans, indicating early stages of lung cancer. However, detecting small nodules remains challenging. This paper proposes novel techniques to address this challenge, achieving high sensitivity and low false-positive nodule identification using the RePoint-Net detection network. Additionally, the 3DSqU² Net, a novel nodule segmentation approach incorporating full-scale skip connections and deep supervision, is introduced. A 3DCNN model is employed for nodule candidate classification, generating final classification results by combining previous step outputs. Extensive training and testing on the LIDC/IDRI public lung CT database validate the proposed model, demonstrating its superiority over human specialists with a remarkable 97.4% sensitivity in identifying nodule candidates. Moreover, CT texture analysis accurately differentiates between malignant and benign pulmonary nodules due to its ability to capture subtle tissue characteristic differences. This approach achieves a 95.8% sensitivity in nodule classification, promising non-invasive support for clinical decision-making in managing pulmonary nodules and improving patient outcomes.
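Detection sensitivity and false positives of the kind reported above are typically scored by matching predicted nodule locations to ground-truth annotations. The sketch below shows one simple greedy matching scheme with a hypothetical distance threshold; the paper's own evaluation protocol on LIDC/IDRI may differ.

```python
import numpy as np

def match_detections(gt, pred, max_dist=5.0):
    """Greedy matching of predicted nodule centroids to ground truth.
    A prediction counts as a true positive if it lies within `max_dist`
    of a not-yet-matched ground-truth nodule; otherwise it is a false
    positive. The threshold is an illustrative assumption, not the
    paper's criterion. Returns (sensitivity, false_positive_count)."""
    gt = np.asarray(gt, dtype=float)
    pred = np.asarray(pred, dtype=float)
    matched = np.zeros(len(gt), dtype=bool)
    fp = 0
    for p in pred:
        d = np.linalg.norm(gt - p, axis=1) if len(gt) else np.array([])
        d[matched] = np.inf  # each ground-truth nodule matches at most once
        if len(d) and d.min() <= max_dist:
            matched[d.argmin()] = True
        else:
            fp += 1
    sens = matched.mean() if len(gt) else 1.0
    return sens, fp

# Toy scan: three annotated nodules, three detections (one spurious).
gt = [[10, 10, 10], [50, 50, 50], [90, 20, 30]]
pred = [[11, 10, 9], [49, 51, 50], [70, 70, 70]]
sens, fp = match_detections(gt, pred)
print(sens, fp)  # 2 of 3 nodules detected, 1 false positive
```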
Citations: 0
The application of deep learning methods in knee joint sports injury diseases
Q4 ENGINEERING, BIOMEDICAL Pub Date : 2023-09-23 DOI: 10.1080/21681163.2023.2261554
Yeqiang Luo, Jing Liang, Shanghui Lin, Tianmo Bai, Lingchuang Kong, Yan Jin, Xin Zhang, Baofeng Li, Bei Chen
ABSTRACT Deep learning is a powerful branch of machine learning and presents a promising new approach for diagnosing diseases. However, deep learning for detecting anterior cruciate ligament (ACL) injuries is still limited to evaluating whether an injury is present. The accuracy of existing deep learning models is not high, and their parameters are complex. In this study, we developed a deep learning model based on ResNet-18 to detect ACL conditions. The results suggest that there is no significant difference between our proposed model and two orthopaedic surgeons and radiologists in diagnosing ACL conditions. KEYWORDS: Deep learning, machine learning, automated model, anterior cruciate ligament. Disclosure statement: The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. Data availability statement: This study used the MRNet dataset gathered from Stanford University Medical Center. The dataset is available online and can be used by anyone.
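ResNet-18 is built from "basic blocks": two 3x3 convolutions wrapped by an identity shortcut. The toy forward pass below (plain NumPy, with batch normalisation and striding omitted for brevity) illustrates only the residual structure such a model relies on, not the authors' implementation.

```python
import numpy as np

def conv3x3(x, w):
    """Naive 3x3 'same' convolution (deep-learning convention) over a
    (C_in, H, W) tensor with weights (C_out, C_in, 3, 3)."""
    c_out, c_in, _, _ = w.shape
    _, h, wd = x.shape
    xp = np.pad(x, ((0, 0), (1, 1), (1, 1)))  # zero-pad spatial dims
    out = np.zeros((c_out, h, wd))
    for o in range(c_out):
        for i in range(c_in):
            for dy in range(3):
                for dx in range(3):
                    out[o] += w[o, i, dy, dx] * xp[i, dy:dy + h, dx:dx + wd]
    return out

def basic_block(x, w1, w2):
    """ResNet 'basic block' as used in ResNet-18: two 3x3 convs with a
    ReLU in between, plus an identity shortcut (batch norm omitted)."""
    y = np.maximum(conv3x3(x, w1), 0.0)   # first conv -> ReLU
    y = conv3x3(y, w2)                    # second conv
    return np.maximum(y + x, 0.0)         # add shortcut, final ReLU

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8, 8))
w1 = rng.normal(scale=0.1, size=(4, 4, 3, 3))
w2 = rng.normal(scale=0.1, size=(4, 4, 3, 3))
out = basic_block(x, w1, w2)
print(out.shape)  # (4, 8, 8): spatial size and channels preserved
```

The identity shortcut is what lets gradients flow through the 18-layer stack; with zero weights the block reduces to a plain ReLU of its input.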
Citations: 0
A web-based human liver atlas
Q4 ENGINEERING, BIOMEDICAL Pub Date : 2023-09-21 DOI: 10.1080/21681163.2023.2261557
Haobo Yu, Adam Bartlett, Harvey Ho
The liver is the largest solid organ in the body that can be anatomically divided into segments. We present in this work a web-based subject-specific human liver atlas based on the Couinaud segments simulated from portal venous (PV) perfusion zones, hepatic arterial (HA) and hepatic venous (HV) trees, as well as biliary drainage. The purpose of the atlas is to provide the modelling community with freely accessible 3D hepatic structures for in silico simulations, which are of tremendous value in yielding novel insights in hepatic circulation, drug transport and clearance.
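Portal perfusion zones of the kind used to derive Couinaud segments are often approximated by assigning each parenchyma voxel to its nearest portal-vein branch, a Voronoi-style labelling. The sketch below illustrates that general idea with toy coordinates and assumed segment labels; it is not the atlas's exact pipeline.

```python
import numpy as np

def perfusion_zones(voxels, branch_points, branch_labels):
    """Assign each parenchyma voxel to the Couinaud segment of its
    nearest portal-vein branch point (Voronoi-style labelling).
    Illustrative sketch of perfusion-zone simulation, not the
    authors' exact method."""
    voxels = np.asarray(voxels, dtype=float)
    branch_points = np.asarray(branch_points, dtype=float)
    # Pairwise distances, shape (n_voxels, n_branch_points).
    d = np.linalg.norm(voxels[:, None, :] - branch_points[None, :, :], axis=2)
    return np.asarray(branch_labels)[d.argmin(axis=1)]

# Two toy portal branches feeding segments "II" and "VI" (assumed labels).
branches = [[0.0, 0.0, 0.0], [10.0, 0.0, 0.0]]
seg_labels = ["II", "VI"]
vox = [[1, 0, 0], [9, 1, 0], [4, 0, 0]]
zones = perfusion_zones(vox, branches, seg_labels)
print(zones)  # ['II' 'VI' 'II']
```

Applied over a full segmented liver mask with the real portal tree, the same rule partitions the parenchyma into contiguous perfusion territories.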
Citations: 0
Multimodality medical image fusion analysis with multi-plane features of PET and MRI images using ONSCT
Q4 ENGINEERING, BIOMEDICAL Pub Date : 2023-09-19 DOI: 10.1080/21681163.2023.2255684
Jampani Ravi, R. Narmadha
ABSTRACT The Multimodal Medical Image Fusion (MMIF) task is affected by poor image quality, which leads to the extraction of inefficient features. The main intent of this work is to fuse the various planes of PET and MRI medical images efficiently using the MMIF approach. Initially, sample images containing the axial plane of PET and MRI scans are aggregated from standard datasets. The collected images then undergo decomposition, which is accomplished via the Optimal Non-Subsampled Contourlet Transform (ONSCT). The parameters of the NSCT are optimised using the Modified Water Strider Algorithm (MWSA). Once the images are decomposed, each is split into two sub-bands: a high-frequency and a low-frequency sub-band. The high-frequency sub-bands of the PET and MRI images are fused using optimal weighted average fusion, in which the weight factor is obtained optimally by the MWSA. Similarly, the low-frequency sub-bands of both medical images are combined by a sparse fusion technique. Finally, both resultant fused images are subjected to the Inverse Non-Subsampled Contourlet Transform (INSCT) to obtain the desired fused images. The experimental findings suggest that the proposed model effectively fuses the images and also enhances the similarity score with axial planes. KEYWORDS: Medical image fusion, modified water strider algorithm, magnetic resonance imaging, optimal non-subsampled contourlet transform, optimal weighted average fusion. Disclosure statement: No potential conflict of interest was reported by the author(s).
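The two fusion rules can be sketched in a few lines: a weighted average for the high-frequency sub-bands (the weight, optimised by MWSA in the paper, is fixed here as an assumption) and an element-wise max-magnitude selection standing in for the paper's sparse fusion of the low-frequency sub-bands.

```python
import numpy as np

def fuse_subbands(hf_a, hf_b, lf_a, lf_b, w=0.5):
    """Fuse decomposed sub-bands of two co-registered images.
    High-frequency bands: weighted average with weight `w` (fixed here;
    the paper obtains it via MWSA optimisation).
    Low-frequency bands: keep the coefficient of larger magnitude, a
    simple stand-in for the paper's sparse-fusion rule."""
    hf = w * hf_a + (1.0 - w) * hf_b
    lf = np.where(np.abs(lf_a) >= np.abs(lf_b), lf_a, lf_b)
    return hf, lf

# Toy 2x2 coefficient blocks from the two modalities.
hf_a = np.array([[2.0, -1.0], [0.0, 4.0]])
hf_b = np.array([[0.0, 3.0], [2.0, 0.0]])
lf_a = np.array([[5.0, 1.0], [-3.0, 2.0]])
lf_b = np.array([[4.0, -2.0], [1.0, 2.5]])
hf, lf = fuse_subbands(hf_a, hf_b, lf_a, lf_b, w=0.6)
print(hf)  # 0.6 * A + 0.4 * B, element-wise
print(lf)  # element-wise max-magnitude selection
```

In the full pipeline these fused coefficient blocks would be passed through the inverse transform (INSCT) to reconstruct the fused image.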
Citations: 0
Presence of hypertension might pose a potential pitfall in detection of diabetes mellitus non-invasively using the second derivative of photoplethysmography
Q4 ENGINEERING, BIOMEDICAL Pub Date : 2023-09-14 DOI: 10.1080/21681163.2023.2256896
Ahmet Taş, Yaren Alan, Ilke Kara, Abdullah Savas, Muhammed Ikbal Bayhan, Diren Ekici, Zeynep Atay, Fatih Sezer, Cagla Kitapli, Sabahattin Umman, Murat Sezer
ABSTRACT Indices derived from photoplethysmography (PPG) have shown promising results as non-invasive digital biomarkers for the detection of diabetes mellitus (DM). Considering the mutual endothelial insult leading to similar undesirable peripheral hemodynamic perturbations, hypertension (HT) may blunt this classification performance. Second-derivative PPG (SD-PPG) indices were derived from the second derivative of the PPG signal. The variables of interest were the previously described peaks of the initial positive (a), early negative (b), re-increasing (c), late re-decreasing (d), diastolic positive (e) and negative (f) waves, and the ratios between them. Patients were classified according to their type 2 DM and hypertension phenotypes. SD-PPG indices were compared between diseased subgroups and healthy controls, and dichotomous classification performance was also evaluated. Two SD-PPG indices, the b/a ratio and the vascular ageing index (VAI = (b-c-d-e)/a), responded to isolated type 2 DM (n = 29) amongst healthy subjects (n = 106) (area under the curve (AUC) = 0.629, p = 0.034 and 0.631, p = 0.031, respectively). However, the classification performance became insignificant with the inclusion of HT patients (n = 30) (p = 0.839 vs. p = 0.656). These results suggest that the coexistence of HT and DM may hinder the use of SD-PPG for non-invasive DM detection. KEYWORDS: Second-derivative photoplethysmography, diabetes, non-invasive cardiovascular screening, fingertip waveforms, hypertension. Abbreviations: Body Mass Index = BMI; Diabetes Mellitus = DM; Diabetes Mellitus Type 2 = DM2; Diastolic Blood Pressure = DBP; Hypertension = HT; Photoplethysmography = PPG; Second derivative of Photoplethysmography = SD-PPG; Vascular Ageing Index = VAI; Systolic Blood Pressure = SBP. Disclosure statement: No potential conflict of interest was reported by the authors. Author contributions: Study conception and design by Ahmet Tas.
All authors (Ahmet Tas, Yaren Alan, Ilke Kara, Abdullah Savas, Muhammed Ikbal Bayhan, Diren Ekici, Zeynep Atay, Fatih Sezer, Cagla Kitapli, Sabahattin Umman, Murat Sezer) contributed to material preparation, data collection and analysis (signal and/or statistical and/or intellectual) and interpretation of results. The first draft of the manuscript was written by Ahmet Tas, and all authors commented on previous versions of the manuscript. All authors read and approved the final manuscript. Ethics approval: The data were retrieved from an open dataset published in Nature Scientific Data (https://www.nature.com/articles/sdata201820#). The data collection had ethical approval as denoted in the data-descriptor article, and all participants gave written consent per the open-data descriptor article. All authors meet the ICMJE criteria for authorship. Additional information - Funding: The authors report there is no funding associated with the work featured in this article.
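The indices in this study are simple functions of the a-f wave amplitudes of the second-derivative waveform. The sketch below computes the b/a ratio and VAI from assumed peak amplitudes (illustrative values, not the study's data), alongside a numerical second derivative of a sampled PPG signal.

```python
import numpy as np

def sdppg_indices(a, b, c, d, e):
    """The two SD-PPG indices used in the study, computed from the
    amplitudes of the second-derivative waves:
    the b/a ratio and VAI = (b - c - d - e) / a."""
    return b / a, (b - c - d - e) / a

def second_derivative(ppg, fs):
    """Numerical second derivative of a sampled PPG signal, i.e. the
    SD-PPG waveform from which the a-f peaks would be located."""
    dt = 1.0 / fs
    return np.gradient(np.gradient(ppg, dt), dt)

# Illustrative peak amplitudes for one beat (assumed values,
# not taken from the paper's data).
ba, vai = sdppg_indices(a=1.0, b=-0.8, c=0.2, d=-0.3, e=0.15)
print(round(ba, 3), round(vai, 3))  # -0.8 -0.85
```

In practice the a-f peaks would first be located on the `second_derivative` output of each fingertip pulse before these ratios are computed per beat.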
Citations: 0