Pub Date: 2024-08-24 | eCollection Date: 2024-12-01 | DOI: 10.1007/s13755-024-00302-w
Fatma Özcan
Cardiovascular disease, which remains one of the main causes of death, can be addressed through early diagnosis based on heart sounds. Heart sounds may contain noisy signals known as murmurs, and on auscultation the grade of a murmur is closely related to the patient's clinical condition. Computer-aided decision-making systems can help doctors detect murmurs and reach decisions faster. In this work, Mel spectrograms were generated from raw phonocardiograms and presented to the OpenL3 network for transfer learning. The signals were thus classified to predict the presence or absence of murmurs and their severity, using both a pitch scale (healthy, low, medium, high) and the Levine scale (healthy, soft, loud). Strong results were obtained without prior segmentation. The trained model was then interpreted with an Explainable Artificial Intelligence (XAI) method, occlusion sensitivity. This shows that XAI methods are needed to reveal the features an artificial neural network uses internally and to explain the automatic decisions it takes. The averaged occlusion sensitivity map can provide either an overview or pixel-level detail of the features used. For rapid diagnostic and preventive purposes in healthcare, particularly cardiology, this work could shed more light on the important features of the phonocardiogram.
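A minimal sketch of the two computational steps described above, assuming a phonocardiogram stored as a WAV file and a generic murmur classifier: log-Mel spectrogram extraction with librosa and a simple occlusion-sensitivity map. The patch size and the stand-in `score_murmur` function are illustrative assumptions; the study itself feeds the spectrograms to a pre-trained OpenL3 network.

```python
# Sketch only: Mel-spectrogram extraction from a phonocardiogram and a basic
# occlusion-sensitivity map.  `score_murmur` is a placeholder for a trained
# classifier; the paper uses OpenL3 embeddings instead.
import numpy as np
import librosa

def pcg_to_mel(path, sr=4000, n_mels=128):
    """Load a raw phonocardiogram and convert it to a log-Mel spectrogram."""
    y, sr = librosa.load(path, sr=sr, mono=True)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    return librosa.power_to_db(mel, ref=np.max)

def score_murmur(spec):
    """Placeholder for the trained model's murmur probability."""
    return float(spec.mean())  # stand-in only

def occlusion_map(spec, patch=(16, 16)):
    """Slide a masking patch over the spectrogram and record the score drop."""
    fill = spec.min()
    base = score_murmur(spec)
    heat = np.zeros_like(spec)
    for i in range(0, spec.shape[0], patch[0]):
        for j in range(0, spec.shape[1], patch[1]):
            masked = spec.copy()
            masked[i:i + patch[0], j:j + patch[1]] = fill
            heat[i:i + patch[0], j:j + patch[1]] = base - score_murmur(masked)
    return heat  # large values mark regions the classifier relies on
```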
{"title":"Rapid detection and interpretation of heart murmurs using phonocardiograms, transfer learning and explainable artificial intelligence.","authors":"Fatma Özcan","doi":"10.1007/s13755-024-00302-w","DOIUrl":"10.1007/s13755-024-00302-w","url":null,"abstract":"<p><p>Cardiovascular disease, which remains one of the main causes of death, can be prevented by early diagnosis of heart sounds. Certain noisy signals, known as murmurs, may be present in heart sounds. On auscultation, the degree of murmur is closely related to the patient's clinical condition. Computer-aided decision-making systems can help doctors to detect murmurs and make faster decisions. The Mel spectrograms were generated from raw phonocardiograms and then presented to the OpenL3 network for transfer learning. In this way, the signals were classified to predict the presence or absence of murmurs and their level of severity. Pitch level (healthy, low, medium, high) and Levine scale (healthy, soft, loud) were used. The results obtained without prior segmentation are very impressive. The model used was then interpreted using an Explainable Artificial Intelligence (XAI) method, Occlusion Sensitivity. This approach shows that XAI methods are necessary to know the features used internally by the artificial neural network then to explain the automatic decision taken by the model. The averaged image of the occlusion sensitivity maps can give us either an overview or a precise detail per pixel of the features used. In the field of healthcare, particularly cardiology, for rapid diagnostic and preventive purposes, this work could provide more detail on the important features of the phonocardiogram.</p>","PeriodicalId":46312,"journal":{"name":"Health Information Science and Systems","volume":"12 1","pages":"43"},"PeriodicalIF":3.4,"publicationDate":"2024-08-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11344737/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142074193","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-08-12 | eCollection Date: 2024-12-01 | DOI: 10.1007/s13755-024-00301-x
Usharani Bhimavarapu
Diabetic retinopathy (DR), a complication of diabetes, damages the retina through prolonged high blood sugar levels, leading to vision impairment and blindness. Early detection through regular eye exams and proper diabetes management is crucial to preventing vision loss. DR is categorized into five severity classes, ranging from no retinopathy to proliferative diabetic retinopathy. This study proposes an automated detection method using fundus images. Image segmentation divides fundus images into homogeneous regions, facilitating feature extraction, while feature selection reduces computational cost and improves classification accuracy by retaining only relevant features. The proposed algorithm integrates an Improved Tunicate Swarm Algorithm (ITSA) with Renyi's entropy for enhanced adaptability in the initial and final stages, and an Improved Hybrid Butterfly Optimization (IHBO) algorithm is introduced for feature selection. The effectiveness of the proposed method is demonstrated on retinal fundus image datasets, achieving promising results in DR severity classification. On the IDRiD dataset, the proposed model achieves a segmentation Dice coefficient of 98.06% and a classification accuracy of 98.21%, while on the E-Optha dataset it attains a segmentation Dice coefficient of 97.95% and a classification accuracy of 99.96%. Experimental results indicate the algorithm's ability to accurately classify DR severity levels, highlighting its potential for early detection and prevention of diabetes-related blindness.
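As a rough illustration of the Renyi's-entropy thresholding underlying the segmentation stage, the sketch below exhaustively searches for a single threshold that maximizes the sum of the foreground and background Renyi entropies; in the paper this search is carried out by the Improved Tunicate Swarm Algorithm over multiple thresholds, and the entropy order alpha = 0.7 is an assumed value.

```python
# Sketch only: single-threshold Renyi-entropy segmentation of a grayscale
# fundus image; exhaustive search stands in for the swarm optimizer.
import numpy as np

def renyi_entropy(p, alpha=0.7):
    p = p[p > 0]
    return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

def renyi_threshold(image, alpha=0.7):
    hist, _ = np.histogram(image.ravel(), bins=256, range=(0, 256))
    prob = hist / hist.sum()
    best_t, best_val = 0, -np.inf
    for t in range(1, 255):
        a, b = prob[:t], prob[t:]
        if a.sum() == 0 or b.sum() == 0:
            continue
        val = renyi_entropy(a / a.sum(), alpha) + renyi_entropy(b / b.sum(), alpha)
        if val > best_val:
            best_t, best_val = t, val
    return best_t

# usage: binary_mask = image > renyi_threshold(image)
```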
{"title":"Optimized automated detection of diabetic retinopathy severity: integrating improved multithresholding tunicate swarm algorithm and improved hybrid butterfly optimization.","authors":"Usharani Bhimavarapu","doi":"10.1007/s13755-024-00301-x","DOIUrl":"10.1007/s13755-024-00301-x","url":null,"abstract":"<p><p>Diabetic retinopathy, a complication of diabetes, damages the retina due to prolonged high blood sugar levels, leading to vision impairment and blindness. Early detection through regular eye exams and proper diabetes management are crucial in preventing vision loss. DR is categorized into five classes based on severity, ranging from no retinopathy to proliferative diabetic retinopathy. This study proposes an automated detection method using fundus images. Image segmentation divides fundus images into homogeneous regions, facilitating feature extraction. Feature selection aims to reduce computational costs and improve classification accuracy by selecting relevant features. The proposed algorithm integrates an Improved Tunicate Swarm Algorithm (ITSA) with Renyi's entropy for enhanced adaptability in the initial and final stages. An Improved Hybrid Butterfly Optimization (IHBO) Algorithm is also introduced for feature selection. The effectiveness of the proposed method is demonstrated using retinal fundus image datasets, achieving promising results in DR severity classification. For the IDRiD dataset, the proposed model achieves a segmentation Dice coefficient of 98.06% and classification accuracy of 98.21%. In contrast, the E-Optha dataset attains a segmentation Dice coefficient of 97.95% and classification accuracy of 99.96%. Experimental results indicate the algorithm's ability to accurately classify DR severity levels, highlighting its potential for early detection and prevention of diabetes-related blindness.</p>","PeriodicalId":46312,"journal":{"name":"Health Information Science and Systems","volume":"12 1","pages":"42"},"PeriodicalIF":3.4,"publicationDate":"2024-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11319704/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141983523","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Purpose: The target-based strategy is a prevalent approach to drug research and development (R&D), since targets provide the effector molecules of drug action and form the foundation of pharmacological investigation. Recently, artificial intelligence (AI) technology has been utilized at various stages of drug R&D, where AI-assisted experimental methods show higher efficiency than purely experimental ones. A comprehensive review of AI applications in drug R&D is therefore a critical need for the biopharmaceutical field.
Methods: Relevant literature on AI-assisted drug R&D was collected from public databases (including Google Scholar, Web of Science, PubMed, IEEE Xplore Digital Library, Springer, and ScienceDirect) using a keyword search strategy with the following terms: [("Artificial Intelligence" OR "Knowledge Graph" OR "Machine Learning") AND ("Drug Target Identification" OR "New Drug Development")].
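For illustration, a hedged sketch of how that search string might be issued against one of the listed databases (PubMed) via Biopython's Entrez utilities; the retmax value and e-mail address are placeholder assumptions, and the other sources would need their own clients.

```python
# Sketch only: running the review's boolean query against PubMed.
from Bio import Entrez

Entrez.email = "you@example.org"  # required by NCBI; placeholder address

query = ('("Artificial Intelligence" OR "Knowledge Graph" OR "Machine Learning") '
         'AND ("Drug Target Identification" OR "New Drug Development")')

handle = Entrez.esearch(db="pubmed", term=query, retmax=100)
record = Entrez.read(handle)
handle.close()

print(record["Count"], "hits; first IDs:", record["IdList"][:5])
```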
Results: In this review, we first introduced common strategies and novel trends in drug R&D, followed by a description of the AI algorithms widely used in drug R&D. Subsequently, we depicted detailed applications of AI algorithms in target identification, lead compound identification and optimization, drug repurposing, and drug analytical platform construction. Finally, we discussed the challenges and prospects of AI-assisted methods for drug discovery.
Conclusion: Collectively, this review provides a comprehensive overview of AI applications in drug R&D and presents future perspectives for the biopharmaceutical field, which may promote the development of the drug industry.
{"title":"Comprehensive applications of the artificial intelligence technology in new drug research and development.","authors":"Hongyu Chen, Dong Lu, Ziyi Xiao, Shensuo Li, Wen Zhang, Xin Luan, Weidong Zhang, Guangyong Zheng","doi":"10.1007/s13755-024-00300-y","DOIUrl":"10.1007/s13755-024-00300-y","url":null,"abstract":"<p><strong>Purpose: </strong>Target-based strategy is a prevalent means of drug research and development (R&D), since targets provide effector molecules of drug action and offer the foundation of pharmacological investigation. Recently, the artificial intelligence (AI) technology has been utilized in various stages of drug R&D, where AI-assisted experimental methods show higher efficiency than sole experimental ones. It is a critical need to give a comprehensive review of AI applications in drug R &D for biopharmaceutical field.</p><p><strong>Methods: </strong>Relevant literatures about AI-assisted drug R&D were collected from the public databases (Including Google Scholar, Web of Science, PubMed, IEEE Xplore Digital Library, Springer, and ScienceDirect) through a keyword searching strategy with the following terms [(\"Artificial Intelligence\" OR \"Knowledge Graph\" OR \"Machine Learning\") AND (\"Drug Target Identification\" OR \"New Drug Development\")].</p><p><strong>Results: </strong>In this review, we first introduced common strategies and novel trends of drug R&D, followed by characteristic description of AI algorithms widely used in drug R&D. Subsequently, we depicted detailed applications of AI algorithms in target identification, lead compound identification and optimization, drug repurposing, and drug analytical platform construction. Finally, we discussed the challenges and prospects of AI-assisted methods for drug discovery.</p><p><strong>Conclusion: </strong>Collectively, this review provides comprehensive overview of AI applications in drug R&D and presents future perspectives for biopharmaceutical field, which may promote the development of drug industry.</p>","PeriodicalId":46312,"journal":{"name":"Health Information Science and Systems","volume":"12 1","pages":"41"},"PeriodicalIF":3.4,"publicationDate":"2024-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11310389/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141917743","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Background and objective: Timely and accurate detection of Autism Spectrum Disorder (ASD) is essential for early intervention and improved patient outcomes. This study aims to harness machine learning (ML) techniques to improve ASD detection by incorporating temporal eye-tracking data. We developed a novel ML model that leverages eye scan paths, sequences of eye-movement distances, and sequences of fixation durations, strengthening the temporal aspect of the analysis for more effective ASD identification.
Methods: We utilized a dataset of eye-tracking data, without augmentation, to train our ML model, which consists of a CNN-GRU-ANN architecture. The model was trained using gaze maps, the sequences of distances between eye fixations, and the durations of fixations and saccades. Additionally, we employed a validation dataset to assess the model's performance and compare it with other works.
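A minimal PyTorch sketch of a CNN-GRU-ANN fusion model of the kind described, assuming a single-channel gaze map and a two-feature temporal sequence (inter-fixation distance and fixation/saccade duration); all layer sizes and the binary output are illustrative assumptions, not the authors' exact architecture.

```python
# Sketch only: CNN branch for gaze maps, GRU branch for temporal sequences,
# and a small fully connected (ANN) fusion head.
import torch
import torch.nn as nn

class CnnGruAnn(nn.Module):
    def __init__(self, seq_features=2, hidden=64):
        super().__init__()
        self.cnn = nn.Sequential(                      # gaze-map branch
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.gru = nn.GRU(seq_features, hidden, batch_first=True)  # temporal branch
        self.head = nn.Sequential(                     # ANN fusion head
            nn.Linear(32 + hidden, 64), nn.ReLU(), nn.Linear(64, 1),
        )

    def forward(self, gaze_map, sequence):
        img_feat = self.cnn(gaze_map)                  # (B, 32)
        _, h_n = self.gru(sequence)                    # (1, B, hidden)
        fused = torch.cat([img_feat, h_n[-1]], dim=1)
        return torch.sigmoid(self.head(fused))         # probability of ASD

# usage: CnnGruAnn()(torch.rand(4, 1, 64, 64), torch.rand(4, 50, 2))
```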
Results: Our ML model demonstrated superior performance in ASD detection compared to the VGG-16 model. By incorporating temporal information from eye-tracking data, our model achieved higher accuracy, precision, and recall. The novel addition of sequence-based features allowed our model to effectively distinguish between ASD and typically developing individuals, achieving an impressive precision value of 93.10% on the validation dataset.
Conclusion: This study presents an ML-based approach to ASD detection that incorporates temporal eye-tracking data. Our findings highlight the potential of temporal analysis for improved ASD detection and provide a promising direction for further advances in eye-tracking-based diagnosis and intervention for neurodevelopmental disorders.
{"title":"A novel multi-modal model to assist the diagnosis of autism spectrum disorder using eye-tracking data.","authors":"Brahim Benabderrahmane, Mohamed Gharzouli, Amira Benlecheb","doi":"10.1007/s13755-024-00299-2","DOIUrl":"10.1007/s13755-024-00299-2","url":null,"abstract":"<p><strong>Background and objective: </strong>Timely and accurate detection of Autism Spectrum Disorder (ASD) is essential for early intervention and improved patient outcomes. This study aims to harness the power of machine learning (ML) techniques to improve ASD detection by incorporating temporal eye-tracking data. We developed a novel ML model to leverage eye scan paths, sequences of distances of eye movement, and a sequence of fixation durations, enhancing the temporal aspect of the analysis for more effective ASD identification.</p><p><strong>Methods: </strong>We utilized a dataset of eye-tracking data without augmentation to train our ML model, which consists of a CNN-GRU-ANN architecture. The model was trained using gaze maps, the sequences of distances between eye fixations, and durations of fixations and saccades. Additionally, we employed a validation dataset to assess the model's performance and compare it with other works.</p><p><strong>Results: </strong>Our ML model demonstrated superior performance in ASD detection compared to the VGG-16 model. By incorporating temporal information from eye-tracking data, our model achieved higher accuracy, precision, and recall. The novel addition of sequence-based features allowed our model to effectively distinguish between ASD and typically developing individuals, achieving an impressive precision value of 93.10% on the validation dataset.</p><p><strong>Conclusion: </strong>This study presents an ML-based approach to ASD detection by utilizing machine learning techniques and incorporating temporal eye-tracking data. Our findings highlight the potential of temporal analysis for improved ASD detection and provide a promising direction for further advancements in the field of eye-tracking-based diagnosis and intervention for neurodevelopmental disorders.</p>","PeriodicalId":46312,"journal":{"name":"Health Information Science and Systems","volume":"12 1","pages":"40"},"PeriodicalIF":3.4,"publicationDate":"2024-08-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11297859/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141894583","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-07-16 | eCollection Date: 2024-12-01 | DOI: 10.1007/s13755-024-00297-4
Demet Öztürk, Sena Aydoğan, İbrahim Kök, Işık Akın Bülbül, Selda Özdemir, Suat Özdemir, Diyar Akay
Diagnosing autism spectrum disorder (ASD) in children poses significant challenges due to its complex nature and its impact on the development of social communication. While numerous data analytics techniques have been proposed for ASD evaluation, the process remains time-consuming and lacks clarity. Eye tracking (ET) data has emerged as a valuable resource for ASD risk assessment, yet the existing literature focuses predominantly on predictive methods rather than descriptive techniques that offer human-friendly insights. Interpreting ET data together with the Bayley scales, a widely used developmental assessment tool, is challenging in the ASD assessment of children, and both must be clearly understood to support better analytic tasks in ASD screening. This study therefore addresses this gap by employing linguistic summarization techniques to generate easily understandable summaries from raw ET data and Bayley scales. By integrating ET data and Bayley scores, the study aims to improve the discrimination of children with ASD from typically developing (TD) children. Notably, this research represents one of the pioneering efforts to linguistically summarize ET data alongside the Bayley scales, presenting comparative results between children with ASD and TD children. Through linguistic summarization, this study facilitates the creation of simple, natural-language statements, offering a first and unique approach to enhancing ASD screening and contributing to our understanding of neurodevelopmental disorders.
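As a small illustration of linguistic summarization, the sketch below computes the truth degree of a Zadeh/Yager-style protoform such as "most fixations are short" from raw fixation durations; the membership functions and example values are assumptions, not the protoforms or thresholds used in the study.

```python
# Sketch only: truth degree of the linguistic summary "most fixations are short".
import numpy as np

def mu_short(duration_ms, full=200.0, zero=600.0):
    """Fuzzy membership of 'short fixation' (1 below `full`, 0 above `zero`)."""
    return np.clip((zero - duration_ms) / (zero - full), 0.0, 1.0)

def mu_most(proportion):
    """Fuzzy quantifier 'most': 0 below 0.3, 1 above 0.8, linear in between."""
    return np.clip((proportion - 0.3) / 0.5, 0.0, 1.0)

def summary_truth(durations_ms):
    """Truth degree of 'most fixations are short' for one child's record."""
    return float(mu_most(mu_short(np.asarray(durations_ms)).mean()))

print(summary_truth([180, 250, 320, 900, 210]))  # prints the truth degree in [0, 1]
```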
{"title":"Linguistic summarization of visual attention and developmental functioning of young children with autism spectrum disorder.","authors":"Demet Öztürk, Sena Aydoğan, İbrahim Kök, Işık Akın Bülbül, Selda Özdemir, Suat Özdemir, Diyar Akay","doi":"10.1007/s13755-024-00297-4","DOIUrl":"10.1007/s13755-024-00297-4","url":null,"abstract":"<p><p>Diagnosing autism spectrum disorder (ASD) in children poses significant challenges due to its complex nature and impact on social communication development. While numerous data analytics techniques have been proposed for ASD evaluation, the process remains time-consuming and lacks clarity. Eye tracking (ET) data has emerged as a valuable resource for ASD risk assessment, yet existing literature predominantly focuses on predictive methods rather than descriptive techniques that offer human-friendly insights. Interpretation of ET data and Bayley scales, a widely used assessment tool, is challenging for ASD assessment of children. It should be understood clearly to perform better analytic tasks on ASD screening. Therefore, this study addresses this gap by employing linguistic summarization techniques to generate easily understandable summaries from raw ET data and Bayley scales. By integrating ET data and Bayley scores, the study aims to improve the identification of children with ASD from typically developing children (TD). Notably, this research represents one of the pioneering efforts to linguistically summarize ET data alongside Bayley scales, presenting comparative results between children with ASD and TD. Through linguistic summarization, this study facilitates the creation of simple, natural language statements, offering a first and unique approach to enhance ASD screening and contribute to our understanding of neurodevelopmental disorders.</p>","PeriodicalId":46312,"journal":{"name":"Health Information Science and Systems","volume":"12 1","pages":"39"},"PeriodicalIF":4.7,"publicationDate":"2024-07-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11252111/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141634953","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-07-12 | eCollection Date: 2024-12-01 | DOI: 10.1007/s13755-024-00296-5
Sana Alazwari, Mashael Maashi, Jamal Alsamri, Mohammad Alamgeer, Shouki A Ebad, Saud S Alotaibi, Marwa Obayya, Samah Al Zanin
Laryngeal cancer (LC) represents a substantial global health problem, with diminished survival rates attributed to late-stage diagnosis. Treating LC correctly is complex, particularly in its final stages, as it is a complex malignancy of the head and neck region. Researchers have recently developed various analysis methods and tools to help medical consultants recognize LC efficiently. However, existing tools and techniques suffer from performance constraints such as low accuracy in detecting LC at early stages, additional computational complexity, and excessive time spent on patient screening. Deep learning (DL) approaches have proven effective in LC recognition. This study therefore develops an efficient LC detection method using Chaotic Metaheuristics Integration with DL (LCD-CMDL). The LCD-CMDL technique focuses on detecting and classifying LC from throat-region images. It uses the CLAHE approach for contrast enhancement and applies the Squeeze-and-Excitation ResNet (SE-ResNet) model for feature extraction, learning complex and intrinsic features from the preprocessed images. Moreover, the hyperparameters of the SE-ResNet approach are tuned using a chaotic adaptive sparrow search algorithm (CSSA). Finally, an extreme learning machine (ELM) model detects and classifies the LC. The LCD-CMDL approach was evaluated on a benchmark throat-region image database, and the experimental values show its superior performance over recent state-of-the-art approaches.
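A hedged sketch of two stages of the described pipeline, namely CLAHE contrast enhancement (OpenCV) and a squeeze-and-excitation block of the kind used inside SE-ResNet; the reduction ratio and channel count are assumptions, and the CSSA tuning and ELM classifier are not reproduced here.

```python
# Sketch only: CLAHE preprocessing plus a squeeze-and-excitation block.
import cv2
import torch
import torch.nn as nn

def enhance(gray_image, clip=2.0, tiles=(8, 8)):
    """Apply CLAHE to a single-channel (uint8) throat-region image."""
    clahe = cv2.createCLAHE(clipLimit=clip, tileGridSize=tiles)
    return clahe.apply(gray_image)

class SEBlock(nn.Module):
    """Squeeze-and-excitation: global pooling, bottleneck MLP, channel rescaling."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w  # re-weight feature maps channel-wise

# usage: SEBlock(256)(torch.rand(2, 256, 14, 14))
```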
{"title":"Improving laryngeal cancer detection using chaotic metaheuristics integration with squeeze-and-excitation resnet model.","authors":"Sana Alazwari, Mashael Maashi, Jamal Alsamri, Mohammad Alamgeer, Shouki A Ebad, Saud S Alotaibi, Marwa Obayya, Samah Al Zanin","doi":"10.1007/s13755-024-00296-5","DOIUrl":"10.1007/s13755-024-00296-5","url":null,"abstract":"<p><p>Laryngeal cancer (LC) represents a substantial world health problem, with diminished survival rates attributed to late-stage diagnoses. Correct treatment for LC is complex, particularly in the final stages. This kind of cancer is a complex malignancy inside the head and neck region of patients. Recently, researchers serving medical consultants to recognize LC efficiently develop different analysis methods and tools. However, these existing tools and techniques have various problems regarding performance constraints, like lesser accuracy in detecting LC at the early stages, additional computational complexity, and colossal time utilization in patient screening. Deep learning (DL) approaches have been established that are effective in the recognition of LC. Therefore, this study develops an efficient LC Detection using the Chaotic Metaheuristics Integration with the DL (LCD-CMDL) technique. The LCD-CMDL technique mainly focuses on detecting and classifying LC utilizing throat region images. In the LCD-CMDL technique, the contrast enhancement process uses the CLAHE approach. For feature extraction, the LCD-CMDL technique applies the Squeeze-and-Excitation ResNet (SE-ResNet) model to learn the complex and intrinsic features from the image preprocessing. Moreover, the hyperparameter tuning of the SE-ResNet approach is performed using a chaotic adaptive sparrow search algorithm (CSSA). Finally, the extreme learning machine (ELM) model was applied to detect and classify the LC. The performance evaluation of the LCD-CMDL approach occurs utilizing a benchmark throat region image database. The experimental values implied the superior performance of the LCD-CMDL approach over recent state-of-the-art approaches.</p>","PeriodicalId":46312,"journal":{"name":"Health Information Science and Systems","volume":"12 1","pages":"38"},"PeriodicalIF":3.4,"publicationDate":"2024-07-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11239646/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141617384","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-07-05 | eCollection Date: 2024-12-01 | DOI: 10.1007/s13755-024-00295-6
Ming Sheng, Shuliang Wang, Yong Zhang, Rui Hao, Ye Liang, Yi Luo, Wenhan Yang, Jincheng Wang, Yinan Li, Wenkui Zheng, Wenyao Li
Obtaining high-quality datasets from raw data is a key step before data exploration and analysis. In the medical domain, a large amount of data now requires quality improvement before it can be used to analyze patients' health conditions. Much research has addressed data extraction, data cleaning, and data imputation individually, but frameworks integrating all three techniques are rare, leaving datasets deficient in accuracy, consistency, and integrity. In this paper, a multi-source heterogeneous data enhancement framework based on a lakehouse, MHDP, is proposed, comprising three steps: data extraction, data cleaning, and data imputation. In the data extraction step, a data fusion technique handles multi-modal and multi-source heterogeneous data. In the data cleaning step, we propose HoloCleanX, which provides a convenient interactive procedure. In the data imputation step, multiple imputation (MI) and the SOTA algorithm SAITS are applied to different situations. We evaluate our framework on three tasks: clustering, classification, and strategy prediction. The experimental results prove the effectiveness of our data enhancement framework.
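A minimal sketch of the imputation step under stated assumptions: multiple imputation approximated with scikit-learn's IterativeImputer drawing several posterior samples. The SAITS model used for time-series gaps is not reproduced, and the toy matrix and number of imputations are illustrative.

```python
# Sketch only: multiple imputation via repeated posterior draws.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

def multiple_impute(X, n_imputations=5):
    """Return a list of completed copies of X, one per posterior draw."""
    completed = []
    for seed in range(n_imputations):
        imputer = IterativeImputer(sample_posterior=True, random_state=seed)
        completed.append(imputer.fit_transform(X))
    return completed

X = np.array([[1.0, 2.0], [np.nan, 3.0], [4.0, np.nan], [5.0, 6.0]])
draws = multiple_impute(X)
print(np.mean(draws, axis=0))  # pooled estimate across the imputations
```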
{"title":"A multi-source heterogeneous medical data enhancement framework based on lakehouse.","authors":"Ming Sheng, Shuliang Wang, Yong Zhang, Rui Hao, Ye Liang, Yi Luo, Wenhan Yang, Jincheng Wang, Yinan Li, Wenkui Zheng, Wenyao Li","doi":"10.1007/s13755-024-00295-6","DOIUrl":"10.1007/s13755-024-00295-6","url":null,"abstract":"<p><p>Obtaining high-quality data sets from raw data is a key step before data exploration and analysis. Nowadays, in the medical domain, a large amount of data is in need of quality improvement before being used to analyze the health condition of patients. There have been many researches in data extraction, data cleaning and data imputation, respectively. However, there are seldom frameworks integrating with these three techniques, making the dataset suffer in accuracy, consistency and integrity. In this paper, a multi-source heterogeneous data enhancement framework based on a lakehouse MHDP is proposed, which includes three steps of data extraction, data cleaning and data imputation. In the data extraction step, a data fusion technique is offered to handle multi-modal and multi-source heterogeneous data. In the data cleaning step, we propose HoloCleanX, which provides a convenient interactive procedure. In the data imputation step, multiple imputation (MI) and the SOTA algorithm SAITS, are applied for different situations. We evaluate our framework via three tasks: clustering, classification and strategy prediction. The experimental results prove the effectiveness of our data enhancement framework.</p>","PeriodicalId":46312,"journal":{"name":"Health Information Science and Systems","volume":"12 1","pages":"37"},"PeriodicalIF":3.4,"publicationDate":"2024-07-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11226589/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141555685","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-06-11 | eCollection Date: 2024-12-01 | DOI: 10.1007/s13755-024-00293-8
Mahmood Ul Hassan, Amin A Al-Awady, Naeem Ahmed, Muhammad Saeed, Jarallah Alqahtani, Ali Mousa Mohamed Alahmari, Muhammad Wasim Javed
Ocular diseases pose significant challenges for timely diagnosis and effective treatment. Deep learning has emerged as a promising technique in medical image analysis, offering potential solutions for accurately detecting and classifying ocular diseases. In this research, we propose Ocular Net, a novel deep learning model for detecting and classifying ocular diseases, including Cataracts, Diabetic, Uveitis, and Glaucoma, using a large dataset of ocular images. The study utilized an image dataset comprising 6200 images of both eyes of patients: 70% of these images (4000 images) were allocated for model training, while the remaining 30% (2200 images) were reserved for testing. The dataset contains images in five categories, covering the four diseases and one normal category. The proposed model uses transfer learning, average pooling layers, Clipped ReLU, Leaky ReLU and various other layers to accurately detect ocular diseases from images. Our approach involves training the novel Ocular Net model on diverse ocular images and evaluating its accuracy and performance metrics for disease detection. We also employ data augmentation techniques to improve model performance and mitigate overfitting. The proposed model is tested with different training and testing ratios and varied parameters. Additionally, we compare the performance of Ocular Net with previous methods on various evaluation parameters, assessing its potential for enhancing the accuracy and efficiency of ocular disease diagnosis. The results demonstrate that Ocular Net achieves 98.89% accuracy and a 0.12% loss value in detecting and classifying ocular diseases, outperforming existing methods.
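A generic transfer-learning sketch for the five-class setting (four diseases plus normal), assuming a pretrained ResNet-18 backbone from torchvision with a new classification head; this is not the Ocular Net layer stack itself, which uses its own average pooling and clipped/leaky ReLU layers.

```python
# Sketch only: transfer learning with a frozen pretrained backbone and a
# freshly initialised five-class head (requires torchvision >= 0.13).
import torch.nn as nn
from torchvision import models

def build_ocular_classifier(num_classes=5, freeze_backbone=True):
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    if freeze_backbone:
        for p in model.parameters():
            p.requires_grad = False          # train only the new head at first
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model

model = build_ocular_classifier()
```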
{"title":"A transfer learning enabled approach for ocular disease detection and classification.","authors":"Mahmood Ul Hassan, Amin A Al-Awady, Naeem Ahmed, Muhammad Saeed, Jarallah Alqahtani, Ali Mousa Mohamed Alahmari, Muhammad Wasim Javed","doi":"10.1007/s13755-024-00293-8","DOIUrl":"10.1007/s13755-024-00293-8","url":null,"abstract":"<p><p>Ocular diseases pose significant challenges in timely diagnosis and effective treatment. Deep learning has emerged as a promising technique in medical image analysis, offering potential solutions for accurately detecting and classifying ocular diseases. In this research, we propose Ocular Net, a novel deep learning model for detecting and classifying ocular diseases, including Cataracts, Diabetic, Uveitis, and Glaucoma, using a large dataset of ocular images. The study utilized an image dataset comprising 6200 images of both eyes of patients. Specifically, 70% of these images (4000 images) were allocated for model training, while the remaining 30% (2200 images) were designated for testing purposes. The dataset contains images of five categories that include four diseases, and one normal category. The proposed model uses transfer learning, average pooling layers, Clipped Relu, Leaky Relu and various other layers to accurately detect the ocular diseases from images. Our approach involves training a novel Ocular Net model on diverse ocular images and evaluating its accuracy and performance metrics for disease detection. We also employ data augmentation techniques to improve model performance and mitigate overfitting. The proposed model is tested on different training and testing ratios with varied parameters. Additionally, we compare the performance of the Ocular Net with previous methods based on various evaluation parameters, assessing its potential for enhancing the accuracy and efficiency of ocular disease diagnosis. The results demonstrate that Ocular Net achieves 98.89% accuracy and 0.12% loss value in detecting and classifying ocular diseases by outperforming existing methods.</p>","PeriodicalId":46312,"journal":{"name":"Health Information Science and Systems","volume":"12 1","pages":"36"},"PeriodicalIF":3.4,"publicationDate":"2024-06-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11164840/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141311973","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-05-03 | eCollection Date: 2024-12-01 | DOI: 10.1007/s13755-024-00292-9
Arturo Martinez-Rodrigo, Jose Carlos Castillo, Alicia Saz-Lara, Iris Otero-Luis, Iván Cavero-Redondo
Purpose: Understanding early vascular ageing has become crucial for preventing adverse cardiovascular events. In this respect, recent AI-based risk clustering models offer early detection strategies focused on healthy populations, yet their complexity limits clinical use. This work introduces a novel recommendation system embedded in a web app to assess and mitigate early vascular ageing risk, guiding patients towards improved cardiovascular health.
Methods: This system employs a methodology that calculates distances within multidimensional spaces and integrates cost functions to obtain personalized optimisation of recommendations. It also incorporates a classification system for determining the intensity levels of the clinical interventions.
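A toy sketch of the distance-plus-cost idea described in the Methods: each candidate recommendation shifts the patient's feature vector, is scored by how much closer it brings the patient to a healthy reference point minus an intervention cost, and its intensity is classified from the size of the change. The features, costs, and cut-offs are illustrative assumptions, not the authors' actual parameters.

```python
# Sketch only: distance reduction minus cost, plus a rough intensity label.
import numpy as np

def score(patient, healthy_ref, delta, cost, weight=1.0):
    before = np.linalg.norm(patient - healthy_ref)
    after = np.linalg.norm(patient + delta - healthy_ref)
    return (before - after) - weight * cost   # distance gained minus cost

def intensity(delta):
    """Rough intensity label from the size of the recommended change."""
    m = np.linalg.norm(delta)
    return "low" if m < 0.5 else "moderate" if m < 1.5 else "high"

patient = np.array([0.8, 1.9, 0.4])           # e.g. standardized risk features
healthy = np.array([0.0, 0.0, 0.0])
candidates = {"diet": np.array([-0.2, -0.6, 0.0]),
              "exercise": np.array([-0.1, -0.9, -0.2])}
costs = {"diet": 0.1, "exercise": 0.3}

best = max(candidates, key=lambda k: score(patient, healthy, candidates[k], costs[k]))
print(best, intensity(candidates[best]))
```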
Results: The recommendation system showed high efficiency in identifying and visualizing individuals at high risk of early vascular ageing among healthy patients. Additionally, the system demonstrated consistency and reliability in generating personalized recommendations across different levels of granularity, with an emphasis on moderate- or low-intensity recommendations, which could improve patient adherence to the intervention.
Conclusion: This tool might significantly aid healthcare professionals in their daily analysis, improving the prevention and management of cardiovascular diseases.
{"title":"Development of a recommendation system and data analysis in personalized medicine: an approach towards healthy vascular ageing.","authors":"Arturo Martinez-Rodrigo, Jose Carlos Castillo, Alicia Saz-Lara, Iris Otero-Luis, Iván Cavero-Redondo","doi":"10.1007/s13755-024-00292-9","DOIUrl":"10.1007/s13755-024-00292-9","url":null,"abstract":"<p><strong>Purpose: </strong>Understanding early vascular ageing has become crucial for preventing adverse cardiovascular events. To this respect, recent AI-based risk clustering models offer early detection strategies focused on healthy populations, yet their complexity limits clinical use. This work introduces a novel recommendation system embedded in a web app to assess and mitigate early vascular ageing risk, leading patients towards improved cardiovascular health.</p><p><strong>Methods: </strong>This system employs a methodology that calculates distances within multidimensional spaces and integrates cost functions to obtain personalized optimisation of recommendations. It also incorporates a classification system for determining the intensity levels of the clinical interventions.</p><p><strong>Results: </strong>The recommendation system showed high efficiency in identifying and visualizing individuals at high risk of early vascular ageing among healthy patients. Additionally, the system corroborated its consistency and reliability in generating personalized recommendations among different levels of granularity, emphasizing its focus on moderate or low-intensity recommendations, which could improve patient adherence to the intervention.</p><p><strong>Conclusion: </strong>This tool might significantly aid healthcare professionals in their daily analysis, improving the prevention and management of cardiovascular diseases.</p>","PeriodicalId":46312,"journal":{"name":"Health Information Science and Systems","volume":"12 1","pages":"34"},"PeriodicalIF":3.4,"publicationDate":"2024-05-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11068708/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140865388","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-04-28 | eCollection Date: 2024-12-01 | DOI: 10.1007/s13755-024-00290-x
Ayşe Ayyüce Demirbaş, Hüseyin Üzen, Hüseyin Fırat
Gastrointestinal (GI) disorders, encompassing conditions such as cancer and Crohn's disease, pose a significant threat to public health. Endoscopic examinations have become crucial for diagnosing and treating these disorders efficiently. However, the subjective nature of manual evaluation by gastroenterologists can lead to errors in disease classification. In addition, the difficulty of identifying diseased tissue in the GI tract and the high similarity between classes make this a challenging domain. Automated classification systems that use artificial intelligence to solve these problems have gained traction: automatic detection of diseases in medical images greatly aids diagnosis and reduces detection time. In this study, we propose a new architecture to enable research on computer-assisted diagnosis and automated disease detection for GI diseases. This architecture, called Spatial-Attention ConvMixer (SAC), extends the patch extraction technique at the core of the ConvMixer architecture with a spatial attention mechanism (SAM). The SAM enables the network to concentrate selectively on the most informative areas, assigning an importance to each spatial location within the feature maps. We employ the Kvasir dataset to assess the accuracy of classifying GI illnesses with the SAC architecture and compare our results with the Vanilla ViT, Swin Transformer, ConvMixer, MLPMixer, ResNet50, and SqueezeNet models. Our SAC method achieves 93.37% accuracy, while the other architectures achieve 79.52%, 74.52%, 92.48%, 63.04%, 87.44%, and 85.59%, respectively. The proposed spatial attention block thus improves the accuracy of the ConvMixer architecture on Kvasir, outperforming state-of-the-art methods with an accuracy of 93.37%.
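A minimal PyTorch sketch of a CBAM-style spatial-attention module applied after a ConvMixer-style patch embedding, in the spirit of the SAC design; the kernel size, patch size, and embedding width are assumptions rather than the paper's configuration.

```python
# Sketch only: spatial attention re-weighting ConvMixer-style patch features.
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg_map = x.mean(dim=1, keepdim=True)          # (B, 1, H, W)
        max_map, _ = x.max(dim=1, keepdim=True)        # (B, 1, H, W)
        attn = torch.sigmoid(self.conv(torch.cat([avg_map, max_map], dim=1)))
        return x * attn                                # emphasise informative regions

# e.g. applied after a ConvMixer-style patch embedding:
patch_embed = nn.Sequential(nn.Conv2d(3, 256, kernel_size=8, stride=8),
                            nn.GELU(), nn.BatchNorm2d(256))
features = SpatialAttention()(patch_embed(torch.rand(1, 3, 224, 224)))
```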
{"title":"Spatial-attention ConvMixer architecture for classification and detection of gastrointestinal diseases using the Kvasir dataset.","authors":"Ayşe Ayyüce Demirbaş, Hüseyin Üzen, Hüseyin Fırat","doi":"10.1007/s13755-024-00290-x","DOIUrl":"10.1007/s13755-024-00290-x","url":null,"abstract":"<p><p>Gastrointestinal (GI) disorders, encompassing conditions like cancer and Crohn's disease, pose a significant threat to public health. Endoscopic examinations have become crucial for diagnosing and treating these disorders efficiently. However, the subjective nature of manual evaluations by gastroenterologists can lead to potential errors in disease classification. In addition, the difficulty of diagnosing diseased tissues in GI and the high similarity between classes made the subject a difficult area. Automated classification systems that use artificial intelligence to solve these problems have gained traction. Automatic detection of diseases in medical images greatly benefits in the diagnosis of diseases and reduces the time of disease detection. In this study, we suggested a new architecture to enable research on computer-assisted diagnosis and automated disease detection in GI diseases. This architecture, called Spatial-Attention ConvMixer (SAC), further developed the patch extraction technique used as the basis of the ConvMixer architecture with a spatial attention mechanism (SAM). The SAM enables the network to concentrate selectively on the most informative areas, assigning importance to each spatial location within the feature maps. We employ the Kvasir dataset to assess the accuracy of classifying GI illnesses using the SAC architecture. We compare our architecture's results with Vanilla ViT, Swin Transformer, ConvMixer, MLPMixer, ResNet50, and SqueezeNet models. Our SAC method gets 93.37% accuracy, while the other architectures get respectively 79.52%, 74.52%, 92.48%, 63.04%, 87.44%, and 85.59%. The proposed spatial attention block improves the accuracy of the ConvMixer architecture on the Kvasir, outperforming the state-of-the-art methods with an accuracy rate of 93.37%.</p>","PeriodicalId":46312,"journal":{"name":"Health Information Science and Systems","volume":"12 1","pages":"32"},"PeriodicalIF":4.7,"publicationDate":"2024-04-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11056348/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140872890","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}