Pub Date: 2024-09-06. eCollection Date: 2024-12-01. DOI: 10.1007/s13755-024-00303-9
Eliseo Bao, Anxo Pérez, Javier Parapar
Users of social platforms often perceive these sites as supportive spaces to post about their mental health issues. Those conversations contain important traces of individuals' health risks. Recently, researchers have exploited this online information to construct mental health detection models, which aim to identify users at risk on platforms like Twitter, Reddit or Facebook. Most of these models focus on achieving good classification results while ignoring the explainability and interpretability of their decisions. Recent research has pointed out the importance of using clinical markers, such as the presence of symptoms, to improve health professionals' trust in computational models. In this paper, we introduce transformer-based architectures designed to detect and explain the appearance of depressive symptom markers in user-generated content from social media. We present two approaches: (i) training one model to classify and a separate model to explain the classifier's decisions, and (ii) unifying the two tasks within a single model. For the latter approach, we also investigate the performance of recent conversational Large Language Models (LLMs) using both in-context learning and fine-tuning. Our models provide natural language explanations aligned with validated symptoms, enabling clinicians to interpret the decisions more effectively. We evaluate our approaches on recent symptom-focused datasets, using both offline metrics and expert-in-the-loop evaluations to assess the quality of our models' explanations. Our findings demonstrate that it is possible to achieve good classification results while generating interpretable symptom-based explanations.
Explainable depression symptom detection in social media. Health Information Science and Systems 12(1): 47 (2024). Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11379836/pdf/
Heartbeat classification is a crucial tool for arrhythmia diagnosis. In this study, a multi-feature pseudo-color mapping (MfPc Mapping) was proposed, and a lightweight FlexShuffleNet was designed to classify heartbeats. MfPc Mapping converts one-dimensional (1-D) electrocardiogram (ECG) recordings into corresponding two-dimensional (2-D) multi-feature RGB graphs, offering good interpretability and data visualization. FlexShuffleNet is a lightweight network that can be adapted to classification tasks of varying complexity by tuning hyperparameters. The method has three steps. The first is data preprocessing, which includes de-noising the raw ECG recordings, removing baseline drift, extracting heartbeats, and balancing the data. The second is transforming the heartbeats using MfPc Mapping. Finally, FlexShuffleNet classifies the heartbeats into 14 categories. Evaluated on the test set of the MIT-BIH arrhythmia database (MIT-BIH DB), the method achieved an accuracy of 99.77%, sensitivity of 94.60%, precision of 89.83%, specificity of 99.85% and F1-score of 0.9125 on the 14-category classification task. Additionally, validation on the Shandong Province Hospital database (SPH DB) yielded an accuracy of 92.08%, sensitivity of 93.63%, precision of 91.25%, specificity of 99.85% and F1-score of 0.9315. The results show the satisfactory performance of the proposed method.
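The abstract does not give the details of MfPc Mapping. As an illustration of the general idea behind pseudo-color mapping (not the authors' exact method), a heartbeat and two derived features can be folded into the R, G and B channels of a 2-D image; function names and the choice of features here are hypothetical:

```python
import numpy as np

def to_unit(x):
    """Min-max normalize a 1-D array to [0, 1]."""
    x = np.asarray(x, dtype=float)
    rng = x.max() - x.min()
    return (x - x.min()) / rng if rng > 0 else np.zeros_like(x)

def pseudo_color_map(beat, side=16):
    """Fold a heartbeat and two derived features (slope and a moving
    average) into an RGB image of shape (side, side, 3).
    Illustrative only; the paper's MfPc Mapping may differ."""
    n = side * side
    beat = np.resize(np.asarray(beat, dtype=float), n)   # pad/crop to n samples
    slope = np.gradient(beat)                            # first-derivative feature
    avg = np.convolve(beat, np.ones(5) / 5, mode="same") # smoothed feature
    channels = [to_unit(f).reshape(side, side) for f in (beat, slope, avg)]
    rgb = np.stack(channels, axis=-1)
    return (rgb * 255).astype(np.uint8)

img = pseudo_color_map(np.sin(np.linspace(0, 6 * np.pi, 300)))
```

Any 2-D CNN can then consume `img` directly, which is what makes this family of mappings convenient for lightweight image classifiers.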
A lightweight network based on multi-feature pseudo-color mapping for arrhythmia recognition. Yijun Ma, Junyan Li, Jinbiao Zhang, Jilin Wang, Guozhen Sun, Yatao Zhang. DOI: 10.1007/s13755-024-00304-8. Health Information Science and Systems 12(1): 46 (published 2024-09-04). Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11371975/pdf/
Pub Date: 2024-09-03. eCollection Date: 2024-12-01. DOI: 10.1007/s13755-024-00298-3
Zhisheng Huang, Qing Hu
Adolescent suicide has become an important social issue of general concern. Many young people express their suicidal feelings and intentions through online social media, e.g., Twitter or Microblog. The "tree hole" is the Chinese name for places on the Web where people post secrets. It opens the possibility of using Artificial Intelligence and big data technology to detect posts expressing suicidal signals on those "tree hole" social media. We have developed Web-based intelligent agents (i.e., AI-based programs) that monitor the "tree hole" websites on Microblog every day using knowledge graph technology. We have organized the Tree-hole Rescue Team, which consists of more than 1000 volunteers, to carry out suicide rescue interventions according to the daily monitoring notifications. From 2018 to 2023, the Tree-hole Rescue Team prevented more than 6600 suicides; a few thousand people have been saved within those six years. In this paper, we present the basic technology of the Web-based Tree Hole intelligent agents and elaborate how the agents discover suicide attempts and issue the corresponding monitoring notifications, and how the volunteers of the Tree-hole Rescue Team conduct online suicide intervention. This research also shows that the knowledge graph approach can be used for semantic analysis of social media.
Tree hole rescue: an AI approach for suicide risk detection and online suicide intervention. Health Information Science and Systems 12(1): 45 (2024). Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11371955/pdf/
Pub Date: 2024-08-31. eCollection Date: 2024-12-01. DOI: 10.1007/s13755-024-00305-7
Umaisa Hassan, Amit Singhal
Purpose: Attention-deficit hyperactivity disorder (ADHD) is a significant psychiatric and neurodevelopmental disorder with global prevalence. The prevalence of ADHD among school children in India is estimated to range from 5% to 8%, although certain studies have reported rates as high as 11%. Utilizing electroencephalography (EEG) signals for the early detection and classification of ADHD in children is therefore crucial.
Methods: In this study, we introduce a CNN architecture characterized by its simplicity, comprising solely two convolutional layers. Our approach involves pre-processing EEG signals through a band-pass filter and segmenting them into 5-s frames. Following this, the frames undergo normalization and canonical correlation analysis. Subsequently, the proposed CNN architecture is employed for training and testing purposes.
Results: Our methodology yields remarkable results, with 100% accuracy, sensitivity, and specificity when utilizing the complete 19-channel EEG signals for diagnosing ADHD in children. However, employing the entire set of EEG channels raises computational complexity. Therefore, we investigate the feasibility of using only frontal-brain EEG channels for ADHD detection, which yields an accuracy of 99.08%.
Conclusions: The proposed method yields high accuracy and is easy to implement, hence, it has the potential for widespread practical deployment to diagnose ADHD.
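The segmentation and per-frame normalization steps in the Methods above can be sketched in a few lines of numpy. This is a minimal illustration assuming a 128 Hz sampling rate and synthetic data; the paper's band-pass filtering, canonical correlation analysis, and CNN are omitted:

```python
import numpy as np

def segment_and_normalize(eeg, fs=128, frame_sec=5):
    """Split a multi-channel EEG recording (channels x samples) into
    non-overlapping 5-s frames and z-score each frame per channel."""
    frame_len = fs * frame_sec
    n_frames = eeg.shape[1] // frame_len
    frames = eeg[:, : n_frames * frame_len].reshape(eeg.shape[0], n_frames, frame_len)
    frames = frames.transpose(1, 0, 2)  # -> (frames, channels, samples)
    mean = frames.mean(axis=-1, keepdims=True)
    std = frames.std(axis=-1, keepdims=True) + 1e-8  # avoid division by zero
    return (frames - mean) / std

rng = np.random.default_rng(0)
eeg = rng.standard_normal((19, 128 * 60))  # 19 channels, 60 s of data
frames = segment_and_normalize(eeg)        # -> (12, 19, 640)
```

Each resulting frame is a fixed-size `(19, 640)` array, a natural input shape for a small two-layer CNN like the one the paper describes.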
Convolutional neural network framework for EEG-based ADHD diagnosis in children. Health Information Science and Systems 12(1): 44 (2024). Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11365922/pdf/
Pub Date: 2024-08-24. eCollection Date: 2024-12-01. DOI: 10.1007/s13755-024-00302-w
Fatma Özcan
Cardiovascular disease, which remains one of the main causes of death, can be prevented through early diagnosis based on heart sounds. Certain noisy signals, known as murmurs, may be present in heart sounds. On auscultation, the degree of murmur is closely related to the patient's clinical condition. Computer-aided decision-making systems can help doctors detect murmurs and make faster decisions. Mel spectrograms were generated from raw phonocardiograms and then fed to the OpenL3 network for transfer learning. In this way, the signals were classified to predict the presence or absence of murmurs and their level of severity, using pitch level (healthy, low, medium, high) and the Levine scale (healthy, soft, loud). Strong results were obtained without prior segmentation. The trained model was then interpreted using an Explainable Artificial Intelligence (XAI) method, Occlusion Sensitivity. This approach shows that XAI methods are necessary to identify the features used internally by the artificial neural network and then to explain the automatic decision taken by the model. The averaged image of the occlusion sensitivity maps can provide either an overview or per-pixel detail of the features used. In healthcare, particularly cardiology, this work could provide more detail on the important features of the phonocardiogram for rapid diagnostic and preventive purposes.
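Occlusion Sensitivity is model-agnostic and simple to sketch. The following illustrative version (not the paper's implementation) slides a masking patch over a spectrogram-like input and records how much a scoring function drops; the toy scoring function stands in for a trained classifier's class probability:

```python
import numpy as np

def occlusion_map(image, score_fn, patch=4, fill=0.0):
    """Return a map where each cell holds the score drop caused by
    occluding the corresponding patch of the input."""
    h, w = image.shape
    base = score_fn(image)
    out = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = fill
            out[i // patch, j // patch] = base - score_fn(occluded)
    return out

# Toy "model": score is the total energy in the top-left quadrant,
# so only patches in that quadrant should matter.
score = lambda x: float(x[:8, :8].sum())
img = np.ones((16, 16))
sens = occlusion_map(img, score)  # high values only in the top-left cells
```

Averaging such maps over many inputs, as the paper describes, highlights which time-frequency regions of the phonocardiogram the network relies on.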
Rapid detection and interpretation of heart murmurs using phonocardiograms, transfer learning and explainable artificial intelligence. Health Information Science and Systems 12(1): 43 (2024). Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11344737/pdf/
Pub Date: 2024-08-12. eCollection Date: 2024-12-01. DOI: 10.1007/s13755-024-00301-x
Usharani Bhimavarapu
Diabetic retinopathy (DR), a complication of diabetes, damages the retina through prolonged high blood sugar levels, leading to vision impairment and blindness. Early detection through regular eye exams and proper diabetes management is crucial in preventing vision loss. DR is categorized into five classes based on severity, ranging from no retinopathy to proliferative diabetic retinopathy. This study proposes an automated detection method using fundus images. Image segmentation divides fundus images into homogeneous regions, facilitating feature extraction. Feature selection aims to reduce computational costs and improve classification accuracy by selecting relevant features. The proposed algorithm integrates an Improved Tunicate Swarm Algorithm (ITSA) with Rényi entropy for enhanced adaptability in the initial and final stages. An Improved Hybrid Butterfly Optimization (IHBO) algorithm is also introduced for feature selection. The effectiveness of the proposed method is demonstrated on retinal fundus image datasets, achieving promising results in DR severity classification. On the IDRiD dataset, the proposed model achieves a segmentation Dice coefficient of 98.06% and a classification accuracy of 98.21%; on the E-Optha dataset, it attains a segmentation Dice coefficient of 97.95% and a classification accuracy of 99.96%. Experimental results indicate the algorithm's ability to accurately classify DR severity levels, highlighting its potential for early detection and prevention of diabetes-related blindness.
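Rényi-entropy thresholding, which the ITSA optimizes here, can be shown in its simplest single-threshold form. This sketch does an exhaustive search instead of the swarm optimization, and the histogram is synthetic, so it illustrates only the criterion being maximized, not the authors' algorithm:

```python
import numpy as np

def renyi_entropy(p, alpha=2.0):
    """Renyi entropy of a probability distribution p, order alpha != 1."""
    p = p[p > 0]
    return np.log((p ** alpha).sum()) / (1.0 - alpha)

def renyi_threshold(hist, alpha=2.0):
    """Pick the gray level t maximizing the summed Renyi entropies of the
    background [0, t) and foreground [t, 255] class distributions."""
    p = hist / hist.sum()
    best_t, best_val = 0, -np.inf
    for t in range(1, len(p) - 1):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        val = renyi_entropy(p[:t] / w0, alpha) + renyi_entropy(p[t:] / w1, alpha)
        if val > best_val:
            best_t, best_val = t, val
    return best_t

# Bimodal toy histogram with peaks near gray levels 50 and 200.
levels = np.arange(256)
hist = np.exp(-((levels - 50) ** 2) / 200.0) + np.exp(-((levels - 200) ** 2) / 200.0)
t = renyi_threshold(hist)  # lands between the two modes
```

A multithresholding variant evaluates several thresholds jointly, which makes the search space large enough that metaheuristics such as the tunicate swarm become attractive.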
Optimized automated detection of diabetic retinopathy severity: integrating improved multithresholding tunicate swarm algorithm and improved hybrid butterfly optimization. Health Information Science and Systems 12(1): 42 (2024). Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11319704/pdf/
Purpose: The target-based strategy is a prevalent means of drug research and development (R&D), since targets provide the effector molecules of drug action and the foundation of pharmacological investigation. Recently, artificial intelligence (AI) technology has been utilized in various stages of drug R&D, where AI-assisted experimental methods show higher efficiency than purely experimental ones. There is a critical need for a comprehensive review of AI applications in drug R&D for the biopharmaceutical field.
Methods: Relevant literature about AI-assisted drug R&D was collected from public databases (including Google Scholar, Web of Science, PubMed, IEEE Xplore Digital Library, Springer, and ScienceDirect) through a keyword search with the following terms: [("Artificial Intelligence" OR "Knowledge Graph" OR "Machine Learning") AND ("Drug Target Identification" OR "New Drug Development")].
Results: In this review, we first introduce common strategies and novel trends in drug R&D, followed by a characterization of the AI algorithms widely used in drug R&D. We then describe in detail the applications of AI algorithms in target identification, lead compound identification and optimization, drug repurposing, and drug analytical platform construction. Finally, we discuss the challenges and prospects of AI-assisted methods for drug discovery.
Conclusion: Collectively, this review provides a comprehensive overview of AI applications in drug R&D and presents future perspectives for the biopharmaceutical field, which may promote the development of the drug industry.
Comprehensive applications of the artificial intelligence technology in new drug research and development. Hongyu Chen, Dong Lu, Ziyi Xiao, Shensuo Li, Wen Zhang, Xin Luan, Weidong Zhang, Guangyong Zheng. DOI: 10.1007/s13755-024-00300-y. Health Information Science and Systems 12(1): 41 (published 2024-08-08). Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11310389/pdf/
Background and objective: Timely and accurate detection of Autism Spectrum Disorder (ASD) is essential for early intervention and improved patient outcomes. This study aims to harness the power of machine learning (ML) techniques to improve ASD detection by incorporating temporal eye-tracking data. We developed a novel ML model to leverage eye scan paths, sequences of distances of eye movement, and a sequence of fixation durations, enhancing the temporal aspect of the analysis for more effective ASD identification.
Methods: We utilized a dataset of eye-tracking data without augmentation to train our ML model, which consists of a CNN-GRU-ANN architecture. The model was trained using gaze maps, the sequences of distances between eye fixations, and durations of fixations and saccades. Additionally, we employed a validation dataset to assess the model's performance and compare it with other works.
Results: Our ML model demonstrated superior performance in ASD detection compared to the VGG-16 model. By incorporating temporal information from eye-tracking data, our model achieved higher accuracy, precision, and recall. The addition of sequence-based features allowed our model to effectively distinguish between ASD and typically developing individuals, achieving a precision of 93.10% on the validation dataset.
Conclusion: This study presents an ML-based approach to ASD detection that incorporates temporal eye-tracking data. Our findings highlight the potential of temporal analysis for improved ASD detection and offer a promising direction for further advances in eye-tracking-based diagnosis and intervention for neurodevelopmental disorders.
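The two sequence features named in the Methods (distances between consecutive fixations, and fixation durations) can be derived with a short numpy sketch. The `(x, y, duration_ms)` fixation format here is hypothetical; the paper's preprocessing may differ:

```python
import numpy as np

def sequence_features(fixations):
    """From fixations given as (x, y, duration_ms) rows, derive the two
    temporal sequences used alongside gaze maps: Euclidean distances
    between consecutive fixations, and the fixation durations."""
    fx = np.asarray(fixations, dtype=float)
    xy, durations = fx[:, :2], fx[:, 2]
    distances = np.linalg.norm(np.diff(xy, axis=0), axis=1)  # saccade amplitudes
    return distances, durations

fixes = [(100, 100, 250), (103, 104, 180), (200, 100, 420)]
dists, durs = sequence_features(fixes)
```

The distance sequence is one element shorter than the fixation list, since each distance spans a pair of consecutive fixations; a recurrent component such as a GRU can consume both sequences directly.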
A novel multi-modal model to assist the diagnosis of autism spectrum disorder using eye-tracking data. Brahim Benabderrahmane, Mohamed Gharzouli, Amira Benlecheb. DOI: 10.1007/s13755-024-00299-2. Health Information Science and Systems 12(1): 40 (published 2024-08-03). Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11297859/pdf/
Pub Date : 2024-07-16eCollection Date: 2024-12-01DOI: 10.1007/s13755-024-00297-4
Demet Öztürk, Sena Aydoğan, İbrahim Kök, Işık Akın Bülbül, Selda Özdemir, Suat Özdemir, Diyar Akay
Diagnosing autism spectrum disorder (ASD) in children poses significant challenges due to its complex nature and impact on social communication development. While numerous data analytics techniques have been proposed for ASD evaluation, the process remains time-consuming and lacks clarity. Eye tracking (ET) data has emerged as a valuable resource for ASD risk assessment, yet existing literature predominantly focuses on predictive methods rather than descriptive techniques that offer human-friendly insights. Interpreting ET data and the Bayley scales, a widely used assessment tool, is challenging in the ASD assessment of children, and both must be clearly understood to support better analytic tasks in ASD screening. Therefore, this study addresses this gap by employing linguistic summarization techniques to generate easily understandable summaries from raw ET data and Bayley scales. By integrating ET data and Bayley scores, the study aims to improve the identification of children with ASD among typically developing children (TD). Notably, this research represents one of the pioneering efforts to linguistically summarize ET data alongside Bayley scales, presenting comparative results between children with ASD and TD. Through linguistic summarization, this study facilitates the creation of simple, natural language statements, offering a first and unique approach to enhance ASD screening and contribute to our understanding of neurodevelopmental disorders.
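A classical way to produce the kind of natural-language statements the abstract describes is Yager-style fuzzy linguistic summarization, where a summary such as "most fixations are short" is assigned a degree of truth. The membership functions and threshold values below are hypothetical placeholders, not the ones used in the paper; the sketch only shows the general mechanism.

```python
def mu_short(duration_ms):
    """Hypothetical membership function for the fuzzy term 'short'."""
    if duration_ms <= 200:
        return 1.0
    if duration_ms >= 500:
        return 0.0
    return (500 - duration_ms) / 300

def mu_most(proportion):
    """Hypothetical membership function for the quantifier 'most'."""
    if proportion <= 0.3:
        return 0.0
    if proportion >= 0.8:
        return 1.0
    return (proportion - 0.3) / 0.5

def truth_of_summary(durations):
    """Degree of truth of 'most fixations are short' in Yager's scheme:
    average the per-item memberships, then apply the quantifier."""
    proportion = sum(mu_short(d) for d in durations) / len(durations)
    return mu_most(proportion)

# Illustrative fixation durations (ms), not data from the study.
print(f"T('most fixations are short') = {truth_of_summary([150, 350, 520, 180, 400]):.2f}")
```

Summaries like this can be generated for each fuzzy term and quantifier pair and ranked by truth degree, yielding the human-friendly statements that distinguish ASD from TD gaze behavior.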
Diagnosing autism spectrum disorder (ASD) in children is a significant challenge due to its complex nature and its impact on social communication development. Although many data analytics techniques have been proposed for ASD evaluation, the process remains time-consuming and lacks clarity. Eye-tracking (ET) data has become a valuable resource for ASD risk assessment, yet the existing literature focuses mainly on predictive methods rather than descriptive techniques that offer human-friendly insights. Interpreting ET data and the Bayley scales, a widely used assessment tool, is challenging in the ASD assessment of children, and these data must be clearly understood to perform better analytic tasks in ASD screening. This study therefore addresses this gap by employing linguistic summarization techniques to generate easily understandable summaries from raw ET data and Bayley scales. By integrating ET data and Bayley scores, the study aims to improve the identification of children with ASD among typically developing (TD) children. Notably, this research represents one of the pioneering efforts to linguistically summarize ET data alongside the Bayley scales, presenting comparative results between children with ASD and TD children. Through linguistic summarization, the study facilitates the creation of simple, natural-language statements, offering a first and unique approach to enhancing ASD screening and contributing to our understanding of neurodevelopmental disorders.
{"title":"Linguistic summarization of visual attention and developmental functioning of young children with autism spectrum disorder.","authors":"Demet Öztürk, Sena Aydoğan, İbrahim Kök, Işık Akın Bülbül, Selda Özdemir, Suat Özdemir, Diyar Akay","doi":"10.1007/s13755-024-00297-4","DOIUrl":"10.1007/s13755-024-00297-4","url":null,"abstract":"<p><p>Diagnosing autism spectrum disorder (ASD) in children poses significant challenges due to its complex nature and impact on social communication development. While numerous data analytics techniques have been proposed for ASD evaluation, the process remains time-consuming and lacks clarity. Eye tracking (ET) data has emerged as a valuable resource for ASD risk assessment, yet existing literature predominantly focuses on predictive methods rather than descriptive techniques that offer human-friendly insights. Interpretation of ET data and Bayley scales, a widely used assessment tool, is challenging for ASD assessment of children. It should be understood clearly to perform better analytic tasks on ASD screening. Therefore, this study addresses this gap by employing linguistic summarization techniques to generate easily understandable summaries from raw ET data and Bayley scales. By integrating ET data and Bayley scores, the study aims to improve the identification of children with ASD from typically developing children (TD). Notably, this research represents one of the pioneering efforts to linguistically summarize ET data alongside Bayley scales, presenting comparative results between children with ASD and TD. 
Through linguistic summarization, this study facilitates the creation of simple, natural language statements, offering a first and unique approach to enhance ASD screening and contribute to our understanding of neurodevelopmental disorders.</p>","PeriodicalId":46312,"journal":{"name":"Health Information Science and Systems","volume":"12 1","pages":"39"},"PeriodicalIF":4.7,"publicationDate":"2024-07-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11252111/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141634953","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-07-12eCollection Date: 2024-12-01DOI: 10.1007/s13755-024-00296-5
Sana Alazwari, Mashael Maashi, Jamal Alsamri, Mohammad Alamgeer, Shouki A Ebad, Saud S Alotaibi, Marwa Obayya, Samah Al Zanin
Laryngeal cancer (LC) represents a substantial global health problem, with diminished survival rates attributed to late-stage diagnoses. Correct treatment of LC is complex, particularly in its final stages; this cancer is a complex malignancy of the head and neck region. Recently, researchers have developed various analysis methods and tools to help medical consultants recognize LC efficiently. However, these existing tools and techniques suffer from performance constraints, such as lower accuracy in detecting LC at early stages, additional computational complexity, and the considerable time consumed in patient screening. Deep learning (DL) approaches have proven effective in the recognition of LC. Therefore, this study develops an efficient LC Detection using Chaotic Metaheuristics Integration with DL (LCD-CMDL) technique. The LCD-CMDL technique focuses on detecting and classifying LC from throat-region images. In the LCD-CMDL technique, contrast enhancement is performed with the CLAHE approach. For feature extraction, the technique applies the Squeeze-and-Excitation ResNet (SE-ResNet) model to learn complex, intrinsic features from the preprocessed images. Moreover, the hyperparameters of the SE-ResNet approach are tuned using a chaotic adaptive sparrow search algorithm (CSSA). Finally, an extreme learning machine (ELM) model is applied to detect and classify LC. The performance of the LCD-CMDL approach is evaluated on a benchmark throat-region image database. The experimental values demonstrate the superior performance of the LCD-CMDL approach over recent state-of-the-art approaches.
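The "chaotic" ingredient in metaheuristics like CSSA is typically a chaotic map (e.g., the logistic map) used in place of a uniform random generator when sampling candidate solutions. The sketch below shows only that ingredient, driving a minimal hyperparameter search over a toy objective; it is not the full sparrow search algorithm from the paper, and the objective and bounds are invented for illustration.

```python
def logistic_map(x):
    """One step of the logistic map, a chaotic sequence commonly used
    to seed or perturb metaheuristics such as CSSA."""
    return 4.0 * x * (1.0 - x)

def chaotic_search(objective, bounds, iters=200, x0=0.7):
    """Minimal chaotic search: map chaotic values in (0, 1) onto each
    hyperparameter range and keep the best-scoring candidate."""
    x = x0
    best_params, best_score = None, float("inf")
    for _ in range(iters):
        params = []
        for lo, hi in bounds:
            x = logistic_map(x)          # deterministic but chaotic draw
            params.append(lo + x * (hi - lo))
        score = objective(params)
        if score < best_score:
            best_params, best_score = params, score
    return best_params, best_score

# Toy objective standing in for validation loss over (learning rate, weight decay).
obj = lambda p: (p[0] - 0.01) ** 2 + (p[1] - 1e-4) ** 2
params, score = chaotic_search(obj, bounds=[(1e-4, 0.1), (0.0, 1e-2)])
print(params, score)
```

In the full LCD-CMDL pipeline this role is played by CSSA tuning the SE-ResNet hyperparameters, with the objective being validation performance rather than a closed-form function.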
{"title":"Improving laryngeal cancer detection using chaotic metaheuristics integration with squeeze-and-excitation resnet model.","authors":"Sana Alazwari, Mashael Maashi, Jamal Alsamri, Mohammad Alamgeer, Shouki A Ebad, Saud S Alotaibi, Marwa Obayya, Samah Al Zanin","doi":"10.1007/s13755-024-00296-5","DOIUrl":"10.1007/s13755-024-00296-5","url":null,"abstract":"<p><p>Laryngeal cancer (LC) represents a substantial world health problem, with diminished survival rates attributed to late-stage diagnoses. Correct treatment for LC is complex, particularly in the final stages. This kind of cancer is a complex malignancy inside the head and neck region of patients. Recently, researchers serving medical consultants to recognize LC efficiently develop different analysis methods and tools. However, these existing tools and techniques have various problems regarding performance constraints, like lesser accuracy in detecting LC at the early stages, additional computational complexity, and colossal time utilization in patient screening. Deep learning (DL) approaches have been established that are effective in the recognition of LC. Therefore, this study develops an efficient LC Detection using the Chaotic Metaheuristics Integration with the DL (LCD-CMDL) technique. The LCD-CMDL technique mainly focuses on detecting and classifying LC utilizing throat region images. In the LCD-CMDL technique, the contrast enhancement process uses the CLAHE approach. For feature extraction, the LCD-CMDL technique applies the Squeeze-and-Excitation ResNet (SE-ResNet) model to learn the complex and intrinsic features from the image preprocessing. Moreover, the hyperparameter tuning of the SE-ResNet approach is performed using a chaotic adaptive sparrow search algorithm (CSSA). Finally, the extreme learning machine (ELM) model was applied to detect and classify the LC. The performance evaluation of the LCD-CMDL approach occurs utilizing a benchmark throat region image database. 
The experimental values implied the superior performance of the LCD-CMDL approach over recent state-of-the-art approaches.</p>","PeriodicalId":46312,"journal":{"name":"Health Information Science and Systems","volume":"12 1","pages":"38"},"PeriodicalIF":4.7,"publicationDate":"2024-07-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11239646/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141617384","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}