Alzheimer's disease is a complex neurodegenerative disorder that profoundly impacts millions of individuals worldwide, presenting significant challenges in both diagnosis and treatment. Recent advances in deep learning-based methods have shown promising potential for predicting disease progression using multimodal data. However, the majority of studies in this domain have predominantly focused on cross-sectional data, neglecting the crucial temporal dimension of the disease's progression. In this study, we propose a novel approach to predict the progression of Alzheimer's disease by leveraging a multimodal time-series forecasting system based on graph representation learning. Our approach incorporates a Temporal Graph Network encoder, employing k-nearest neighbors and Cumulative Bayesian Ridge with high correlation imputation to generate graph node embeddings at each time step. Furthermore, we employ an Encoder-Decoder architecture, where a Graph Attention Network translates a dynamic graph into node embeddings, and a decoder estimates future edge probabilities. When utilizing all available patient features in the ADNI dataset, our proposed method achieved an Area Under the Curve (AUC) of 0.8090 for dynamic edge prediction. Furthermore, for neuroimaging data, the AUC improved significantly to 0.8807.
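The abstract above describes a Graph Attention Network encoder that maps a dynamic graph into node embeddings and a decoder that scores future edge probabilities. The sketch below is a minimal, generic illustration of that encoder-decoder pattern using PyTorch Geometric's GATConv and a dot-product edge decoder; the layer sizes, the temporal handling, and the decoder form are assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
from torch_geometric.nn import GATConv  # assumes PyTorch Geometric is installed


class GATEncoder(nn.Module):
    """Two-layer graph attention encoder: node features -> node embeddings."""

    def __init__(self, in_dim: int, hid_dim: int, heads: int = 4):
        super().__init__()
        self.gat1 = GATConv(in_dim, hid_dim, heads=heads)
        self.gat2 = GATConv(hid_dim * heads, hid_dim, heads=1)

    def forward(self, x, edge_index):
        h = torch.relu(self.gat1(x, edge_index))
        return self.gat2(h, edge_index)


class DotProductEdgeDecoder(nn.Module):
    """Scores candidate edges as the sigmoid of the embedding dot product."""

    def forward(self, z, edge_index):
        src, dst = edge_index
        return torch.sigmoid((z[src] * z[dst]).sum(dim=-1))


if __name__ == "__main__":
    x = torch.randn(6, 16)                        # 6 patient nodes, 16 features each
    edge_index = torch.tensor([[0, 1, 2, 3],       # observed edges at one time step
                               [1, 2, 3, 4]])
    z = GATEncoder(16, 32)(x, edge_index)
    probs = DotProductEdgeDecoder()(z, edge_index)  # predicted edge probabilities
    print(probs.shape)                              # torch.Size([4])
```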
{"title":"Predictive modeling of Alzheimer's disease progression: Integrating temporal clinical factors and outcomes in time series forecasting","authors":"K.H. Aqil , Prashanth Dumpuri , Keerthi Ram , Mohanasankar Sivaprakasam","doi":"10.1016/j.ibmed.2024.100159","DOIUrl":"10.1016/j.ibmed.2024.100159","url":null,"abstract":"<div><p>Alzheimer's disease is a complex neurodegenerative disorder that profoundly impacts millions of individuals worldwide, presenting significant challenges in both diagnosis and treatment. Recent advances in deep learning-based methods have shown promising potential for predicting disease progression using multimodal data. However, the majority of studies in this domain have predominantly focused on cross-sectional data, neglecting the crucial temporal dimension of the disease's progression. In this study, we propose a novel approach to predict the progression of Alzheimer's disease by leveraging a multimodal time-series forecasting system based on graph representation learning. Our approach incorporates a Temporal Graph Network encoder, employing k-nearest neighbors and Cumulative Bayesian Ridge with high correlation imputation to generate graph node embeddings at each time step. Furthermore, we employ an Encoder-Decoder architecture, where a Graph Attention Network translates a dynamic graph into node embeddings, and a decoder estimates future edge probabilities. When utilizing all available patient features in the ADNI dataset, our proposed method achieved an Area Under the Curve (AUC) of 0.8090 for dynamic edge prediction. Furthermore, for neuroimaging data, the AUC improved significantly to 0.8807.</p></div>","PeriodicalId":73399,"journal":{"name":"Intelligence-based medicine","volume":"10 ","pages":"Article 100159"},"PeriodicalIF":0.0,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2666521224000267/pdfft?md5=966a05e54125ad7b71aab383d1ad9557&pid=1-s2.0-S2666521224000267-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141736590","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Autism Spectrum Disorders (ASD) are among the most serious health problems facing our generation [1]. ASD affects around one in every 54 children and causes difficulties with social interaction, communication [2], and repetitive behaviors [3]. The development of robust neuroimaging biomarkers is a crucial step in diagnosing and tailoring medical care for autism spectrum disorder [4]. Volumetric studies focused on 3D MRI texture features have shown a high capacity for detecting abnormalities and characterizing variations caused by tissue heterogeneity, and such features have recently attracted comprehensive study. However, only a few studies have investigated the link between regional texture and ASD. This paper proposes a framework based on geometric texture features to analyze the variations between ASD and development control (DC) subjects. Our study uses 1114 T1-weighted MRI scans from two groups of subjects: 521 individuals with ASD and 593 controls (age range: 6-64 years) [5], divided into three broad age groups. We then computed features from automatically labeled subcortical and cortical regions and encoded them as texture features by applying seven global Riemannian geometry descriptors and eight local features based on standard Haralick quantifier functions. Significance tests were used to identify texture volumetric differences between ASD and DC subjects. The most discriminative features are selected by applying the Correlation Matrix, and these features are used to classify the two classes using an Artificial Neural Network. Preliminary results indicate that in ASD subjects, all 15 structure-derived features and subcortical regions tested have significantly different distributions from DC subjects.
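As a concrete illustration of the local texture quantifiers mentioned above, the snippet below computes standard Haralick-style gray-level co-occurrence matrix (GLCM) statistics for a 2D region with scikit-image; the distances, angles, quantization, and property list are illustrative choices, not the study's exact configuration.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops


def haralick_like_features(region: np.ndarray, levels: int = 32) -> dict:
    """Compute GLCM-based texture statistics for one labeled brain region (2D slice)."""
    # Quantize intensities to a small number of gray levels for a compact GLCM.
    edges = np.linspace(region.min(), region.max() + 1e-6, levels)
    quantized = (np.digitize(region, edges) - 1).astype(np.uint8)
    glcm = graycomatrix(quantized,
                        distances=[1],                        # 1-pixel offsets (illustrative)
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=levels, symmetric=True, normed=True)
    props = ["contrast", "dissimilarity", "homogeneity", "energy", "correlation", "ASM"]
    return {p: float(graycoprops(glcm, p).mean()) for p in props}


if __name__ == "__main__":
    slice_2d = np.random.randint(0, 255, size=(64, 64))   # stand-in for an MRI region
    print(haralick_like_features(slice_2d))
```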
{"title":"Automatic characterization of cerebral MRI images for the detection of autism spectrum disorders","authors":"Nour El Houda Mezrioui , Kamel Aloui , Amine Nait-Ali , Mohamed Saber Naceur","doi":"10.1016/j.ibmed.2023.100127","DOIUrl":"https://doi.org/10.1016/j.ibmed.2023.100127","url":null,"abstract":"<div><p>Autism Spectrum Disorders (ASD) are one of the most serious health problems that our generation is facing [1]. It affects around one out of every 54 children and causes issues with social interaction, communication [2] and repetitive behaviors [3]. The development of full biomarkers for neuroimaging is a crucial step in diagnosing and tailoring medical care for autism spectrum disorder [4]. Volumetric studies focused on 3D MRI texture features have shown a high capacity for detecting abnormalities and characterizing variations caused by tissue heterogeneity. Recently, it has been the interest of comprehensive studies. However, only a few studies have aimed to investigate the link between object texture and ASD. This paper suggests a framework based on geometric texture features analyzing the variations between ASD and development control (DC) subjects. Our study uses 1114 T1-weighted MRI scans from two groups of subjects: 521 individuals with ASD and 593 controls (age range: 6–64 years) [5], divided into three broad age groups. We then computed the features from automatically labeled subcortical and cortical regions and encoded them as texture features by applying seven global Riemannian geometry descriptors and eight local features of standard Harlicks quantifier functions. Significant tests were used to identify texture volumetric differences between ASD and DC subjects. The most discriminative features are selected by applying the Correlation Matrix, and these features are used to classify the two classes using an Artificial Neural Network analysis. Preliminary results indicate that in ASD subjects, all 15 structure-derived features and subcortical regions tested have significantly different distributions from DC subjects.</p></div>","PeriodicalId":73399,"journal":{"name":"Intelligence-based medicine","volume":"9 ","pages":"Article 100127"},"PeriodicalIF":0.0,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2666521223000418/pdfft?md5=52f7350c7f1b4866d790132947d0352d&pid=1-s2.0-S2666521223000418-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139737405","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-01-01. DOI: 10.1016/j.ibmed.2024.100187
Jason Le, Oisín Butler, Ann-Kathrin Frenz, Ankur Sharma
Purpose
We sought to compare the performance of AI applications in real-world studies to validation study data used to gain regulatory approval.
Methods
We searched PubMed, EBSCO, and EMBASE for publications from 2018 to 2023. We included articles that evaluated the sensitivity and specificity of intracranial hemorrhage (ICH) and large vessel occlusion (LVO) detection applications in real-world populations. We performed a quality and applicability assessment using QUADAS-2. We used a bivariate model or two separate univariate meta-analyses, as appropriate, to calculate summary point estimates for sensitivity and specificity.
Results
Eighteen articles met the criteria of the systematic literature review. The included articles evaluated five applications indicated for ICH or LVO triage. Three of the five applications yielded adequate studies to be included in the meta-analysis. For most applications, we did not observe any systematic differences in sensitivity and specificity results between the point estimates from the meta-analysis and the respective 510k studies. For VIZ LVO and RAPID LVO, the 95 % CI for real-world sensitivity sat within the 95 % CI from their respective validation study. For BriefCase ICH, the 95 % CI for real-world sensitivity sat below the 95 % CI of the respective validation study. Additionally, the 95 % CI for real-world specificity for all three of the applications sat within the 95 % CI of their respective validation studies. Data from the individual real-world studies for RAPID ICH and CINA LVO followed a similar trend.
Conclusion
The performance of applications in real-world settings was non-inferior to the performance observed in validation studies used to obtain 510k clearance.
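The Methods above pool sensitivity and specificity across studies using bivariate or univariate meta-analysis. The sketch below shows only the simpler ingredient, a univariate DerSimonian-Laird random-effects pooling of logit-transformed sensitivities; it illustrates the general technique rather than the models or data used in the review, and the study counts are invented.

```python
import numpy as np


def pool_logit_sensitivity(tp, fn):
    """DerSimonian-Laird random-effects pooling of per-study sensitivities.

    Returns the pooled sensitivity and its 95% CI on the probability scale.
    """
    tp, fn = np.asarray(tp, float), np.asarray(fn, float)
    sens = tp / (tp + fn)
    y = np.log(sens / (1 - sens))             # logit-transformed sensitivity
    v = 1.0 / tp + 1.0 / fn                    # approximate variance of the logit
    w = 1.0 / v                                # fixed-effect weights
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed) ** 2)         # Cochran's Q
    k = len(y)
    tau2 = max(0.0, (q - (k - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
    w_star = 1.0 / (v + tau2)                  # random-effects weights
    y_re = np.sum(w_star * y) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    expit = lambda t: 1.0 / (1.0 + np.exp(-t)) # back-transform to a proportion
    return expit(y_re), (expit(y_re - 1.96 * se), expit(y_re + 1.96 * se))


if __name__ == "__main__":
    tp = [45, 88, 30, 120]                     # hypothetical true positives per study
    fn = [5, 9, 6, 14]                         # hypothetical false negatives per study
    print(pool_logit_sensitivity(tp, fn))
```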
{"title":"Systematic literature review and meta-analysis for real-world versus clinical validation performance of artificial intelligence applications indicated for ICH and LVO detection","authors":"Jason Le , Oisín Butler , Ann-Kathrin Frenz , Ankur Sharma","doi":"10.1016/j.ibmed.2024.100187","DOIUrl":"10.1016/j.ibmed.2024.100187","url":null,"abstract":"<div><h3>Purpose</h3><div>We sought to compare the performance of AI applications in real-world studies to validation study data used to gain regulatory approval.</div></div><div><h3>Methods</h3><div>We searched PubMed, EBSCO, and EMBASE for publications from 2018 to 2023. We included articles that evaluated the sensitivity and specificity of ICH and LVO detection applications in real-world populations. We performed a quality and applicability assessment using QUADAS-2. We used a bivariate or two univariate meta-analyses, where appropriate, to calculate summary point estimates for sensitivity and specificity.</div></div><div><h3>Results</h3><div>Eighteen articles met the criteria of the systematic literature review. The included articles evaluated five applications indicated for ICH or LVO triage. Three of the five applications yielded adequate studies to be included in the meta-analysis. For most applications, we did not observe any systematic differences in sensitivity and specificity results between the point estimates from the meta-analysis and the respective 510k studies. For VIZ LVO and RAPID LVO, the 95 % CI for real-world sensitivity sat within the 95 % CI from their respective validation study. For BriefCase ICH, the 95 % CI for real-world sensitivity sat below the 95 % CI of the respective validation study. Additionally, the 95 % CI for real-world specificity for all three of the applications sat within the 95 % CI of their respective validation studies. Data from the individual real-world studies for RAPID ICH and CINA LVO followed a similar trend.</div></div><div><h3>Conclusion</h3><div>The performance of applications in real-world settings was non-inferior to the performance observed in validation studies used to obtain 510k clearance.</div></div>","PeriodicalId":73399,"journal":{"name":"Intelligence-based medicine","volume":"10 ","pages":"Article 100187"},"PeriodicalIF":0.0,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142662592","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-01-01. DOI: 10.1016/j.ibmed.2024.100181
Shihabudeen H., Rajeesh J.
Medical imaging has been widely used to diagnose diseases over the past two decades. The limited information provided by any single modality makes it difficult for medical experts to reach a diagnosis. Image fusion techniques enable the integration of images depicting various tissues and disorders from multiple medical imaging devices, facilitating enhanced research and treatment by providing complementary information through multimodal medical image fusion. The proposed work employs the nuclear norm and residual connections to combine the complementary features from both CT and MRI imaging, and an autoencoder produces the merged image. In the following phase, the fused images are categorized as benign or malignant using the presented Radial Basis Function Network (RBFN). The performance measures, such as Mutual Information, Structural Similarity Index Measure, Q_w, and Q_e, showed improved values of 4.6328, 0.6492, 0.8300, and 0.8185, respectively, when compared with different fusion methods. Additionally, the classification algorithm yields 97% accuracy, 89% precision, and 92% recall when combined with the proposed fusion algorithm.
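The fusion network described above combines a reconstruction objective with a nuclear-norm term. The snippet below is a hedged sketch of what such a composite loss can look like in PyTorch; the L1 fidelity terms, the weighting, and the way the nuclear norm is applied are assumptions, since the abstract does not spell out the exact formulation.

```python
import torch


def fusion_loss(fused: torch.Tensor, ct: torch.Tensor, mri: torch.Tensor,
                lam: float = 0.1) -> torch.Tensor:
    """Illustrative fusion objective: fidelity to both modalities plus a nuclear-norm term.

    `fused`, `ct`, `mri` are single-channel images shaped (H, W).
    """
    recon = (fused - ct).abs().mean() + (fused - mri).abs().mean()   # L1 fidelity (assumed)
    nuclear = torch.linalg.matrix_norm(fused, ord="nuc")             # sum of singular values
    return recon + lam * nuclear


if __name__ == "__main__":
    h = w = 64
    ct, mri = torch.rand(h, w), torch.rand(h, w)
    fused = ((ct + mri) / 2).requires_grad_(True)   # stand-in for an autoencoder output
    loss = fusion_loss(fused, ct, mri)
    loss.backward()                                  # gradients flow through the nuclear norm
    print(float(loss))
```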
{"title":"NUC-Fuse: Multimodal medical image fusion using nuclear norm & classification of brain tumors using ARBFN","authors":"Shihabudeen H. , Rajeesh J.","doi":"10.1016/j.ibmed.2024.100181","DOIUrl":"10.1016/j.ibmed.2024.100181","url":null,"abstract":"<div><div>Medical imaging has been widely used to diagnose diseases over the past two decades. The lack of information in this field makes it difficult for medical experts to diagnose diseases with a single modality. The combination of image fusion techniques enables the integration of pictures depicting various tissues and disorders from multiple medical imaging devices, facilitating enhanced research and treatment by providing complementary information through multimodal medical imaging fusion. The proposed work employs the nuclear norm and residual connections to combine the complementary features from both CT and MRI imaging approaches. The autoencoder eventually creates a merged image. The fused pictures are categorized as benign or malignant in the following phase using the present Radial Basis Function Network (RBFN). The performance measures, such as Mutual Information, Structural Similarity Index Measure, <span><math><msub><mrow><mi>Q</mi></mrow><mrow><mi>w</mi></mrow></msub></math></span>, and <span><math><msub><mrow><mi>Q</mi></mrow><mrow><mi>e</mi></mrow></msub></math></span>, have shown improved values, specifically 4.6328, 0.6492, 0.8300, and 0.8185 respectively, when compared with different fusion methods. Additionally, the classification algorithm yields 97% accuracy, 89% precision, and 92% recall when combined with the proposed fusion algorithm.</div></div>","PeriodicalId":73399,"journal":{"name":"Intelligence-based medicine","volume":"10 ","pages":"Article 100181"},"PeriodicalIF":0.0,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142578623","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-01-01. DOI: 10.1016/j.ibmed.2024.100179
Mohammad Hossein Abbasi, Melek Somai, Hamidreza Saber
Background
Artificial Intelligence (AI) is an increasingly popular research focus across multiple areas of science. Characterizing the trend of AI-based clinical research in different fields of medicine and identifying the shortcomings of those trials will guide researchers and future studies.
Methods
We systematically reviewed trials registered in ClinicalTrials.gov that apply AI in clinical research. We explored the trend of AI-applied clinical research and described the design and conduct of such trials. We also examined high-quality trials to characterize their enrollees and other study features.
Results
Our search yielded 839 trials involving a direct application of AI, of which 330 (39.3 %) were interventional and the rest were observational (60.7 %). Most of the studies aimed to improve diagnosis (70.2 %); in less than a quarter of trials, management was targeted (22.8 %), and AI was implemented in an acute setting in 13 % of trials. The gastrointestinal, cardiovascular, and neurological fields were the main areas of medicine applying AI in their research. High-quality published AI trials showed good generalizability in terms of their enrollees' characteristics, with an average age of 52.46 years and 50.28 % female participants.
Conclusion
The incorporation of AI in different fields of medicine needs to be more balanced, and attempts should be made to broaden the spectrum of AI-based clinical research and to improve its deployment in real-world practice.
{"title":"The trend of artificial intelligence application in medicine and neurology; the state-of-the-art systematic scoping review 2010–2022","authors":"Mohammad Hossein Abbasi , Melek Somai , Hamidreza Saber","doi":"10.1016/j.ibmed.2024.100179","DOIUrl":"10.1016/j.ibmed.2024.100179","url":null,"abstract":"<div><h3>Background</h3><div>Artificial Intelligence (AI) is an increasingly popular research focus for multiple areas of science. The trend of using AI-based clinical research in different fields of medicine and defining the shortcomings of those trials will guide researchers and future studies.</div></div><div><h3>Methods</h3><div>We systematically reviewed trials registered in <span><span>ClinicalTrials.gov</span><svg><path></path></svg></span> that apply AI in clinical research. We explored the trend of AI-applied clinical research and described the design and conduct of such trials. Also, we considered high-quality trials to represent their enrollees’ and other characteristics.</div></div><div><h3>Results</h3><div>Our search yielded 839 trials involving a direct application of AI, among which 330 (39.3 %) trials were interventional, and the rest were observational (60.7 %). Most of the studies aimed to improve diagnosis (70.2 %); in less than a quarter of trials, management was targeted (22.8 %), and AI was implemented in an acute setting (13 %). Gastrointestinal, cardiovascular, and neurology were the significant fields of medicine with the application of AI in their research. High-quality published AI trials showed good generalizability in terms of their enrollees’ characteristics, with an average age of 52.46 years old and 50.28 % female participants.</div></div><div><h3>Conclusion</h3><div>The incorporation of AI in different fields of medicine needs to be more balanced, and attempts should be made to broaden the spectrum of AI-based clinical research and to improve its deployment in real-world practice.</div></div>","PeriodicalId":73399,"journal":{"name":"Intelligence-based medicine","volume":"10 ","pages":"Article 100179"},"PeriodicalIF":0.0,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142572059","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Chronic kidney disease (CKD) is becoming an increasingly significant health issue, especially in low-income countries where access to affordable treatment is limited. CKD is also associated with a range of related conditions, including liver failure, diabetes, anemia, nerve damage, inflammation, peroxidation, and obesity. Early prediction of CKD is therefore important for preserving kidney function. In recent times, IoT has been widely adopted across healthcare sectors through the incorporation of monitoring devices such as digital sensors and medical devices for remote patient monitoring. To address this problem, this research proposes a conceptual architecture for CKD detection. The sensor layer of the architecture includes IoT devices to collect data, and the proposed classifier, an MLP (Multi-Layer Perceptron), utilizes the Anova-F feature selection technique to effectively detect CKD. In addition to the MLP, four other classifiers, namely an ANN (Artificial Neural Network), a simple RNN (Recurrent Neural Network), a GRU (Gated Recurrent Unit), and an SVM (Support Vector Machine), are employed for comparative analysis of accuracy. Furthermore, three additional feature selection techniques, namely Chi-squared, SFFS (Sequential Floating Forward Selection), and SBFS (Sequential Backward Floating Selection), are used to evaluate their impact on the accuracy of CKD detection. Our proposed method outperforms all other approaches with a remarkable accuracy of 99 % while maintaining efficient computational time. This advancement is crucial for developing a highly accurate system capable of predicting CKD in remote areas with ease.
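The pipeline described above pairs ANOVA F-test feature selection with an MLP classifier. A minimal scikit-learn sketch of that combination follows; the synthetic data, the number of selected features, and the MLP hyperparameters are placeholders, not the study's settings.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for tabular CKD records (24 features, binary label).
X, y = make_classification(n_samples=400, n_features=24, n_informative=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# The ANOVA F-test keeps the k most discriminative features before the MLP.
model = make_pipeline(
    StandardScaler(),
    SelectKBest(score_func=f_classif, k=12),
    MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0),
)
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))
```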
{"title":"A conceptual IoT framework based on Anova-F feature selection for chronic kidney disease detection using deep learning approach","authors":"Md Morshed Ali, Md Saiful Islam, Mohammed Nasir Uddin, Md. Ashraf Uddin","doi":"10.1016/j.ibmed.2024.100170","DOIUrl":"10.1016/j.ibmed.2024.100170","url":null,"abstract":"<div><div>Chronic kidney disease (CKD) is becoming an increasingly significant health issue, especially in low-income countries where access to affordable treatment is limited. Additionally, CKD is associated with various dietary factors, including liver failure, diabetes, anemia, nerve damage, inflammation, peroxidation, obesity, and other related conditions. Therefore, early prediction of CKD is important to progress the functionality of the kidney. In recent times, IoT has been widely used in a diversity of healthcare sectors through the incorporation of monitoring devices such as digital sensors and medical devices for patient monitoring from remote places. To overcome the problem, this research proposed a conceptual architecture for CKD detection. The sensor layer of the architecture includes IoT devices to collect data and the proposed classifier, MLP (Multi-Layer Perceptron), utilizes the Anova-F feature selection technique to effectively detect CKD (Chronic Kidney Disease). In addition to MLP, four other classifiers including ANN (Artificial Neural Network), Simple RNN (Recurrent Neural Network), GRU (Gated Recurrent Unit), and SVM (Support Vector Machine), are employed for comparative analysis of accuracy. Furthermore, three additional feature selection techniques, namely Chi-squared, SFFS (Sequential Floating Forward Selection), and SBFS (Sequential Backward Floating Selection), are utilized to evaluate their impact on the accuracy of CKD detection. Our proposed method outperforms all other approaches with a remarkable accuracy of 99 % while maintaining efficient computational time. This advancement is crucial in developing a highly accurate machine capable of predicting CKD in remote areas with ease.</div></div>","PeriodicalId":73399,"journal":{"name":"Intelligence-based medicine","volume":"10 ","pages":"Article 100170"},"PeriodicalIF":0.0,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142532833","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-01-01. DOI: 10.1016/j.ibmed.2024.100169
Yujin Nam, Jooae Choe, Sang Min Lee, Joon Beom Seo, Hyunna Lee
Objective
When reconstructing a computed tomography (CT) volume, different filter kernels can be used to highlight different structures depending on the medical purpose. The aim of this study was to perform intra- and inter-vendor kernel conversion of CT images while preserving image quality.
Materials and methods
This study used CT scans from 632 patients who underwent contrast-enhanced chest CT on either a GE or Siemens scanner. Raw data from each CT scan was reconstructed with the Standard and Chest kernels of GE or the B10f, B30f, B50f, and B70f kernels of Siemens. In the intra-vendor setting, every image reconstructed with one kernel is paired with its counterpart reconstructed with another kernel, so a U-Net-based supervised method was applied. In the inter-vendor setting, where the input and target kernels come from different vendors, conversion between Siemens' B30f and GE's Standard kernel was trained through unsupervised image-to-image translation using contrastive learning.
Results
In the intra-vendor, quantitative evaluation of the image quality of our model showed reasonable performance on the internal test set (structural similarity index measure (SSIM) > 0.96, peak signal-to-noise ratio (PSNR) > 42.55) compared with the SR-block model (SSIM > 0.93, PSNR > 42.92). In the 6-class classification to evaluate the inter-vendor conversion performance, similar accuracy was shown in the converted image (0.977) compared to the original image (0.998).
Conclusions
In this study, we developed a network that can translate a given CT image into a target kernel across multiple vendors. Our model showed clinically acceptable quality in quantitative and qualitative evaluations, including image quality metrics.
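The Results above report SSIM and PSNR for kernel-converted images. The snippet below shows how those two metrics can be computed with scikit-image for a converted/target image pair; the synthetic arrays and the data_range choice are illustrative, not the study's evaluation code.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

# Stand-ins for a kernel-converted slice and its target-kernel reference (HU-like values).
rng = np.random.default_rng(0)
target = rng.normal(loc=0.0, scale=200.0, size=(512, 512))
converted = target + rng.normal(scale=5.0, size=target.shape)   # small residual error

data_range = target.max() - target.min()   # required for floating-point images
ssim = structural_similarity(target, converted, data_range=data_range)
psnr = peak_signal_noise_ratio(target, converted, data_range=data_range)
print(f"SSIM={ssim:.4f}, PSNR={psnr:.2f} dB")
```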
{"title":"A hybrid of supervised and unsupervised deep learning models for multi-vendor kernel conversion of chest CT images","authors":"Yujin Nam , Jooae Choe , Sang Min Lee , Joon Beom Seo , Hyunna Lee","doi":"10.1016/j.ibmed.2024.100169","DOIUrl":"10.1016/j.ibmed.2024.100169","url":null,"abstract":"<div><h3>Objective</h3><div>When reconstructing a computed tomography (CT) volume, different filter kernels can be used to highlight different structures depending on the medical purpose. The aim of this study was to perform CT conversion for intra-/inter-vendor kernel conversion while preserving image quality.</div></div><div><h3>Materials and methods</h3><div>This study used CT scans from 632 patients who underwent contrast-enhanced chest CT on either a GE or Siemens scanner. Raw data from each CT scan was reconstructed with Standard and Chest kernels of GE or B10f, B30f, B50f, and B70f kernels of Siemens. In intra-vendor, all images reconstructed with one kernel are paired with another kernel, so the U-Net based supervised method was applied. In the case of inter-vendor where the input and target kernels have each different vendor, Siemens' B30f and GE's Standard kernel were trained through unsupervised image-to-image translation using contrastive learning.</div></div><div><h3>Results</h3><div>In the intra-vendor, quantitative evaluation of the image quality of our model showed reasonable performance on the internal test set (structural similarity index measure (SSIM) > 0.96, peak signal-to-noise ratio (PSNR) > 42.55) compared with the SR-block model (SSIM > 0.93, PSNR > 42.92). In the 6-class classification to evaluate the inter-vendor conversion performance, similar accuracy was shown in the converted image (0.977) compared to the original image (0.998).</div></div><div><h3>Conclusions</h3><div>In this study, we developed a network that can translate a given CT image into a target kernel among multi-vendors. Our model showed clinically acceptable quality in quantitative and qualitative evaluations, including image quality metrics.</div></div>","PeriodicalId":73399,"journal":{"name":"Intelligence-based medicine","volume":"10 ","pages":"Article 100169"},"PeriodicalIF":0.0,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142446011","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-01-01. DOI: 10.1016/j.ibmed.2024.100171
Odifentse Mapula-e Lehasa, Uche A.K. Chude-Okonkwo
With over 1 billion affected adults, hypertension is one of the most critical public health challenges worldwide. If left untreated over time, hypertension increases the likelihood of premature disability or death from cardiovascular diseases. Despite the range of medications available for the treatment of hypertension, many individuals do not respond positively to treatment. Additionally, a significant percentage of the population does not take the medication as prescribed, which is sometimes attributed to intolerable side effects. Hence, there is still a need to develop new hypertension drugs that provide patients with favourable treatment outcomes. This paper explores a computational method of drug discovery to generate new lead drug molecules for hypertension by targeting the renin-angiotensin-aldosterone system (RAAS). Specifically, we propose a framework that integrates computational fragment-based methods and an unsupervised machine learning technique to generate new lead Angiotensin-Converting Enzyme Inhibitor (ACEI) and Angiotensin-Receptor Blocker (ARB) molecules. The molecule generation process is initiated using all the approved agents acting on the RAAS that are available in the ChEMBL and DrugBank databases to create a fragment pool. The fragments are used to generate new molecules, which are categorised into ACEI and ARB clusters using unsupervised machine learning techniques. The generated molecules in each category are screened to determine their suitability as oral drug molecules, considering their physicochemical properties. Further screening is performed to determine the molecules' suitability as ACEIs or ARBs, based on the presence of the appropriate functional groups and their similarities with existing drug molecules. The resultant molecules that pass screening are proposed as new lead antihypertensive agents. A synthesizability test is also performed on the final new lead molecules to assess how readily they can be synthesized compared with the original molecules.
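One of the screening steps described above checks whether generated molecules are suitable as oral drugs based on physicochemical properties. The sketch below applies a generic Lipinski-style rule-of-five filter with RDKit; the thresholds and the example SMILES are standard illustrations, not the authors' exact screening criteria.

```python
from rdkit import Chem
from rdkit.Chem import Descriptors, Lipinski


def passes_rule_of_five(smiles: str) -> bool:
    """Generic oral drug-likeness check (Lipinski's rule of five)."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return False
    return (Descriptors.MolWt(mol) <= 500
            and Descriptors.MolLogP(mol) <= 5
            and Lipinski.NumHDonors(mol) <= 5
            and Lipinski.NumHAcceptors(mol) <= 10)


if __name__ == "__main__":
    # Captopril (an approved ACE inhibitor, stereochemistry omitted) as a sanity check.
    captopril = "CC(CS)C(=O)N1CCCC1C(=O)O"
    print(passes_rule_of_five(captopril))   # expected: True
```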
{"title":"Machine Learning-aided Computational Fragment-based Design of Small Molecules for Hypertension Treatment","authors":"Odifentse Mapula-e Lehasa, Uche A.K. Chude-Okonkwo","doi":"10.1016/j.ibmed.2024.100171","DOIUrl":"10.1016/j.ibmed.2024.100171","url":null,"abstract":"<div><div>With over 1 billion affected adults, hypertension is one of the most critical public health challenges worldwide. If left untreated over time, hypertension increases the likelihood of premature disability or death from cardiovascular diseases. Despite the range of medications available for the treatment of hypertension, many individuals do not respond positively to the treatment. Additionally, a significant percentage of the population does not take the medication as prescribed, which is sometimes attributed to intolerable side effects. Hence, there is still the need to develop new hypertension drugs that provide patients with favourable treatment outcomes. This paper explores the computational method of drug discovery to generate new lead drug molecules for hypertension by targeting the renin-angiotensin-aldosterone system (RAAS). Specifically, we proposed a framework that integrates computational fragment-based methods and an unsupervised machine learning technique to generate new lead Angiotensin-Converting Enzyme Inhibitor (ACEI) and Angiotensin-Receptor Blocker (ARB) molecules. The molecule generation process is initiated using all the approved agents acting on the RAAS that are available in the ChEMBL and DrugBank databases to create a fragment pool. The fragments are used to generate new molecules, which are categorised into ACEI and ARB clusters using unsupervised machine learning techniques. The generated molecules in each category are screened to determine their suitability as oral drug molecules, considering their physicochemical properties. Further screening is performed to determine the molecules’ suitability as ACEIs or ARBs, based on the presence of the appropriate functional groups and their similarities with existing drug molecules. The resultant molecules that passed screening are proposed as new lead antihypertensive agents. A synthesizability test is also performed on the final new lead molecules to determine the ease of making them compared to the original molecules.</div></div>","PeriodicalId":73399,"journal":{"name":"Intelligence-based medicine","volume":"10 ","pages":"Article 100171"},"PeriodicalIF":0.0,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142442513","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-01-01. DOI: 10.1016/j.ibmed.2024.100182
R.O. Oveh, M. Adewunmi, A.O. Solomon, K.Y. Christopher, P.N. Ezeobi
In recent times, researchers with a computational background have found it easier to engage with Artificial Intelligence, thanks to advances in transformer models and the availability of unstructured medical data. This paper explores the heterogeneity of KeyBERT, BERTopic, PyCaret and LDA methods as key-phrase generators and topic-model extractors, with p53 in ovarian cancer as a use case. PubMed abstracts on mutant p53 were first extracted with the Entrez global database and then preprocessed with the Natural Language Toolkit (NLTK). KeyBERT was then used for extracting keyphrases, and BERTopic modelling was used for extracting the related themes. PyCaret was further used for unigram topics, and LDAs for examining the interaction among the topics in the word corpus. Lastly, the Jaccard similarity index was used to check the similarity among the four methods. The results showed no relationship for KeyBERT, which had a score of 0.0, while relationships exist among the three other topic models, with scores of 0.095, 0.235, 0.4 and 0.111. Based on these results, it was observed that the keywords, keyphrases, similar topics, and entities embedded in the data follow a closely related framework, which can give insights into medical data before modelling.
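The comparison above relies on the Jaccard similarity index between the outputs of the four methods. The short sketch below computes pairwise Jaccard scores for keyword sets; the example sets are invented placeholders, not the study's extracted terms.

```python
from itertools import combinations


def jaccard(a: set, b: set) -> float:
    """Jaccard similarity: size of the intersection over size of the union."""
    return len(a & b) / len(a | b) if (a | b) else 0.0


# Hypothetical keyword/topic outputs from the four methods (placeholders only).
outputs = {
    "KeyBERT":  {"p53 mutation", "ovarian cancer", "tumor suppressor"},
    "BERTopic": {"ovarian cancer", "chemotherapy", "p53"},
    "PyCaret":  {"p53", "gene expression", "ovarian cancer"},
    "LDA":      {"p53", "mutation", "ovarian cancer"},
}

for (name_a, set_a), (name_b, set_b) in combinations(outputs.items(), 2):
    print(f"{name_a} vs {name_b}: {jaccard(set_a, set_b):.3f}")
```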
{"title":"Heterogenous analysis of KeyBERT, BERTopic, PyCaret and LDAs methods: P53 in ovarian cancer use case","authors":"R.O. Oveh , M. Adewunmi , A.O. Solomon , K.Y. Christopher , P.N. Ezeobi","doi":"10.1016/j.ibmed.2024.100182","DOIUrl":"10.1016/j.ibmed.2024.100182","url":null,"abstract":"<div><div>In recent times, researchers with Computational background have found it easier to relate to Artificial Intelligence with the advancement of the transformer model, and unstructured medical data. This paper explores the heterogeneity of keyBERT, BERTopic, PyCaret and LDAs as key phrase generators and topic model extractors with P53 in ovarian cancer as a use case. PubMed abstract on mutant p53 was first extracted with the Entrez-global database and then preprocessed with Natural Toolkit (NLTK). keyBERT was then used for extracting keyphrases, and BERTopic modelling was used for extracting the related themes. PyCaret was further used for unigram topics and LDAs for examining the interaction among the topics in the word corpus. Lastly, Jaccard similarity index was used to check the similarity among the four methods. The results showed no relationship exists with KeyBERT, having a score of 0.0 while relationship exists among the three other topic models with score of 0.095, 0.235, 0.4 and 0.111. Based on the result, it was observed that keywords, keyphrases, similar topics, and entities embedded in the data use a closely related framework, which can give insights into medical data before modelling.</div></div>","PeriodicalId":73399,"journal":{"name":"Intelligence-based medicine","volume":"10 ","pages":"Article 100182"},"PeriodicalIF":0.0,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142662590","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Gram staining method is one of the most effective morphological identification procedures for detecting bacteria from direct smear microscopy. The staining process is inexpensive and aids in diagnosing bacterial infections quickly, as it is applied to direct clinical specimens such as pus, urine, and sputum. Computer-aided diagnostic systems assist the clinician by avoiding tedious manual evaluation procedures. However, captured images often suffer from contrast, illumination, and stain variations arising from differing camera settings, image acquisition conditions, sample quality, and poor staining procedures. These variations affect the diagnosis process, lowering the image analysis performance of the computer-aided diagnosis system. In this context, the present work proposes a novel color normalization approach based on a Cycle Generative Adversarial Network (CycleGAN). We introduce a novel normalization loss function, L_cycm, which is integrated into our dedicated normalization loss, L_N, within the CycleGAN (CGAN) framework. The proposed method is compared with state-of-the-art normalization algorithms qualitatively and quantitatively using the KMC dataset. In addition, the study demonstrates the impact of normalization on the Convolutional Neural Network (CNN)-based segmentation and classification process. Furthermore, a bacteria detection framework is proposed based on the U2Net segmentation model and a CNN classifier. The proposed normalization achieved an SSIM score of 0.93 ± 0.07 and a PSNR of 29 ± 3.7. The accuracy of the CNN-based classifier improved by 40 % after normalization.
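The normalization network above is trained within a CycleGAN framework, whose backbone objective is the cycle-consistency loss. The snippet below sketches that standard term for two generators G_AB and G_BA in PyTorch; it illustrates the generic CycleGAN ingredient only and does not include the paper's proposed L_cycm term, whose exact form is not given in the abstract.

```python
import torch
import torch.nn as nn


def cycle_consistency_loss(g_ab: nn.Module, g_ba: nn.Module,
                           real_a: torch.Tensor, real_b: torch.Tensor) -> torch.Tensor:
    """Standard CycleGAN cycle-consistency term: A -> B -> A and B -> A -> B
    should reproduce the inputs (L1 distance)."""
    rec_a = g_ba(g_ab(real_a))   # source-style stain image mapped to target style and back
    rec_b = g_ab(g_ba(real_b))
    return (rec_a - real_a).abs().mean() + (rec_b - real_b).abs().mean()


if __name__ == "__main__":
    # Tiny convolutional stand-ins for the two generators (placeholders only).
    g_ab = nn.Conv2d(3, 3, kernel_size=3, padding=1)
    g_ba = nn.Conv2d(3, 3, kernel_size=3, padding=1)
    a = torch.rand(1, 3, 64, 64)   # Gram-stain image in the source color space
    b = torch.rand(1, 3, 64, 64)   # image in the target (reference) color space
    print(float(cycle_consistency_loss(g_ab, g_ba, a, b)))
```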
{"title":"Cycle Generative Adversarial Aetwork approach for normalization of Gram-stain images for bacteria detection","authors":"V. Shwetha , Keerthana Prasad , Chiranjay Mukhopadhyay , Barnini Banerjee","doi":"10.1016/j.ibmed.2024.100138","DOIUrl":"https://doi.org/10.1016/j.ibmed.2024.100138","url":null,"abstract":"<div><p>The Gram staining method is one of the most effective morphological identification procedures for detecting bacteria from direct smear microscopy. This staining process is inexpensive. It aids in diagnosing bacterial infections quickly as it is used for direct clinical sample specimens such as pus, urine, and sputum. The computer-aided diagnostic system aids the clinician by avoiding tedious manual evaluation procedures. However, images captured often suffer from contrast, illumination, and stain variations due to various camera settings and situations. These differences are due to image acquisition conditions, sample quality, and poor staining procedures. These variations affect the diagnosis process, lowering the image analysis performance of the computer-aided diagnosis system. In this context, the present work proposes a novel color normalization approach based on a Cycle Generative Adversarial Network(GAN). We introduce a novel normalization loss function, <em>L</em><sub><em>cycm</em></sub>, which is integrated into our dedicated normalization loss, <em>L</em><sub><em>N</em></sub>, within the framework of Cycle GAN(CGAN). The proposed method is compared with the state-of-the-art normalization algorithms qualitatively and quantitatively using the KMC dataset. In addition, the study demonstrates the impact of normalization on the Convolutional Neural Network (CNN) -based segmentation and classification process. Furthermore, a bacteria detection framework is proposed based on the U2Net segmentation model and a CNN classifier. The proposed normalization achieved an SSIM score of <strong>0.93 ± 0.07</strong> and PSNR of <strong>29 ± 3.7</strong>. The accuracy of the CNN-based classifier improved by 40 % after normalization.</p></div>","PeriodicalId":73399,"journal":{"name":"Intelligence-based medicine","volume":"9 ","pages":"Article 100138"},"PeriodicalIF":0.0,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S266652122400005X/pdfft?md5=0d3ebedcc6a7f6f11414a2556ff844f2&pid=1-s2.0-S266652122400005X-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141291378","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}