Pub Date: 2023-12-06 DOI: 10.1016/j.ibmed.2023.100125
Machine learning classification of vitamin D levels in spondyloarthritis patients
Luis Ángel Calvo Pascual, David Castro Corredor, Eduardo César Garrido Merchán
Objectives
Predict the 25-dihydroxy-20-epi-vitamin D3 level (low, medium, or high) in spondyloarthritis patients.
Methods
Observational, descriptive, and cross-sectional study. We collected information from 115 patients. From a total of 32 variables, we selected the most relevant using mutual information tests and then estimated two classification models using machine learning.
Results
We obtain an interpretable decision tree and an ensemble that maximize expected accuracy, tuned with Bayesian optimization and 10-fold cross-validation on a preprocessed dataset.
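As a rough illustration (not the authors' code), the pipeline described above — mutual-information feature selection feeding a decision tree whose hyperparameters are tuned by Bayesian optimization under 10-fold cross-validation — could be sketched with scikit-learn and scikit-optimize. The synthetic arrays and search ranges below are assumptions.

```python
# Sketch only: mutual-information feature selection + Bayesian-optimized
# decision tree with 10-fold cross-validation. Data and ranges are synthetic.
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.pipeline import Pipeline
from sklearn.tree import DecisionTreeClassifier
from skopt import BayesSearchCV  # scikit-optimize

rng = np.random.default_rng(0)
X = rng.normal(size=(115, 32))        # 115 patients x 32 variables (synthetic)
y = rng.integers(0, 3, size=115)      # low / medium / high vitamin D class

pipe = Pipeline([
    ("select", SelectKBest(mutual_info_classif)),      # keep top-k variables
    ("tree", DecisionTreeClassifier(random_state=0)),  # interpretable model
])
search = BayesSearchCV(
    pipe,
    {"select__k": (4, 32),             # how many variables to retain
     "tree__max_depth": (2, 10),
     "tree__min_samples_leaf": (1, 20)},
    n_iter=25, cv=10, scoring="accuracy", random_state=0,
)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```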
Conclusion
We identify relevant variables not considered in previous research, such as age and post-treatment status. We also estimate more flexible, higher-capacity models using advanced data science techniques.
{"title":"Machine learning classification of vitamin D levels in spondyloarthritis patients","authors":"Luis Ángel Calvo Pascual , David Castro Corredor , Eduardo César Garrido Merchán","doi":"10.1016/j.ibmed.2023.100125","DOIUrl":"https://doi.org/10.1016/j.ibmed.2023.100125","url":null,"abstract":"<div><h3>Objectives</h3><p>Predict the 25 dihydroxy 20 epi vitamin d3 level (low, medium, or high) in spondyloarthritis patients.</p></div><div><h3>Methods</h3><p>Observational, descriptive, and cross-sectional study. We collected information from 115 patients. From a total of 32 variables, we selected the most relevant using mutual information tests, and, finally, we estimated two classification models using machine learning.</p></div><div><h3>Result</h3><p>We obtain an interpretable decision tree and an ensemble maximizing the expected accuracy using Bayesian optimization and 10-fold cross-validation over a preprocessed dataset.</p></div><div><h3>Conclusion</h3><p>We identify relevant variables not considered in previous research, such as age and post-treatment. We also estimate more flexible and high-capacity models using advanced data science techniques.</p></div>","PeriodicalId":73399,"journal":{"name":"Intelligence-based medicine","volume":"9 ","pages":"Article 100125"},"PeriodicalIF":0.0,"publicationDate":"2023-12-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S266652122300039X/pdfft?md5=5a755d50c23cbe6f7d801f6f56e92a1e&pid=1-s2.0-S266652122300039X-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138558968","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-12-02 DOI: 10.1016/j.ibmed.2023.100126
Feed-forward networks using logistic regression and support vector machine for whole-slide breast cancer histopathology image classification
ArunaDevi Karuppasamy, Abdelhamid Abdesselam, Rachid Hedjam, Hamza Zidoum, Maiya Al-Bahri
The performance of image classification depends on the efficiency of the feature learning process. This process is challenging and traditionally requires prior knowledge from domain experts. Recently, representation learning was introduced to extract features directly from raw images without any prior knowledge. Deep learning using a Convolutional Neural Network (CNN) has gained massive attention for image classification, as it achieves remarkable accuracy that sometimes exceeds human performance. However, this type of network learns features using a back-propagation approach, which requires a huge amount of training data and suffers from the vanishing gradient problem that deteriorates feature learning. The forward-propagation approach instead uses predefined filters, or filters learned outside the model, applied in a feed-forward manner, and has been shown to achieve good results with small labeled datasets. In this work, we investigate the suitability of two feed-forward methods: the Convolutional Logistic Regression Network (CLR) and the Convolutional Support Vector Machine Network for Histopathology Images (CSVM-H). Experiments conducted on two small breast cancer datasets (the Sultan Qaboos University Hospital (SQUH) dataset and the BreaKHis dataset) demonstrate the advantage of feed-forward approaches over traditional back-propagation ones. The proposed models, CLR and CSVM-H, were faster to train and achieved better classification performance than traditional back-propagation methods (VggNet-16 and ResNet-50) on the SQUH dataset. Importantly, CLR and CSVM-H efficiently learn representations from small amounts of breast cancer whole-slide images, achieving AUCs of 0.83 and 0.84, respectively, on the SQUH dataset. Moreover, the proposed models reduce the memory footprint of whole-slide histopathology image classification, and their training time is significantly reduced compared to traditional CNNs on the SQUH and BreaKHis datasets.
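The feed-forward idea — convolutional features from filters fixed or learned outside the network, with only a linear classifier trained on top — can be illustrated with a minimal sketch. The random 5×5 filters, synthetic patches, and labels below are placeholders, not the paper's CLR design.

```python
# Illustrative feed-forward classifier: fixed (untrained) convolution filters
# produce features; only a logistic regression is trained (no back-propagation).
import numpy as np
from scipy.signal import convolve2d
from sklearn.linear_model import LogisticRegression

def feedforward_features(images, filters):
    """Convolve each image with each fixed filter, then global max-pool."""
    return np.array([
        [convolve2d(img, f, mode="valid").max() for f in filters]
        for img in images
    ])

rng = np.random.default_rng(0)
filters = [rng.normal(size=(5, 5)) for _ in range(16)]  # fixed, untrained
images = rng.normal(size=(40, 64, 64))                  # synthetic patches
labels = rng.integers(0, 2, size=40)                    # benign / malignant

clf = LogisticRegression(max_iter=1000)
clf.fit(feedforward_features(images, filters), labels)
```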
{"title":"Feed-forward networks using logistic regression and support vector machine for whole-slide breast cancer histopathology image classification","authors":"ArunaDevi Karuppasamy , Abdelhamid Abdesselam , Rachid Hedjam , Hamza zidoum , Maiya Al-Bahri","doi":"10.1016/j.ibmed.2023.100126","DOIUrl":"https://doi.org/10.1016/j.ibmed.2023.100126","url":null,"abstract":"<div><p>The performance of an image classification depends on the efficiency of the feature learning process. This process is a challenging task that traditionally requires prior knowledge from domain experts. Recently, representation learning was introduced to extract features directly from the raw images without any prior knowledge. Deep learning using a Convolutional Neural Network (CNN) has gained massive attention for performing image classification, as it achieves remarkable accuracy that sometimes exceeds human performance. But this type of network learns features by using a back-propagation approach. This approach requires a huge amount of training data and suffers from the vanishing gradient problem that deteriorates the feature learning. The forward-propagation approach uses predefined filters or filters learned outside the model and applied in a feed-forward manner. This approach is proven to achieve good results with small size labeled datasets. In this work, we investigate the suitability of using two feed-forward methods such as Convolutional Logistic Regression Network (CLR), and Convolutional Support Vector Machine Network for Histopathology Images (CSVM-H). The experiments we have conducted on two small breast cancer datasets (Sultan Qaboos University Hospital (SQUH) and BreaKHis dataset) demonstrate the advantage of using feed-forward approaches over the traditional back-propagation ones. On those datasets, the proposed models CLR and CSVM-H were faster to train and achieved better classification performance than the traditional back-propagation methods (VggNet-16 and ResNet-50) on the SQUH dataset. Importantly, our proposed approach CLR and CSVM-H efficiently learn representations from small amounts of breast cancer whole-slide images and achieve an AUC of 0.83 and 0.84, respectively, on the SQUH dataset. Moreover, the proposed models reduce memory footprint in the classification of Whole-Slide histopathology images since their training time is significantly reduced compared to the traditional CNN on the SQUH and BreaKHis datasets.</p></div>","PeriodicalId":73399,"journal":{"name":"Intelligence-based medicine","volume":"9 ","pages":"Article 100126"},"PeriodicalIF":0.0,"publicationDate":"2023-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2666521223000406/pdfft?md5=460230f9ae89e01af52e8dfee4ad8f06&pid=1-s2.0-S2666521223000406-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138490194","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-11-30 DOI: 10.1016/j.ibmed.2023.100130
Fully automated evaluation of paraspinal muscle morphology and composition in patients with low back pain
Paolo Giaccone, Federico D'Antoni, Fabrizio Russo, Manuel Volpecina, Carlo Augusto Mallio, Giuseppe Francesco Papalia, Gianluca Vadalà, Vincenzo Denaro, Luca Vollero, Mario Merone
Chronic Low Back Pain (LBP) is one of the most prevalent musculoskeletal conditions and is the leading cause of disability worldwide. The morphology and composition of the lumbar paraspinal muscles, in terms of infiltrated adipose tissue, are important guides for diagnosis and treatment choice but still require manual procedures to assess. We developed a fully automated artificial intelligence-based algorithm that both segments the paraspinal muscles from MRI scans with a U-Net architecture and estimates the amount of fatty infiltration with an in-house intensity- and region-based processing step. We validated our results by statistically assessing the accuracy and agreement between our automated measures and the clinically reported values, achieving Dice scores greater than 95 % on the segmentation task, as well as an excellent degree of agreement on the subsequent area estimates (ICC(2,1) = 0.89). Furthermore, we employed an external public dataset to validate the model's generalization ability, reaching Dice scores greater than 94 % with an average processing time of 21.92 s (±3.38 s) per subject. Hence, we propose a deterministic and reliable measuring tool, free of manual confounding effects, to efficiently support daily clinical practice in LBP management.
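A minimal sketch of the two headline quantities follows: the Dice score used to evaluate segmentation masks, and a naive intensity-threshold fat-fraction estimate. The synthetic data and threshold are illustrative assumptions, not the authors' in-house intensity- and region-based processing.

```python
# Dice similarity for mask comparison and a toy fat-fraction estimate.
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity of two boolean masks: 2|A∩B| / (|A| + |B|)."""
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum())

def fat_fraction(mri: np.ndarray, mask: np.ndarray, thr: float) -> float:
    """Share of in-mask voxels bright enough to suggest adipose tissue."""
    return float((mri[mask] > thr).mean())

rng = np.random.default_rng(0)
mri = rng.random((128, 128))            # synthetic image
truth = mri > 0.4                       # synthetic reference mask
pred = truth.copy(); pred[:4] ^= True   # slightly imperfect prediction
print(f"Dice = {dice(pred, truth):.3f}, fat = {fat_fraction(mri, truth, 0.8):.3f}")
```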
{"title":"Fully automated evaluation of paraspinal muscle morphology and composition in patients with low back pain","authors":"Paolo Giaccone , Federico D'Antoni , Fabrizio Russo , Manuel Volpecina , Carlo Augusto Mallio , Giuseppe Francesco Papalia , Gianluca Vadalà , Vincenzo Denaro , Luca Vollero , Mario Merone","doi":"10.1016/j.ibmed.2023.100130","DOIUrl":"https://doi.org/10.1016/j.ibmed.2023.100130","url":null,"abstract":"<div><p>Chronic Low Back Pain (LBP) is one of the most prevalent musculoskeletal conditions and is the leading cause of disability worldwide. The morphology and composition of lumbar paraspinal muscles, in terms of infiltrated adipose tissue, constitute important guidelines for diagnosis and treatment choice but still require manual procedures to be assessed. We developed a fully automated artificial intelligence based algorithm both to segment paraspinal muscles from MRI scans through a U-Net architecture and to estimate the amount of fatty infiltrations by a home-made intensity- and region-based processing; we further validated our results by statistical assessment of the accuracy and agreement between our automated measures and the clinically reported values, achieving dice scores greater than 95 % on the preliminary segmentation task, as well as an excellent degree of agreement on the following area estimates (ICC<sub>2,1</sub> = 0.89). Furthermore, we employed an external public dataset to validate our model generalization abilities, reaching dice scores greater than 94 % with an average processing time of 21.92<em>s</em>(±3.38<em>s</em>) per subject. Hence, a deterministic and reliable measuring tool is proposed, without any manual confounding effect, to efficiently support daily clinical practice in LBP management.</p></div>","PeriodicalId":73399,"journal":{"name":"Intelligence-based medicine","volume":"9 ","pages":"Article 100130"},"PeriodicalIF":0.0,"publicationDate":"2023-11-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2666521223000443/pdfft?md5=02297588e6a46fe364e4e125ef7bf9b7&pid=1-s2.0-S2666521223000443-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138490193","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-11-26 DOI: 10.1016/j.ibmed.2023.100129
Malaysian cough sound analysis and COVID-19 classification with deep learning
Sarah Jane Kho, Brian Loh Chung Shiong, Vong Wan-Tze, Law Kian Boon, Mohan Dass Pathmanathan, Mohd Aizuddin Bin Abdul Rahman, Kuan Pei Xuan, Wan Nabila Binti Wan Hanafi, Kalaiarasu M. Peariasamy, Patrick Then Hang Hui
The use of cough sounds as a diagnostic tool for various respiratory illnesses, including COVID-19, has gained significant attention in recent years. Artificial intelligence (AI) has been applied to cough sound analysis to provide a quick and convenient pre-screening tool for COVID-19 detection. However, few works have employed segmentation to standardize cough sounds, and most models are trained on datasets from a single source. In this paper, a deep learning framework is proposed that uses the Mini VGGNet model and segmentation methods for COVID-19 detection from cough sounds. In addition, data augmentation was studied to investigate its effect on model performance when applied to individual cough sounds. The framework includes both single- and cross-dataset model training and testing, using data from the University of Cambridge, the Coswara project, and the National Institute of Health (NIH) Malaysia. Results demonstrate that using segmented cough sounds significantly improves the performance of the trained models, whereas applying data augmentation to individual cough sounds does not improve performance. The proposed framework achieved a best test accuracy of 0.921, with 0.973 AUC, 0.910 precision, and 0.910 recall, for a model trained on a combination of the three datasets using non-augmented data. These findings highlight the importance of segmentation and of diverse datasets for AI-based COVID-19 detection from cough sounds. Furthermore, the proposed framework provides a foundation for extending deep learning to the detection of other pulmonary diseases and to studying the signal properties of cough sounds from various respiratory illnesses.
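As an illustration of the segmentation step, the sketch below isolates individual cough bursts with librosa's silence-based splitter and builds one mel-spectrogram per burst as CNN input. The synthetic waveform and the top_db value are assumptions, not the paper's exact pre-processing.

```python
# Sketch: split a recording into non-silent bursts, one spectrogram per burst.
import numpy as np
import librosa

sr = 16000
rng = np.random.default_rng(0)
burst = rng.normal(scale=0.5, size=sr // 2).astype(np.float32)
silence = np.zeros(sr, dtype=np.float32)
y = np.concatenate([silence, burst, silence, burst, silence])  # fake recording

intervals = librosa.effects.split(y, top_db=30)   # non-silent regions = coughs
inputs = [
    librosa.power_to_db(librosa.feature.melspectrogram(y=y[s:e], sr=sr, n_mels=64))
    for s, e in intervals
]
print(f"{len(intervals)} segments; first CNN input shape: {inputs[0].shape}")
```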
{"title":"Malaysian cough sound analysis and COVID-19 classification with deep learning","authors":"Sarah Jane Kho , Brian Loh Chung Shiong , Vong Wan-Tze , Law Kian Boon , Mohan Dass Pathmanathan , Mohd Aizuddin Bin Abdul Rahman , Kuan Pei Xuan , Wan Nabila Binti Wan Hanafi , Kalaiarasu M. Peariasamy , Patrick Then Hang Hui","doi":"10.1016/j.ibmed.2023.100129","DOIUrl":"https://doi.org/10.1016/j.ibmed.2023.100129","url":null,"abstract":"<div><p>The use of cough sounds as a diagnostic tool for various respiratory illnesses, including COVID-19, has gained significant attention in recent years. Artificial intelligence (AI) has been employed in cough sound analysis to provide a quick and convenient pre-screening tool for COVID-19 detection. However, few works have employed segmentation to standardize cough sounds, and most models are trained datasets from a single source. In this paper, a deep learning framework is proposed that uses the Mini VGGNet model and segmentation methods for COVID-19 detection using cough sounds. In addition, data augmentation was studied to investigate the effects on model performance when applied to individual cough sounds. The framework includes both single and cross-dataset model training and testing, using data from the University of Cambridge, Coswara project, and National Institute of Health (NIH) Malaysia. Results demonstrate that the use of segmented cough sounds significantly improves the performance of trained models. In addition, findings suggest that using data augmentation on individual cough sounds does not show any improvement towards the performance of the model. The proposed framework achieved an optimum test accuracy of 0.921, 0.973 AUC, 0.910 precision, and 0.910 recall, for a model trained on a combination of the three datasets using non-augmented data. The findings of this study highlight the importance of segmentation and the use of diverse datasets for AI-based COVID-19 detection through cough sounds. Furthermore, the proposed framework provides a foundation for extending the use of deep learning in detecting other pulmonary diseases and studying the signal properties of cough sounds from various respiratory illnesses.</p></div>","PeriodicalId":73399,"journal":{"name":"Intelligence-based medicine","volume":"9 ","pages":"Article 100129"},"PeriodicalIF":0.0,"publicationDate":"2023-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2666521223000431/pdfft?md5=fdbaa0160ecfeb64ea5b8dc61c3f6978&pid=1-s2.0-S2666521223000431-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138483907","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-01-01 DOI: 10.1016/j.ibmed.2022.100083
Equivalence of pathologists' and rule-based parser's annotations of Dutch pathology reports
Gerard TN. Burger, Ameen Abu-Hanna, Nicolette F. de Keizer, Huibert Burger, Ronald Cornet
Introduction
In the Netherlands, pathology reports are annotated using a nationwide pathology network (PALGA) thesaurus. Annotations must address topography, procedure, and diagnosis.
The Pathology Report Annotation Module (PRAM) can be used to annotate the report conclusion with PALGA-compliant code series. The equivalence of these generated annotations to manual annotations is unknown. We assess the equivalence of annotations by authoring pathologists, pathologists participating in this study, and PRAM.
Methods
New annotations were created for one thousand histopathology reports by PRAM and a pathologist panel. We calculated the dissimilarity of annotations using a semantic distance measure, Minimal Transition Cost (MTC). In the absence of a gold standard, we compared dissimilarity scores that share one common annotator. The resulting comparisons yielded a measure of the coding dissimilarity between PRAM, the pathologist panel, and the authoring pathologist. To compare the comprehensiveness of the coding methods, we assessed the number and length of the annotations.
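A toy version of this common-annotator comparison can illustrate the design. The set-overlap distance and the PALGA-style codes below are hypothetical stand-ins; the Minimal Transition Cost measure itself is not reproduced here.

```python
# Illustrative only: compare pairs of dissimilarity scores that share one
# common annotator. The distance function is a placeholder, NOT MTC, and the
# code series are invented examples.
from itertools import combinations

def dissimilarity(codes_a: list[str], codes_b: list[str]) -> float:
    """Placeholder set-based distance between two code series (not MTC)."""
    a, b = set(codes_a), set(codes_b)
    return 1.0 - len(a & b) / len(a | b) if (a | b) else 0.0

annotations = {                      # hypothetical code series for one report
    "PRAM": ["T67000", "P11400", "M80703"],
    "author": ["T67000", "P11400", "M80103"],
    "panel": ["T67000", "P11000", "M80703"],
}
for x, y in combinations(annotations, 2):
    print(f"d({x}, {y}) = {dissimilarity(annotations[x], annotations[y]):.2f}")
```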
Results
Eight of the twelve comparisons of dissimilarity scores were significantly equivalent. The non-equivalent score pairs involved dissimilarity between the code series of the original pathologist and those of the panel pathologists.
Coding dissimilarity was lowest for procedures and highest for diagnoses: overall MTC = 0.30; topographies = 0.22; procedures = 0.13; diagnoses = 0.33.
Both the number and the length of annotations per report increased with conclusion length, most markedly in PRAM-annotated conclusions: conclusion length ranged from 2 to 373 words; the number of annotations per report ranged from 1 to 10 for pathologists and from 1 to 19 for PRAM; and annotation length ranged from 3 to 43 codes for pathologists and from 4 to 123 for PRAM.
Conclusions
We measured annotation similarity among PRAM, the authoring pathologists, and the panel pathologists. Annotations by PRAM and the panel pathologists, and to a lesser extent by the authoring pathologist, were equivalent. Therefore, the use of PRAM annotations in a practical setting is justified. PRAM annotations are equivalent to study-setting annotations and more comprehensive than routine coding. Further research on annotation quality is needed.
{"title":"Equivalence of pathologists' and rule-based parser's annotations of Dutch pathology reports","authors":"Gerard TN. Burger , Ameen Abu-Hanna , Nicolette F. de Keizer , Huibert Burger , Ronald Cornet","doi":"10.1016/j.ibmed.2022.100083","DOIUrl":"https://doi.org/10.1016/j.ibmed.2022.100083","url":null,"abstract":"<div><h3>Introduction</h3><p>In the Netherlands, pathology reports are annotated using a nationwide pathology network (PALGA) thesaurus. Annotations must address topography, procedure, and diagnosis.</p><p>The Pathology Report Annotation Module (PRAM) can be used to annotate the report conclusion with PALGA-compliant code series. The equivalence of these generated annotations to manual annotations is unknown. We assess the equivalence of annotations by authoring pathologists, pathologists participating in this study, and PRAM.</p></div><div><h3>Methods</h3><p>New annotations were created for one thousand histopathology reports by the PRAM and a pathologist panel. We calculated dissimilarity of annotations using a semantic distance measure, Minimal Transition Cost (MTC). In absence of a gold standard, we compared dissimilarity scores having one common annotator. The resulting comparisons yielded a measure for the coding dissimilarity between PRAM, the pathologist panel and the authoring pathologist. To compare the comprehensiveness of the coding methods, we assessed number and length of the annotations.</p></div><div><h3>Results</h3><p>Eight of the twelve comparisons of dissimilarity scores were significantly equivalent. Non-equivalent score pairs involved dissimilarity between the code series by the original pathologist and the panel pathologists.</p><p>Coding dissimilarity was lowest for procedures, highest for diagnoses: MTC overall = 0.30, topographies = 0.22, procedures = 0.13, diagnoses = 0.33.</p><p>Both number and length of annotations per report increased with report conclusion length, mostly in PRAM-annotated conclusions: conclusion length ranging from 2 to 373 words, number of annotations ranged from 1 to 10 for pathologists, 1–19 for PRAM, annotation length ranged from 3 to 43 codes for pathologists, 4–123 for PRAM.</p></div><div><h3>Conclusions</h3><p>We measured annotation similarity among PRAM, authoring pathologists and panel pathologists. Annotating by PRAM, the panel pathologists and to a lesser extent by the authoring pathologist was equivalent. Therefore, the use of annotations by PRAM in a practical setting is justified. PRAM annotations are equivalent to study-setting annotations, and more comprehensive than routine coding. Further research on annotation quality is needed.</p></div>","PeriodicalId":73399,"journal":{"name":"Intelligence-based medicine","volume":"7 ","pages":"Article 100083"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49857635","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-01-01 DOI: 10.1016/j.ibmed.2023.100106
A new convolutional neural network-construct for sepsis enhances pattern identification of microcirculatory dysfunction
Carolina Toledo Ferraz, Ana Maria Alvim Liberatore, Tatiane Lissa Yamada, Ivan Hong Jun Koh
Background
Triggers of organ dysfunction have been associated with the worsening of microcirculatory dysfunction in sepsis, and because microcirculatory changes occur before macro-hemodynamic abnormalities, they can potentially reveal disease progression early on. The difficulty of distinguishing the altered microcirculatory characteristics that correspond to different stages of sepsis severity has limited the use of microcirculatory imaging as a diagnostic and prognostic tool in sepsis. The aim of this study was to develop a convolutional neural network (CNN) based on images of progressive sublingual microcirculatory dysfunction in sepsis and to test its diagnostic accuracy for these progressive stages.
Methods
Sepsis was induced in Wistar rats (2 mL of E. coli at 10⁸ CFU/mL inoculated into the jugular vein); a 2 mL saline injection in sham animals served as the control. Images of the sublingual microvessels and surrounding tissue were captured in all animals by Sidestream dark field (SDF) imaging at T0 (baseline) and at T2, T4, and T6 h after sepsis induction. From a total of 137 videos, 37,930 frames were extracted; one part (29,341) was used for training ResNet-50 (the CNN-construct), and the remainder (8,589) was used for validation of accuracy.
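A minimal sketch of the frame classifier described above — a torchvision ResNet-50 with its final layer replaced to score the four time points (T0, T2, T4, T6) — follows. The random batch stands in for the extracted SDF frames, and the training loop is omitted.

```python
# Sketch: ResNet-50 adapted for 4-class frame classification.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet50(weights=None)             # pretrained weights optional
model.fc = nn.Linear(model.fc.in_features, 4)     # four severity stages

frames = torch.randn(8, 3, 224, 224)              # synthetic frame batch
logits = model(frames)                            # per-frame stage scores
loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 4, (8,)))
print(logits.shape, float(loss))
```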
Results
The CNN-construct successfully classified the various stages of sepsis with high accuracy (97.07%). The average AUC of the ROC curve was 0.9833, and sensitivity and specificity ranged from 94.57% to 99.91% across all time points.
Conclusions
In blind testing with new sublingual microscopy images captured at different periods of the acute phase of sepsis, the CNN-construct accurately diagnosed the four stages of sepsis severity. This new method therefore has diagnostic potential for the different stages of microcirculatory dysfunction and may enable prediction of clinical evolution and therapeutic efficacy. Automated simultaneous assessment of multiple characteristics of both microvessels and adjacent tissues may account for this diagnostic skill. As such a task cannot be performed with human visual criteria alone, CNNs are a novel means of identifying the different stages of sepsis by assessing the distinct features of each stage.
{"title":"A new convolutional neural network-construct for sepsis enhances pattern identification of microcirculatory dysfunction","authors":"Carolina Toledo Ferraz, Ana Maria Alvim Liberatore, Tatiane Lissa Yamada, Ivan Hong Jun Koh","doi":"10.1016/j.ibmed.2023.100106","DOIUrl":"https://doi.org/10.1016/j.ibmed.2023.100106","url":null,"abstract":"<div><h3>Background</h3><p>Triggers of organ dysfunction have been associated with the worsening of microcirculatory dysfunction in sepsis, and because microcirculatory changes occur before macro-hemodynamic abnormalities, they can potentially detect disease progression early on. The difficulty in distinguishing altered microcirculatory characteristics corresponding to varying stages of sepsis severity has been a limiting factor for the use of microcirculatory imaging as a diagnostic and prognostic tool in sepsis. The aim of this study was to develop a convolutional neural network (CNN) based on progressive sublingual microcirculatory dysfunction images in sepsis, and test its diagnostic accuracy for these progressive stages.</p></div><div><h3>Methods</h3><p>Sepsis was induced in Wistar rats (2 mL of <em>E. coli</em> 10<sup>8</sup> CFU/mL inoculation into the jugular vein), and 2 mL saline injection in sham animals was the control. Sublingual microvessels of all animals with surrounding tissue images were captured by Sidestream dark field imaging (SDF) at T0 (basal) and T2, T4, and T6 h after sepsis induction. From a total of 137 videos, 37.930 frames were extracted; a part (29.341) was used for the training of Resnet-50 (CNN-construct), and the remaining (8.589) was used for validation of accuracy.</p></div><div><h3>Results</h3><p>The CNN-construct successfully classified the various stages of sepsis with a high accuracy (97.07%). The average AUC value of the ROC curve was 0.9833, and the sensitivity and specificity ranged from 94.57% to 99.91%, respectively, at all time points.</p></div><div><h3>Conclusions</h3><p>By blind testing with new sublingual microscopy images captured at different periods of the acute phase of sepsis, the CNN-construct was able to accurately diagnose the four stages of sepsis severity. Thus, this new method presents the diagnostic potential for different stages of microcirculatory dysfunction and enables the prediction of clinical evolution and therapeutic efficacy. Automated simultaneous assessment of multiple characteristics, both microvessels and adjacent tissues, may account for this diagnostic skill. As such a task cannot be analyzed with human visual criteria only, CNN is a novel method to identify the different stages of sepsis by assessing the distinct features of each stage.</p></div>","PeriodicalId":73399,"journal":{"name":"Intelligence-based medicine","volume":"8 ","pages":"Article 100106"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49869158","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-01-01 DOI: 10.1016/j.ibmed.2023.100122
Predicting Hospital Readmission Risk in Patients with Severe Bronchopulmonary Dysplasia: Exploring the Impact of Neighborhood-Level Social Determinants of Health
Tyler Gorham, Audrey Anand, Jay Anand, Steve Rust, George El-Ferzli
{"title":"Predicting Hospital Readmission Risk in Patients with Severe Bronchopulmonary Dysplasia: Exploring the Impact of Neighborhood-Level Social Determinants of Health","authors":"Tyler Gorham , Audrey Anand , Jay Anand , Steve Rust , George El-Ferzli","doi":"10.1016/j.ibmed.2023.100122","DOIUrl":"https://doi.org/10.1016/j.ibmed.2023.100122","url":null,"abstract":"","PeriodicalId":73399,"journal":{"name":"Intelligence-based medicine","volume":"8 ","pages":"Article 100122"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2666521223000364/pdfft?md5=3d3b010d91d948080e99be280dfec786&pid=1-s2.0-S2666521223000364-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138558705","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-01-01 DOI: 10.1016/j.ibmed.2023.100087
Discriminating Acute Respiratory Distress Syndrome from other forms of respiratory failure via iterative machine learning
Babak Afshin-Pour, Michael Qiu, Shahrzad Hosseini Vajargah, Helen Cheyne, Kevin Ha, Molly Stewart, Jan Horsky, Rachel Aviv, Nasen Zhang, Mangala Narasimhan, John Chelico, Gabriel Musso, Negin Hajizadeh
Acute Respiratory Distress Syndrome (ARDS) is associated with high morbidity and mortality. Identification of ARDS enables lung-protective strategies, quality improvement interventions, and clinical trial enrolment, but remains challenging, particularly in the first 24 hours of mechanical ventilation. To address this, we built an algorithm capable of discriminating ARDS from other similarly presenting disorders immediately following mechanical ventilation. Specifically, a clinical team examined medical records from 1263 ICU-admitted, mechanically ventilated patients, retrospectively assigning each patient a diagnosis of "ARDS" or "non-ARDS" (e.g., pulmonary edema). Exploiting data readily available in the clinical setting, including patient demographics, laboratory test results from before the initiation of mechanical ventilation, and features extracted by natural language processing of radiology reports, we applied an iterative pre-processing and machine learning framework. The resulting model successfully discriminated ARDS from non-ARDS causes of respiratory failure (AUC = 0.85) among patients meeting the Berlin criteria for severe hypoxia. This analysis also highlighted novel patient variables that were informative for identifying ARDS in ICU settings.
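The kind of mixed-input pipeline the abstract describes — structured clinical variables combined with text features from radiology reports — might be sketched as below. The column names, toy rows, and logistic regression are illustrative assumptions, not the authors' iterative framework.

```python
# Sketch: structured variables + TF-IDF of report text in one scikit-learn pipeline.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

df = pd.DataFrame({
    "age": [55, 67, 44, 71],                       # synthetic demographics
    "pao2_fio2": [90, 210, 120, 95],               # synthetic pre-vent labs
    "report": ["bilateral opacities", "clear lungs",
               "diffuse infiltrates", "pleural effusion"],
    "ards": [1, 0, 1, 0],
})

features = ColumnTransformer([
    ("structured", StandardScaler(), ["age", "pao2_fio2"]),
    ("text", TfidfVectorizer(), "report"),         # NLP of report text
])
model = Pipeline([("features", features), ("clf", LogisticRegression())])
model.fit(df.drop(columns="ards"), df["ards"])
```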
{"title":"Discriminating Acute Respiratory Distress Syndrome from other forms of respiratory failure via iterative machine learning","authors":"Babak Afshin-Pour , Michael Qiu , Shahrzad Hosseini Vajargah , Helen Cheyne , Kevin Ha , Molly Stewart , Jan Horsky , Rachel Aviv , Nasen Zhang , Mangala Narasimhan , John Chelico , Gabriel Musso , Negin Hajizadeh","doi":"10.1016/j.ibmed.2023.100087","DOIUrl":"10.1016/j.ibmed.2023.100087","url":null,"abstract":"<div><p>Acute Respiratory Distress Syndrome (ARDS) is associated with high morbidity and mortality. Identification of ARDS enables lung protective strategies, quality improvement interventions, and clinical trial enrolment, but remains challenging particularly in the first 24 hours of mechanical ventilation. To address this we built an algorithm capable of discriminating ARDS from other similarly presenting disorders immediately following mechanical ventilation. Specifically, a clinical team examined medical records from 1263 ICU-admitted, mechanically ventilated patients, retrospectively assigning each patient a diagnosis of “ARDS” or “non-ARDS” (e.g., pulmonary edema). Exploiting data readily available in the clinical setting, including patient demographics, laboratory test results from before the initiation of mechanical ventilation, and features extracted by natural language processing of radiology reports, we applied an iterative pre-processing and machine learning framework. The resulting model successfully discriminated ARDS from non-ARDS causes of respiratory failure (AUC = 0.85) among patients meeting Berlin criteria for severe hypoxia. This analysis also highlighted novel patient variables that were informative for identifying ARDS in ICU settings.</p></div>","PeriodicalId":73399,"journal":{"name":"Intelligence-based medicine","volume":"7 ","pages":"Article 100087"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9812471/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10665721","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-01-01 DOI: 10.1016/j.ibmed.2022.100081
Machine learning algorithms for classifying corneas by Zernike descriptors
María S. del Río, Juan P. Trevino
Keratoconus is the most common primary ectasia; because its treatment is not easy, early diagnosis is essential. The main goal of this study is to develop a method for classifying specific types of corneal shapes using 55 Zernike coefficients (angular index m = 9) as inputs. We describe and apply six machine learning (ML) classification methods, and an ensemble of them, to objectively discriminate between keratoconic and non-keratoconic corneal shapes. Earlier attempts by other authors have successfully implemented several machine learning models using different parameters (usually indirect measurements) and obtained positive results. Given the importance and ubiquity of Zernike polynomials in the eye care community, our proposal should be a suitable addition to current methods and might serve as a prescreening test. In this project we work with 475 corneas, classified by experts into two groups: 50 keratoconic and 425 non-keratoconic. All six models yield strong results, with accuracies above 98%, precisions above 97%, and sensitivities above 93%. By building an ensemble of the models, we further improve the classification: for example, we found an accuracy of 99.7%, a precision of 99.8%, and a sensitivity of 98.3%. The model can easily be implemented in any system and is very simple to use, providing ophthalmologists with an effortless and powerful tool for making a first diagnosis.
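The ensemble idea can be sketched with a soft-voting combination of classifiers over the 55 Zernike coefficients. The three member models and the synthetic data below are assumptions; the paper's six classifiers are not listed here.

```python
# Sketch: soft-voting ensemble over Zernike-coefficient inputs.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(475, 55))         # 475 corneas x 55 Zernike coefficients
y = np.zeros(475, dtype=int)
y[:50] = 1                             # 50 keratoconic, 425 non-keratoconic

ensemble = VotingClassifier([
    ("lr", LogisticRegression(max_iter=1000)),
    ("svm", SVC(probability=True)),
    ("rf", RandomForestClassifier(random_state=0)),
], voting="soft")
ensemble.fit(X, y)
```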
{"title":"Machine learning algorithms for classifying corneas by Zernike descriptors","authors":"María S. del Río , Juan P. Trevino","doi":"10.1016/j.ibmed.2022.100081","DOIUrl":"https://doi.org/10.1016/j.ibmed.2022.100081","url":null,"abstract":"<div><p>Keratoconus is the most common primary ectasia, as the treatment is not easy, its early diagnosis is essential. The main goal of this study is to develop a method for classification of specific types of corneal shapes where 55 Zernike coefficients (angular index <em>m</em> = 9) are used as inputs. We describe and apply six Machine Learning (ML) classification methods and an ensemble of them to objectively discriminate between keratoconic and non-keratoconic corneal shapes. Earlier attempts by other authors have successfully implemented several Machine Learning models using different parameters (usually, indirect measurements) and have obtained positive results. Given the importance and ubiquity of Zernike polynomials in the eye care community, our proposal should be a suitable choice to incorporate to current methods which might serve as a prescreening test. In this project we work with 475 corneas, classified by experts in two groups, 50 keratoconics and 425 non-keratoconics. All six models yield high rated results with accuracies above 98%, precisions above 97%, or sensitivities above 93%. Also, by building an assembly with the models, we further improve the accuracy of our classification, for example we found an accuracy of 99.7%, a precision of 99.8% and sensitivity of 98.3%. The model can be easily implemented in any system, being very simple to use, thus providing ophthalmologists with a effortless and powerful tool to make a first diagnosis.</p></div>","PeriodicalId":73399,"journal":{"name":"Intelligence-based medicine","volume":"7 ","pages":"Article 100081"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49857634","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-01-01 DOI: 10.1016/j.ibmed.2023.100107
Automatic generation of operation notes in endoscopic pituitary surgery videos using workflow recognition
Adrito Das, Danyal Z. Khan, John G. Hanrahan, Hani J. Marcus, Danail Stoyanov
Operation notes are a crucial component of patient care. However, writing them manually is prone to human error, particularly in high-pressure clinical environments. Automatic generation of operation notes from video recordings can alleviate some of the administrative burden, improve accuracy, and provide additional information. To achieve this for endoscopic pituitary surgery, 27 steps were identified via expert consensus. Then, for the 97 videos recorded for this study, a timestamp for each step was annotated by an expert surgeon. To automatically determine whether a step is present in a video, a three-stage architecture was created. First, for each step, a convolutional neural network performed binary image classification on each frame of a video. Second, for each step, the binary frame classifications were passed to a discriminator for binary video classification. Third, for each video, the binary video classifications were passed to an accumulator for multi-label step classification. The architecture was trained on 77 videos and tested on 20 videos, achieving a 0.80 weighted-F1 score. The classifications were input into a clinically based predefined template and further enriched with additional video analytics. This work demonstrates that automatic generation of operative notes from surgical videos is feasible and can assist surgeons with documentation.
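The three-stage decision flow might be sketched as follows; the synthetic per-frame scores and the mean-score threshold are stand-ins for the trained CNN and the learned discriminator.

```python
# Sketch: per-step frame scores (stage 1), per-step video decision (stage 2),
# accumulated multi-label set of detected steps (stage 3).
import numpy as np

N_STEPS, N_FRAMES = 27, 500
frame_scores = np.random.default_rng(0).random((N_STEPS, N_FRAMES))  # stage 1

video_flags = frame_scores.mean(axis=1) > 0.5     # stage 2: step in video?
steps_present = np.flatnonzero(video_flags)       # stage 3: multi-label result
print(f"steps detected: {steps_present.tolist()}")
```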
{"title":"Automatic generation of operation notes in endoscopic pituitary surgery videos using workflow recognition","authors":"Adrito Das , Danyal Z. Khan , John G. Hanrahan , Hani J. Marcus , Danail Stoyanov","doi":"10.1016/j.ibmed.2023.100107","DOIUrl":"https://doi.org/10.1016/j.ibmed.2023.100107","url":null,"abstract":"<div><p>Operation notes are a crucial component of patient care. However, writing them manually is prone to human error, particularly in high pressured clinical environments. Automatic generation of operation notes from video recordings can alleviate some of the administrative burdens, improve accuracy, and provide additional information. To achieve this for endoscopic pituitary surgery, 27-steps were identified via expert consensus. Then, for the 97-videos recorded for this study, a timestamp of each step was annotated by an expert surgeon. To automatically determine whether a step is present in a video, a three-stage architecture was created. Firstly, for each step, a convolution neural network was used for binary image classification on each frame of a video. Secondly, for each step, the binary frame classifications were passed to a discriminator for binary video classification. Thirdly, for each video, the binary video classifications were passed to an accumulator for multi-label step classification. The architecture was trained on 77-videos, and tested on 20-videos, where a 0.80 weighted-<em>F</em><sub>1</sub> score was achieved. The classifications were inputted into a clinically based predefined template, and further enriched with additional video analytics. This work therefore demonstrates automatic generation of operative notes from surgical videos is feasible, and can assist surgeons during documentation.</p></div>","PeriodicalId":73399,"journal":{"name":"Intelligence-based medicine","volume":"8 ","pages":"Article 100107"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49869238","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}