
Intelligence-based medicine: Latest Publications

Machine learning classification of vitamin D levels in spondyloarthritis patients
Pub Date : 2023-12-06 DOI: 10.1016/j.ibmed.2023.100125
Luis Ángel Calvo Pascual, David Castro Corredor, Eduardo César Garrido Merchán

Objectives

Predict the 25-dihydroxy 20-epi vitamin D3 level (low, medium, or high) in spondyloarthritis patients.

Methods

Observational, descriptive, and cross-sectional study. We collected information from 115 patients. From a total of 32 variables, we selected the most relevant using mutual information tests, and, finally, we estimated two classification models using machine learning.

Results

We obtain an interpretable decision tree and an ensemble maximizing the expected accuracy using Bayesian optimization and 10-fold cross-validation over a preprocessed dataset.

Conclusion

We identify relevant variables not considered in previous research, such as age and post-treatment. We also estimate more flexible and high-capacity models using advanced data science techniques.
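
A minimal sketch of the pipeline the abstract describes, using scikit-learn: mutual-information feature selection followed by a decision tree tuned with 10-fold cross-validation. The synthetic data, the hyperparameter grid, and the use of RandomizedSearchCV in place of the authors' Bayesian optimization are illustrative assumptions, not the published configuration.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.tree import DecisionTreeClassifier
from sklearn.pipeline import Pipeline
from sklearn.model_selection import RandomizedSearchCV

rng = np.random.default_rng(0)
X = rng.normal(size=(115, 32))        # 115 patients, 32 candidate variables (synthetic)
y = rng.integers(0, 3, size=115)      # vitamin D level: 0 = low, 1 = medium, 2 = high

pipeline = Pipeline([
    ("select", SelectKBest(score_func=mutual_info_classif)),   # mutual-information filter
    ("tree", DecisionTreeClassifier(random_state=0)),          # interpretable classifier
])

# Randomized search stands in here for the Bayesian optimization used in the paper.
search = RandomizedSearchCV(
    pipeline,
    param_distributions={
        "select__k": [5, 10, 15, 20],
        "tree__max_depth": [2, 3, 4, 5, None],
        "tree__min_samples_leaf": [1, 2, 5, 10],
    },
    n_iter=20,
    cv=10,                            # 10-fold cross-validation, as in the abstract
    scoring="accuracy",
    random_state=0,
)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```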

Citations: 0
Feed-forward networks using logistic regression and support vector machine for whole-slide breast cancer histopathology image classification
Pub Date : 2023-12-02 DOI: 10.1016/j.ibmed.2023.100126
ArunaDevi Karuppasamy, Abdelhamid Abdesselam, Rachid Hedjam, Hamza Zidoum, Maiya Al-Bahri

The performance of image classification depends on the efficiency of the feature learning process. This process is a challenging task that traditionally requires prior knowledge from domain experts. Recently, representation learning was introduced to extract features directly from raw images without any prior knowledge. Deep learning using a Convolutional Neural Network (CNN) has gained massive attention for image classification, as it achieves remarkable accuracy that sometimes exceeds human performance. However, this type of network learns features by back-propagation, which requires a huge amount of training data and suffers from the vanishing gradient problem that deteriorates feature learning. The forward-propagation approach instead uses predefined filters, or filters learned outside the model, applied in a feed-forward manner; it has been shown to achieve good results with small labeled datasets. In this work, we investigate the suitability of two feed-forward methods, the Convolutional Logistic Regression Network (CLR) and the Convolutional Support Vector Machine Network for Histopathology Images (CSVM-H). The experiments we conducted on two small breast cancer datasets (the Sultan Qaboos University Hospital (SQUH) dataset and the BreaKHis dataset) demonstrate the advantage of feed-forward approaches over traditional back-propagation ones. The proposed models CLR and CSVM-H were faster to train and achieved better classification performance than the traditional back-propagation methods (VggNet-16 and ResNet-50) on the SQUH dataset. Importantly, CLR and CSVM-H efficiently learn representations from small amounts of breast cancer whole-slide images and achieve an AUC of 0.83 and 0.84, respectively, on the SQUH dataset. Moreover, the proposed models reduce the memory footprint of whole-slide histopathology image classification, since their training time is significantly reduced compared to traditional CNNs on the SQUH and BreaKHis datasets.
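
A minimal sketch of the feed-forward idea the abstract contrasts with back-propagation: a fixed filter bank applied in a single forward pass, with pooled responses fed to a logistic regression (CLR-style) or linear SVM (CSVM-style) head. The random filters, image sizes, and pooling choices are illustrative assumptions; the paper learns its filters outside the model rather than sampling them randomly.

```python
import numpy as np
from scipy.signal import convolve2d
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
images = rng.random((40, 64, 64))        # toy grayscale patches
labels = rng.integers(0, 2, size=40)     # toy benign vs. malignant labels

filters = rng.normal(size=(8, 5, 5))     # fixed 5x5 filter bank (assumed, not learned here)

def forward_features(img):
    """Single forward pass: convolve, rectify, and pool each filter response."""
    feats = []
    for f in filters:
        response = np.maximum(convolve2d(img, f, mode="valid"), 0)  # ReLU-like rectification
        feats.extend([response.mean(), response.max()])             # average and max pooling
    return feats

X = np.array([forward_features(img) for img in images])

clr = LogisticRegression(max_iter=1000).fit(X, labels)   # CLR-style classification head
csvm = LinearSVC().fit(X, labels)                        # CSVM-style classification head
print(clr.score(X, labels), csvm.score(X, labels))
```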

Citations: 0
Fully automated evaluation of paraspinal muscle morphology and composition in patients with low back pain
Pub Date : 2023-11-30 DOI: 10.1016/j.ibmed.2023.100130
Paolo Giaccone, Federico D'Antoni, Fabrizio Russo, Manuel Volpecina, Carlo Augusto Mallio, Giuseppe Francesco Papalia, Gianluca Vadalà, Vincenzo Denaro, Luca Vollero, Mario Merone

Chronic Low Back Pain (LBP) is one of the most prevalent musculoskeletal conditions and is the leading cause of disability worldwide. The morphology and composition of the lumbar paraspinal muscles, in terms of infiltrated adipose tissue, constitute important guidelines for diagnosis and treatment choice but still require manual procedures to be assessed. We developed a fully automated artificial intelligence-based algorithm both to segment paraspinal muscles from MRI scans through a U-Net architecture and to estimate the amount of fatty infiltration by a home-made intensity- and region-based processing step; we further validated our results by statistically assessing the accuracy and agreement between our automated measures and the clinically reported values, achieving Dice scores greater than 95% on the preliminary segmentation task as well as an excellent degree of agreement on the subsequent area estimates (ICC(2,1) = 0.89). Furthermore, we employed an external public dataset to validate our model's generalization ability, reaching Dice scores greater than 94% with an average processing time of 21.92 s (±3.38 s) per subject. Hence, a deterministic and reliable measuring tool is proposed, without any manual confounding effect, to efficiently support daily clinical practice in LBP management.
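
A minimal sketch of the measurement step that follows segmentation in the pipeline above: given a muscle mask (produced by the U-Net in the paper) and the corresponding MRI slice, estimate the fatty-infiltration fraction by thresholding voxel intensities inside the mask. The threshold value and array shapes are illustrative assumptions, not the authors' calibrated intensity- and region-based procedure.

```python
import numpy as np

def fat_infiltration_fraction(mri_slice: np.ndarray,
                              muscle_mask: np.ndarray,
                              fat_threshold: float) -> float:
    """Fraction of voxels inside the muscle mask whose intensity exceeds the
    fat threshold (fat appears bright on T1-weighted images). The threshold
    is an assumed, illustrative value."""
    muscle_voxels = mri_slice[muscle_mask > 0]
    if muscle_voxels.size == 0:
        return 0.0
    return float((muscle_voxels > fat_threshold).mean())

# toy example with a synthetic slice and a square "muscle" mask
rng = np.random.default_rng(0)
slice_ = rng.random((128, 128))
mask = np.zeros((128, 128), dtype=np.uint8)
mask[40:90, 40:90] = 1
print(fat_infiltration_fraction(slice_, mask, fat_threshold=0.8))
```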

Citations: 0
Malaysian cough sound analysis and COVID-19 classification with deep learning
Pub Date : 2023-11-26 DOI: 10.1016/j.ibmed.2023.100129
Sarah Jane Kho, Brian Loh Chung Shiong, Vong Wan-Tze, Law Kian Boon, Mohan Dass Pathmanathan, Mohd Aizuddin Bin Abdul Rahman, Kuan Pei Xuan, Wan Nabila Binti Wan Hanafi, Kalaiarasu M. Peariasamy, Patrick Then Hang Hui

The use of cough sounds as a diagnostic tool for various respiratory illnesses, including COVID-19, has gained significant attention in recent years. Artificial intelligence (AI) has been employed in cough sound analysis to provide a quick and convenient pre-screening tool for COVID-19 detection. However, few works have employed segmentation to standardize cough sounds, and most models are trained on datasets from a single source. In this paper, a deep learning framework is proposed that uses the Mini VGGNet model and segmentation methods for COVID-19 detection from cough sounds. In addition, data augmentation was studied to investigate its effect on model performance when applied to individual cough sounds. The framework includes both single-dataset and cross-dataset model training and testing, using data from the University of Cambridge, the Coswara project, and the National Institute of Health (NIH) Malaysia. Results demonstrate that the use of segmented cough sounds significantly improves the performance of the trained models. In addition, the findings suggest that applying data augmentation to individual cough sounds does not improve model performance. The proposed framework achieved an optimal test accuracy of 0.921, an AUC of 0.973, a precision of 0.910, and a recall of 0.910 for a model trained on a combination of the three datasets using non-augmented data. The findings of this study highlight the importance of segmentation and the use of diverse datasets for AI-based COVID-19 detection through cough sounds. Furthermore, the proposed framework provides a foundation for extending the use of deep learning to detecting other pulmonary diseases and studying the signal properties of cough sounds from various respiratory illnesses.
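
A minimal sketch of the kind of pipeline the abstract describes: convert a segmented cough to a log-mel spectrogram and classify it with a small VGG-style CNN. The layer sizes, sample rate, and synthetic waveform are illustrative assumptions and do not reproduce the authors' Mini VGGNet or their segmentation method.

```python
import numpy as np
import librosa
import tensorflow as tf

def cough_to_logmel(waveform, sr=16000, n_mels=64):
    """Log-mel spectrogram of a (already segmented) cough waveform."""
    mel = librosa.feature.melspectrogram(y=waveform, sr=sr, n_mels=n_mels)
    return librosa.power_to_db(mel)

def build_small_vgg(input_shape, n_classes=2):
    """A small VGG-style CNN; layer sizes are assumed for illustration."""
    return tf.keras.Sequential([
        tf.keras.Input(shape=input_shape),
        tf.keras.layers.Conv2D(16, 3, activation="relu", padding="same"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu", padding="same"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(n_classes, activation="softmax"),  # COVID / non-COVID
    ])

# toy usage with a synthetic one-second "cough"
wave = np.random.default_rng(0).normal(size=16000).astype(np.float32)
spec = cough_to_logmel(wave)[..., np.newaxis]          # (n_mels, frames, 1)
model = build_small_vgg(spec.shape)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
print(model.predict(spec[np.newaxis, ...]).shape)      # (1, 2)
```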

Citations: 0
Equivalence of pathologists' and rule-based parser's annotations of Dutch pathology reports
Pub Date : 2023-01-01 DOI: 10.1016/j.ibmed.2022.100083
Gerard TN. Burger, Ameen Abu-Hanna, Nicolette F. de Keizer, Huibert Burger, Ronald Cornet

Introduction

In the Netherlands, pathology reports are annotated using a nationwide pathology network (PALGA) thesaurus. Annotations must address topography, procedure, and diagnosis.

The Pathology Report Annotation Module (PRAM) can be used to annotate the report conclusion with PALGA-compliant code series. The equivalence of these generated annotations to manual annotations is unknown. We assess the equivalence of annotations by authoring pathologists, pathologists participating in this study, and PRAM.

Methods

New annotations were created for one thousand histopathology reports by PRAM and by a pathologist panel. We calculated the dissimilarity of annotations using a semantic distance measure, Minimal Transition Cost (MTC). In the absence of a gold standard, we compared dissimilarity scores that shared one common annotator. The resulting comparisons yielded a measure of the coding dissimilarity between PRAM, the pathologist panel, and the authoring pathologist. To compare the comprehensiveness of the coding methods, we assessed the number and length of the annotations.

Results

Eight of the twelve comparisons of dissimilarity scores were significantly equivalent. Non-equivalent score pairs involved dissimilarity between the code series by the original pathologist and the panel pathologists.

Coding dissimilarity was lowest for procedures, highest for diagnoses: MTC overall = 0.30, topographies = 0.22, procedures = 0.13, diagnoses = 0.33.

Both the number and length of annotations per report increased with report conclusion length, most markedly in PRAM-annotated conclusions: conclusion length ranged from 2 to 373 words; the number of annotations ranged from 1 to 10 for pathologists and from 1 to 19 for PRAM; annotation length ranged from 3 to 43 codes for pathologists and from 4 to 123 for PRAM.

Conclusions

We measured annotation similarity among PRAM, authoring pathologists, and panel pathologists. Annotations by PRAM, by the panel pathologists, and, to a lesser extent, by the authoring pathologist were equivalent. Therefore, the use of PRAM annotations in a practical setting is justified. PRAM annotations are equivalent to study-setting annotations and more comprehensive than routine coding. Further research on annotation quality is needed.
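
A minimal sketch of the comparison set-up described in the Methods: each report carries annotation code series from two coders, and a dissimilarity score is averaged over reports. A plain normalized edit distance on the code sequences stands in here for the paper's semantic Minimal Transition Cost (MTC), and the example codes are invented for illustration.

```python
from difflib import SequenceMatcher

def code_dissimilarity(codes_a, codes_b):
    """1 minus the similarity of two annotation code series (0 = identical).
    SequenceMatcher is a stand-in for the paper's semantic MTC measure."""
    return 1.0 - SequenceMatcher(None, codes_a, codes_b).ratio()

def mean_dissimilarity(pairs):
    """Average dissimilarity over all reports annotated by both coders."""
    return sum(code_dissimilarity(a, b) for a, b in pairs) / len(pairs)

# toy example: three reports annotated by PRAM and by a panel pathologist
pairs = [
    (["colon", "biopsy", "adenocarcinoma"], ["colon", "biopsy", "adenocarcinoma"]),
    (["skin", "excision", "naevus"],        ["skin", "excision", "melanoma"]),
    (["lung", "resection", "carcinoid"],    ["lung", "biopsy", "carcinoid"]),
]
print(round(mean_dissimilarity(pairs), 3))
```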

Citations: 0
A new convolutional neural network-construct for sepsis enhances pattern identification of microcirculatory dysfunction
Pub Date : 2023-01-01 DOI: 10.1016/j.ibmed.2023.100106
Carolina Toledo Ferraz, Ana Maria Alvim Liberatore, Tatiane Lissa Yamada, Ivan Hong Jun Koh

Background

Triggers of organ dysfunction have been associated with the worsening of microcirculatory dysfunction in sepsis, and because microcirculatory changes occur before macro-hemodynamic abnormalities, monitoring them can potentially detect disease progression early. The difficulty in distinguishing the altered microcirculatory characteristics that correspond to varying stages of sepsis severity has been a limiting factor for the use of microcirculatory imaging as a diagnostic and prognostic tool in sepsis. The aim of this study was to develop a convolutional neural network (CNN) based on images of progressive sublingual microcirculatory dysfunction in sepsis, and to test its diagnostic accuracy for these progressive stages.

Methods

Sepsis was induced in Wistar rats (2 mL of E. coli at 10⁸ CFU/mL inoculated into the jugular vein); sham animals injected with 2 mL of saline served as controls. Sublingual microvessels and the surrounding tissue of all animals were imaged by sidestream dark field (SDF) imaging at T0 (baseline) and at T2, T4, and T6 h after sepsis induction. From a total of 137 videos, 37,930 frames were extracted; 29,341 were used to train ResNet-50 (the CNN-construct), and the remaining 8,589 were used to validate accuracy.

Results

The CNN-construct successfully classified the various stages of sepsis with high accuracy (97.07%). The average AUC of the ROC curve was 0.9833, and sensitivity and specificity ranged from 94.57% to 99.91% across all time points.

Conclusions

By blind testing with new sublingual microscopy images captured at different periods of the acute phase of sepsis, the CNN-construct was able to accurately diagnose the four stages of sepsis severity. Thus, this new method has diagnostic potential for different stages of microcirculatory dysfunction and enables the prediction of clinical evolution and therapeutic efficacy. Automated simultaneous assessment of multiple characteristics, of both the microvessels and the adjacent tissue, may account for this diagnostic capability. As such a task cannot be analyzed with human visual criteria alone, the CNN is a novel method for identifying the different stages of sepsis by assessing the distinct features of each stage.
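
A minimal sketch of the model family the Methods describe: a ResNet-50 backbone with a small classification head over the four time points (T0, T2, T4, T6). The input size, ImageNet initialization, frozen backbone, and training settings are illustrative assumptions rather than the authors' exact configuration.

```python
import tensorflow as tf

# ResNet-50 backbone; ImageNet weights are an assumed starting point.
backbone = tf.keras.applications.ResNet50(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))
backbone.trainable = False                    # train only the new head first (assumed)

model = tf.keras.Sequential([
    backbone,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(4, activation="softmax"),   # T0 / T2 / T4 / T6
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(...) would then be run on the extracted SDF video frames.
```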

Citations: 0
Predicting Hospital Readmission Risk in Patients with Severe Bronchopulmonary Dysplasia: Exploring the Impact of Neighborhood-Level Social Determinants of Health
Pub Date : 2023-01-01 DOI: 10.1016/j.ibmed.2023.100122
Tyler Gorham, Audrey Anand, Jay Anand, Steve Rust, George El-Ferzli
{"title":"Predicting Hospital Readmission Risk in Patients with Severe Bronchopulmonary Dysplasia: Exploring the Impact of Neighborhood-Level Social Determinants of Health","authors":"Tyler Gorham ,&nbsp;Audrey Anand ,&nbsp;Jay Anand ,&nbsp;Steve Rust ,&nbsp;George El-Ferzli","doi":"10.1016/j.ibmed.2023.100122","DOIUrl":"https://doi.org/10.1016/j.ibmed.2023.100122","url":null,"abstract":"","PeriodicalId":73399,"journal":{"name":"Intelligence-based medicine","volume":"8 ","pages":"Article 100122"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2666521223000364/pdfft?md5=3d3b010d91d948080e99be280dfec786&pid=1-s2.0-S2666521223000364-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138558705","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Discriminating Acute Respiratory Distress Syndrome from other forms of respiratory failure via iterative machine learning
Pub Date : 2023-01-01 DOI: 10.1016/j.ibmed.2023.100087
Babak Afshin-Pour, Michael Qiu, Shahrzad Hosseini Vajargah, Helen Cheyne, Kevin Ha, Molly Stewart, Jan Horsky, Rachel Aviv, Nasen Zhang, Mangala Narasimhan, John Chelico, Gabriel Musso, Negin Hajizadeh

Acute Respiratory Distress Syndrome (ARDS) is associated with high morbidity and mortality. Identification of ARDS enables lung-protective strategies, quality-improvement interventions, and clinical trial enrolment, but it remains challenging, particularly in the first 24 hours of mechanical ventilation. To address this, we built an algorithm capable of discriminating ARDS from other similarly presenting disorders immediately following mechanical ventilation. Specifically, a clinical team examined medical records from 1263 ICU-admitted, mechanically ventilated patients, retrospectively assigning each patient a diagnosis of "ARDS" or "non-ARDS" (e.g., pulmonary edema). Exploiting data readily available in the clinical setting, including patient demographics, laboratory test results from before the initiation of mechanical ventilation, and features extracted by natural language processing of radiology reports, we applied an iterative pre-processing and machine learning framework. The resulting model successfully discriminated ARDS from non-ARDS causes of respiratory failure (AUC = 0.85) among patients meeting Berlin criteria for severe hypoxia. This analysis also highlighted novel patient variables that were informative for identifying ARDS in ICU settings.
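
A minimal sketch of the general approach described above: combine structured pre-ventilation variables with features extracted from radiology report text and train a classifier to separate ARDS from non-ARDS respiratory failure. The toy data, the TF-IDF text features standing in for the paper's NLP pipeline, and the logistic regression model are illustrative assumptions.

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# toy data: two structured variables plus free-text radiology impressions
df = pd.DataFrame({
    "age": [67, 54, 71, 45],
    "pao2_fio2": [95, 180, 80, 210],
    "report_text": [
        "bilateral diffuse opacities consistent with ards",
        "cardiomegaly with interstitial edema",
        "bilateral infiltrates, no effusion",
        "right lower lobe consolidation",
    ],
    "ards": [1, 0, 1, 0],
})

features = ColumnTransformer([
    ("labs", "passthrough", ["age", "pao2_fio2"]),   # structured variables
    ("text", TfidfVectorizer(), "report_text"),      # simple NLP stand-in
])

model = Pipeline([("features", features),
                  ("clf", LogisticRegression(max_iter=1000))])
model.fit(df[["age", "pao2_fio2", "report_text"]], df["ards"])
print(model.predict(df[["age", "pao2_fio2", "report_text"]]))
```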

Citations: 1
Machine learning algorithms for classifying corneas by Zernike descriptors
Pub Date : 2023-01-01 DOI: 10.1016/j.ibmed.2022.100081
María S. del Río, Juan P. Trevino

Keratoconus is the most common primary ectasia; as its treatment is not easy, early diagnosis is essential. The main goal of this study is to develop a method for classifying specific types of corneal shapes in which 55 Zernike coefficients (angular index m = 9) are used as inputs. We describe and apply six Machine Learning (ML) classification methods, and an ensemble of them, to objectively discriminate between keratoconic and non-keratoconic corneal shapes. Earlier attempts by other authors have successfully implemented several Machine Learning models using different parameters (usually indirect measurements) and have obtained positive results. Given the importance and ubiquity of Zernike polynomials in the eye care community, our proposal should be a suitable choice to incorporate into current methods, and it might serve as a prescreening test. In this project we work with 475 corneas, classified by experts into two groups: 50 keratoconic and 425 non-keratoconic. All six models yield highly rated results, with accuracies above 98%, precisions above 97%, or sensitivities above 93%. Also, by building an ensemble of the models, we further improve the accuracy of our classification; for example, we found an accuracy of 99.7%, a precision of 99.8%, and a sensitivity of 98.3%. The model can be easily implemented in any system and is very simple to use, thus providing ophthalmologists with an effortless and powerful tool for making a first diagnosis.
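
A minimal sketch of the set-up described above: 55 Zernike coefficients per cornea as the feature vector, several standard classifiers, and a voting ensemble over them, evaluated with cross-validation. The synthetic data and the particular classifiers are illustrative assumptions; the paper's six models and their tuning are not reproduced.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(475, 55))                       # 475 corneas x 55 Zernike coefficients (synthetic)
y = np.r_[np.ones(50), np.zeros(425)].astype(int)    # 50 keratoconic, 425 non-keratoconic

# Soft-voting ensemble over three standard classifiers (assumed choices).
ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("svm", SVC(probability=True, random_state=0)),
    ],
    voting="soft",
)
print(cross_val_score(ensemble, X, y, cv=5, scoring="accuracy").mean())
```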

Citations: 0
Automatic generation of operation notes in endoscopic pituitary surgery videos using workflow recognition
Pub Date : 2023-01-01 DOI: 10.1016/j.ibmed.2023.100107
Adrito Das, Danyal Z. Khan, John G. Hanrahan, Hani J. Marcus, Danail Stoyanov

Operation notes are a crucial component of patient care. However, writing them manually is prone to human error, particularly in high-pressure clinical environments. Automatic generation of operation notes from video recordings can alleviate some of the administrative burden, improve accuracy, and provide additional information. To achieve this for endoscopic pituitary surgery, 27 steps were identified via expert consensus. Then, for the 97 videos recorded for this study, a timestamp for each step was annotated by an expert surgeon. To automatically determine whether a step is present in a video, a three-stage architecture was created. First, for each step, a convolutional neural network was used for binary image classification on each frame of a video. Second, for each step, the binary frame classifications were passed to a discriminator for binary video classification. Third, for each video, the binary video classifications were passed to an accumulator for multi-label step classification. The architecture was trained on 77 videos and tested on 20 videos, achieving a weighted F1 score of 0.80. The classifications were input into a clinically based, predefined template and further enriched with additional video analytics. This work therefore demonstrates that automatic generation of operative notes from surgical videos is feasible and can assist surgeons with documentation.
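
A minimal sketch of the second and third stages of the architecture described above, with the per-frame CNN outputs taken as given: a discriminator turns per-step frame probabilities into a per-video binary decision, and an accumulator collects those decisions into the multi-label set of steps present. The simple thresholding rule stands in for the learned discriminator, and its parameters are illustrative assumptions.

```python
import numpy as np

N_STEPS, N_FRAMES = 27, 500
rng = np.random.default_rng(0)
# Stage 1 output (assumed): probability that each frame shows each surgical step.
frame_probs = rng.random((N_STEPS, N_FRAMES))

def discriminate(step_probs, threshold=0.6, min_fraction=0.05):
    """Stage 2: declare a step present if enough frames are confidently positive
    (a threshold rule standing in for the learned discriminator)."""
    return (step_probs > threshold).mean() > min_fraction

def accumulate(frame_probs):
    """Stage 3: multi-label classification, i.e. the set of steps present in the video."""
    return [step for step in range(frame_probs.shape[0])
            if discriminate(frame_probs[step])]

steps_present = accumulate(frame_probs)
print(f"{len(steps_present)} of {N_STEPS} steps detected:", steps_present)
```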

Citations: 0