
Latest publications from NPJ Digital Medicine

Can artificial intelligence improve medicine’s uncomfortable relationship with Maths?
IF 12.4, Medicine Tier 1, Q1 Computer Science, Pub Date: 2024-06-22, DOI: 10.1038/s41746-024-01168-8
Alexandra Valetopoulou, Simon Williams, Hani J. Marcus
{"title":"Can artificial intelligence improve medicine’s uncomfortable relationship with Maths?","authors":"Alexandra Valetopoulou, Simon Williams, Hani J. Marcus","doi":"10.1038/s41746-024-01168-8","DOIUrl":"10.1038/s41746-024-01168-8","url":null,"abstract":"","PeriodicalId":19349,"journal":{"name":"NPJ Digital Medicine","volume":null,"pages":null},"PeriodicalIF":12.4,"publicationDate":"2024-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.nature.com/articles/s41746-024-01168-8.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141439834","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Validation and application of computer vision algorithms for video-based tremor analysis
IF 12.4, Medicine Tier 1, Q1 Computer Science, Pub Date: 2024-06-21, DOI: 10.1038/s41746-024-01153-1
Maximilian U. Friedrich, Anna-Julia Roenn, Chiara Palmisano, Jane Alty, Steffen Paschen, Guenther Deuschl, Chi Wang Ip, Jens Volkmann, Muthuraman Muthuraman, Robert Peach, Martin M. Reich
Tremor is one of the most common neurological symptoms. Its clinical and neurobiological complexity necessitates novel approaches for granular phenotyping. Instrumented neurophysiological analyses have proven useful, but are highly resource-intensive and lack broad accessibility. In contrast, bedside scores are simple to administer, but lack the granularity to capture subtle but relevant tremor features. We utilise the open-source computer vision pose tracking algorithm Mediapipe to track hands in clinical video recordings and use the resulting time series to compute canonical tremor features. This approach is compared to marker-based 3D motion capture, wrist-worn accelerometry, clinical scoring and a second, specifically trained tremor-specific algorithm in two independent clinical cohorts. These cohorts consisted of 66 patients diagnosed with essential tremor, assessed in different task conditions and states of deep brain stimulation therapy. We find that Mediapipe-derived tremor metrics exhibit high convergent clinical validity to scores (Spearman’s ρ = 0.55–0.86, p≤ .01) as well as an accuracy of up to 2.60 mm (95% CI [−3.13, 8.23]) and ≤0.21 Hz (95% CI [−0.05, 0.46]) for tremor amplitude and frequency measurements, matching gold-standard equipment. Mediapipe, but not the disease-specific algorithm, was capable of analysing videos involving complex configurational changes of the hands. Moreover, it enabled the extraction of tremor features with diagnostic and prognostic relevance, a dimension which conventional tremor scores were unable to provide. Collectively, this demonstrates that current computer vision algorithms can be transformed into an accurate and highly accessible tool for video-based tremor analysis, yielding comparable results to gold standard tremor recordings.
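As a rough illustration of the kind of video-based pipeline described above, the sketch below tracks one hand with the open-source Mediapipe library and estimates a dominant tremor frequency from the wrist trajectory using Welch's method. The landmark choice, frequency band, and detection thresholds are assumptions for illustration, not the authors' validated pipeline.

```python
# Illustrative sketch only: track one hand with Mediapipe and estimate a
# dominant tremor frequency from the vertical wrist trajectory.
import cv2
import numpy as np
import mediapipe as mp
from scipy.signal import welch

def estimate_tremor_frequency(video_path: str) -> float:
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0   # fall back if FPS metadata is missing
    ys = []                                    # wrist y-position per frame (normalized coordinates)
    with mp.solutions.hands.Hands(static_image_mode=False,
                                  max_num_hands=1,
                                  min_detection_confidence=0.5) as hands:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if result.multi_hand_landmarks:
                wrist = result.multi_hand_landmarks[0].landmark[0]  # landmark 0 = wrist
                ys.append(wrist.y)
    cap.release()
    signal = np.asarray(ys) - np.mean(ys)      # remove the slow positional offset
    freqs, power = welch(signal, fs=fps, nperseg=min(256, len(signal)))
    band = (freqs >= 3) & (freqs <= 12)        # typical tremor band, an assumption
    return float(freqs[band][np.argmax(power[band])])
```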
Citations: 0
PatchSorter: a high throughput deep learning digital pathology tool for object labeling
IF 12.4, Medicine Tier 1, Q1 Computer Science, Pub Date: 2024-06-20, DOI: 10.1038/s41746-024-01150-4
Cédric Walker, Tasneem Talawalla, Robert Toth, Akhil Ambekar, Kien Rea, Oswin Chamian, Fan Fan, Sabina Berezowska, Sven Rottenberg, Anant Madabhushi, Marie Maillard, Laura Barisoni, Hugo Mark Horlings, Andrew Janowczyk
The discovery of patterns associated with diagnosis, prognosis, and therapy response in digital pathology images often requires intractable labeling of large quantities of histological objects. Here we release an open-source labeling tool, PatchSorter, which integrates deep learning with an intuitive web interface. Using >100,000 objects, we demonstrate a >7x improvement in labels per second over unaided labeling, with minimal impact on labeling accuracy, thus enabling high-throughput labeling of large datasets.
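The core idea of deep-learning-assisted bulk labeling, namely using a model's predictions to let a reviewer confirm or correct groups of visually similar objects at once, can be sketched as follows. This is a minimal illustration, not the PatchSorter implementation; the patch IDs and class probabilities are made up.

```python
# Minimal sketch: sort unlabelled patches by predicted class and confidence so a
# reviewer can accept or correct whole groups of similar objects in one action.
import numpy as np

def sort_patches_for_review(probs: np.ndarray, patch_ids: list):
    """probs: (n_patches, n_classes) softmax outputs from any classifier."""
    pred = probs.argmax(axis=1)   # provisional label per patch
    conf = probs.max(axis=1)      # model confidence per patch
    # Primary key: predicted class; secondary key: confidence, highest first.
    order = np.lexsort((-conf, pred))
    return [(patch_ids[i], int(pred[i]), float(conf[i])) for i in order]

# Example: three patches, two classes.
probs = np.array([[0.9, 0.1], [0.2, 0.8], [0.6, 0.4]])
print(sort_patches_for_review(probs, ["p1", "p2", "p3"]))
```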
Citations: 0
Development and validation of a smartphone-based deep-learning-enabled system to detect middle-ear conditions in otoscopic images
IF 12.4, Medicine Tier 1, Q1 Computer Science, Pub Date: 2024-06-20, DOI: 10.1038/s41746-024-01159-9
Constance Dubois, David Eigen, François Simon, Vincent Couloigner, Michael Gormish, Martin Chalumeau, Laurent Schmoll, Jérémie F. Cohen
Middle-ear conditions are common causes of primary care visits, hearing impairment, and inappropriate antibiotic use. Deep learning (DL) may assist clinicians in interpreting otoscopic images. This study included patients over 5 years old from an ambulatory ENT practice in Strasbourg, France, between 2013 and 2020. Digital otoscopic images were obtained using a smartphone-attached otoscope (Smart Scope, Karl Storz, Germany) and labeled by a senior ENT specialist across 11 diagnostic classes (reference standard). An Inception-v2 DL model was trained using 41,664 otoscopic images, and its diagnostic accuracy was evaluated by calculating class-specific estimates of sensitivity and specificity. The model was then incorporated into a smartphone app called i-Nside. The DL model was evaluated on a validation set of 3,962 images and a held-out test set comprising 326 images. On the validation set, all class-specific estimates of sensitivity and specificity exceeded 98%. On the test set, the DL model achieved a sensitivity of 99.0% (95% confidence interval: 94.5–100) and a specificity of 95.2% (91.5–97.6) for the binary classification of normal vs. abnormal images; wax plugs were detected with a sensitivity of 100% (94.6–100) and specificity of 97.7% (95.0–99.1); other class-specific estimates of sensitivity and specificity ranged from 33.3% to 92.3% and 96.0% to 100%, respectively. We present an end-to-end DL-enabled system able to achieve expert-level diagnostic accuracy for identifying normal tympanic aspects and wax plugs within digital otoscopic images. However, the system’s performance varied for other middle-ear conditions. Further prospective validation is necessary before wider clinical deployment.
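Class-specific sensitivity and specificity of the kind reported above can be computed one-vs-rest from predicted and true labels; the sketch below uses invented diagnostic classes purely for illustration and is not the study's evaluation code.

```python
# Sketch: one-vs-rest sensitivity and specificity for a chosen diagnostic class.
import numpy as np

def class_metrics(y_true, y_pred, positive_class):
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    tp = np.sum((y_pred == positive_class) & (y_true == positive_class))
    fn = np.sum((y_pred != positive_class) & (y_true == positive_class))
    tn = np.sum((y_pred != positive_class) & (y_true != positive_class))
    fp = np.sum((y_pred == positive_class) & (y_true != positive_class))
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Toy labels with hypothetical class names.
y_true = ["normal", "wax_plug", "normal", "otitis", "wax_plug"]
y_pred = ["normal", "wax_plug", "normal", "normal", "wax_plug"]
print(class_metrics(y_true, y_pred, "wax_plug"))  # (1.0, 1.0) on this toy data
```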
Citations: 0
Five million nights: temporal dynamics in human sleep phenotypes
IF 12.4, Medicine Tier 1, Q1 Computer Science, Pub Date: 2024-06-20, DOI: 10.1038/s41746-024-01125-5
Varun K. Viswanath, Wendy Hartogenesis, Stephan Dilchert, Leena Pandya, Frederick M. Hecht, Ashley E. Mason, Edward J. Wang, Benjamin L. Smarr
Sleep monitoring has become widespread with the rise of affordable wearable devices. However, converting sleep data into actionable change remains challenging as diverse factors can cause combinations of sleep parameters to differ both between people and within people over time. Researchers have attempted to combine sleep parameters to improve detecting similarities between nights of sleep. The cluster of similar combinations of sleep parameters from a night of sleep defines that night’s sleep phenotype. To date, quantitative models of sleep phenotype made from data collected from large populations have used cross-sectional data, which preclude longitudinal analyses that could better quantify differences within individuals over time. In analyses reported here, we used five million nights of wearable sleep data to test (a) whether an individual’s sleep phenotype changes over time and (b) whether these changes elucidate new information about acute periods of illness (e.g., flu, fever, COVID-19). We found evidence for 13 sleep phenotypes associated with sleep quality and that individuals transition between these phenotypes over time. Patterns of transitions significantly differ (i) between individuals (with vs. without a chronic health condition; chi-square test; p-value < 1e−100) and (ii) within individuals over time (before vs. during an acute condition; Chi-Square test; p-value < 1e−100). Finally, we found that the patterns of transitions carried more information about chronic and acute health conditions than did phenotype membership alone (longitudinal analyses yielded 2–10× as much information as cross-sectional analyses). These results support the use of temporal dynamics in the future development of longitudinal sleep analyses.
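A minimal sketch of this style of analysis follows: cluster nights into phenotypes, count night-to-night transitions, and compare transition patterns between two groups with a chi-square test. The synthetic sleep parameters, cluster count, and grouping are assumptions, not the study's actual pipeline.

```python
# Illustrative sketch: sleep phenotype clustering and transition-pattern comparison.
import numpy as np
from sklearn.cluster import KMeans
from scipy.stats import chi2_contingency

rng = np.random.default_rng(0)
# Toy per-night parameters (duration, efficiency, restlessness) for two users.
nights_a = rng.normal([7.2, 0.90, 0.1], 0.5, size=(200, 3))
nights_b = rng.normal([6.0, 0.80, 0.3], 0.5, size=(200, 3))

kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(np.vstack([nights_a, nights_b]))

def transition_counts(labels, n_states=4):
    counts = np.zeros((n_states, n_states))
    for prev_state, next_state in zip(labels[:-1], labels[1:]):
        counts[prev_state, next_state] += 1
    return counts

t_a = transition_counts(kmeans.predict(nights_a))
t_b = transition_counts(kmeans.predict(nights_b))
# Compare the two users' flattened transition counts in a contingency table.
table = np.vstack([t_a.ravel(), t_b.ravel()]) + 1  # +1 avoids empty cells
chi2, p, _, _ = chi2_contingency(table)
print(f"chi2={chi2:.1f}, p={p:.3g}")
```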
Citations: 0
Development of an effective predictive screening tool for prostate cancer using the ClarityDX machine learning platform
IF 12.4, Medicine Tier 1, Q1 Computer Science, Pub Date: 2024-06-20, DOI: 10.1038/s41746-024-01167-9
M. Eric Hyndman, Robert J. Paproski, Adam Kinnaird, Adrian Fairey, Leonard Marks, Christian P. Pavlovich, Sean A. Fletcher, Roman Zachoval, Vanda Adamcova, Jiri Stejskal, Armen Aprikian, Christopher J. D. Wallis, Desmond Pink, Catalina Vasquez, Perrin H. Beatty, John D. Lewis
The current prostate cancer (PCa) screening test, prostate-specific antigen (PSA), has a high sensitivity for PCa but low specificity for high-risk, clinically significant PCa (csPCa), resulting in overdiagnosis and overtreatment of non-csPCa. Early identification of csPCa while avoiding unnecessary biopsies in men with non-csPCa is challenging. We built an optimized machine learning platform (ClarityDX) and showed its utility in generating models predicting csPCa. Integrating the ClarityDX platform with blood-based biomarkers for clinically significant PCa and clinical biomarker data from a 3448-patient cohort, we developed a test to stratify patients’ risk of csPCa, called ClarityDX Prostate. When predicting high-risk cancer in the validation cohort, ClarityDX Prostate showed 95% sensitivity, 35% specificity, 54% positive predictive value, and 91% negative predictive value, at a ≥ 25% threshold. Using ClarityDX Prostate at this threshold could avoid up to 35% of unnecessary prostate biopsies. ClarityDX Prostate showed higher accuracy for predicting the risk of csPCa than PSA alone and the tested model-based risk calculators. Using this test as a reflex test in men with elevated PSA levels may help patients and their healthcare providers decide if a prostate biopsy is necessary.
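The reported operating characteristics follow from applying a fixed decision threshold to a continuous risk score. The sketch below computes sensitivity, specificity, PPV, NPV, and the fraction of biopsies avoided at a ≥ 0.25 cut-off on synthetic data; it is not the ClarityDX model.

```python
# Sketch: screening metrics of a risk score at a fixed decision threshold.
import numpy as np

def screening_metrics(risk_scores, has_cancer, threshold=0.25):
    risk_scores = np.asarray(risk_scores)
    has_cancer = np.asarray(has_cancer, dtype=bool)
    flagged = risk_scores >= threshold          # patients referred to biopsy
    tp = np.sum(flagged & has_cancer)
    fp = np.sum(flagged & ~has_cancer)
    fn = np.sum(~flagged & has_cancer)
    tn = np.sum(~flagged & ~has_cancer)
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "biopsies_avoided": np.mean(~flagged),  # fraction not referred
    }

# Synthetic scores and ground-truth labels for illustration.
scores = np.array([0.05, 0.40, 0.70, 0.10, 0.30, 0.20])
labels = np.array([0, 1, 1, 0, 0, 0])
print(screening_metrics(scores, labels))
```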
Citations: 0
Head movement dynamics in dystonia: a multi-centre retrospective study using visual perceptive deep learning
IF 12.4, Medicine Tier 1, Q1 Computer Science, Pub Date: 2024-06-18, DOI: 10.1038/s41746-024-01140-6
Robert Peach, Maximilian Friedrich, Lara Fronemann, Muthuraman Muthuraman, Sebastian R. Schreglmann, Daniel Zeller, Christoph Schrader, Joachim K. Krauss, Alfons Schnitzler, Matthias Wittstock, Ann-Kristin Helmers, Steffen Paschen, Andrea Kühn, Inger Marie Skogseid, Wilhelm Eisner, Joerg Mueller, Cordula Matthies, Martin Reich, Jens Volkmann, Chi Wang Ip
Dystonia is a neurological movement disorder characterised by abnormal involuntary movements and postures, particularly affecting the head and neck. However, current clinical assessment methods for dystonia rely on simplified rating scales which lack the ability to capture the intricate spatiotemporal features of dystonic phenomena, hindering clinical management and limiting understanding of the underlying neurobiology. To address this, we developed a visual perceptive deep learning framework that utilizes standard clinical videos to comprehensively evaluate and quantify disease states and the impact of therapeutic interventions, specifically deep brain stimulation. This framework overcomes the limitations of traditional rating scales and offers an efficient and accurate method that is rater-independent for evaluating and monitoring dystonia patients. To evaluate the framework, we leveraged semi-standardized clinical video data collected in three retrospective, longitudinal cohort studies across seven academic centres. We extracted static head angle excursions for clinical validation and derived kinematic variables reflecting naturalistic head dynamics to predict dystonia severity, subtype, and neuromodulation effects. The framework was also applied to a fully independent cohort of generalised dystonia patients for comparison between dystonia sub-types. Computer vision-derived measurements of head angle excursions showed a strong correlation with clinically assigned scores. Across comparisons, we identified consistent kinematic features from full video assessments encoding information critical to disease severity, subtype, and effects of neural circuit interventions, independent of static head angle deviations used in scoring. Our visual perceptive machine learning framework reveals kinematic pathosignatures of dystonia, potentially augmenting clinical management, facilitating scientific translation, and informing personalized precision neurology approaches.
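One way to obtain a head-angle time series from pose keypoints, under assumed landmark conventions, is sketched below; the angle definition and derived features are illustrative only and do not reproduce the authors' framework.

```python
# Rough sketch: head tilt angle per frame from two facial keypoints (e.g. ears)
# returned by a pose-estimation model, plus simple derived kinematics.
import numpy as np

def head_tilt_deg(left_ear_xy, right_ear_xy):
    """Angle of the inter-ear axis relative to the image horizontal, in degrees."""
    dx = right_ear_xy[0] - left_ear_xy[0]
    dy = right_ear_xy[1] - left_ear_xy[1]
    return float(np.degrees(np.arctan2(dy, dx)))

# Per-frame angles form a time series from which static excursions (e.g. the
# 95th percentile of |angle|) and kinematic features (velocity, spectra) follow.
angles = np.array([head_tilt_deg((100, 200), (180, 200 + off)) for off in range(-10, 11)])
angular_velocity = np.gradient(angles)   # degrees per frame
print(np.percentile(np.abs(angles), 95), angular_velocity.max())
```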
Citations: 0
Reply to: Association of accelerometer-measured physical activity intensity, sedentary time, and exercise time with incident Parkinson’s disease: need more evidence
IF 12.4, Medicine Tier 1, Q1 Computer Science, Pub Date: 2024-06-18, DOI: 10.1038/s41746-024-01155-z
Mengyi Liu, Xianhui Qin
{"title":"Reply to: Association of accelerometer-measured physical activity intensity, sedentary time, and exercise time with incident Parkinson’s disease: need more evidence","authors":"Mengyi Liu,&nbsp;Xianhui Qin","doi":"10.1038/s41746-024-01155-z","DOIUrl":"10.1038/s41746-024-01155-z","url":null,"abstract":"","PeriodicalId":19349,"journal":{"name":"NPJ Digital Medicine","volume":null,"pages":null},"PeriodicalIF":12.4,"publicationDate":"2024-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11189527/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141420092","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
From wearable sensor data to digital biomarker development: ten lessons learned and a framework proposal
IF 12.4, Medicine Tier 1, Q1 Computer Science, Pub Date: 2024-06-18, DOI: 10.1038/s41746-024-01151-3
Paola Daniore, Vasileios Nittas, Christina Haag, Jürgen Bernard, Roman Gonzenbach, Viktor von Wyl
Wearable sensor technologies are becoming increasingly relevant in health research, particularly in the context of chronic disease management. They generate real-time health data that can be translated into digital biomarkers, which can provide insights into our health and well-being. Scientific methods to collect, interpret, analyze, and translate health data from wearables to digital biomarkers vary, and systematic approaches to guide these processes are currently lacking. This paper is based on an observational, longitudinal cohort study, BarKA-MS, which collected wearable sensor data on the physical rehabilitation of people living with multiple sclerosis (MS). Based on our experience with BarKA-MS, we provide and discuss ten lessons we learned in relation to digital biomarker development across key study phases. We then summarize these lessons into a guiding framework (DACIA) that aims to inform the use of wearable sensor data for digital biomarker development and chronic disease management for future research and teaching.
Citations: 0
Digital health technologies need regulation and reimbursement that enable flexible interactions and groupings
IF 12.4, Medicine Tier 1, Q1 Computer Science, Pub Date: 2024-06-18, DOI: 10.1038/s41746-024-01147-z
Rebecca Mathias, Peter McCulloch, Anastasia Chalkidou, Stephen Gilbert
Digital Health Technologies (DHTs) are being applied in a widening range of scenarios in medicine. We describe the emerging phenomenon of the grouping of individual DHTs, with a clinical use case and regulatory approval in their own right, into packages to perform specific clinical tasks in defined settings. Example groupings include suites of devices for remote monitoring, or for smart clinics. In this first article of a two-article series, we describe challenges in implementation and limitations in frameworks for the regulation, health technology assessment, and reimbursement of these device suites and linked novel care pathways.
Citations: 0