
Latest publications in Biomedical Physics & Engineering Express

Learning the anatomical topology consistency driven by Wasserstein distance for weakly supervised 3D pancreas registration in multi-phase CT images.
IF 1.6 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2026-02-04 DOI: 10.1088/2057-1976/ae3966
Jiayu Lin, Liwen Zou, Yiming Gao, Liang Mao, Ziwei Nie

Accurate and automatic registration of the pancreas between contrast-enhanced CT (CECT) and non-contrast CT (NCCT) images is crucial for diagnosing and treating pancreatic cancer. However, existing deep learning-based methods remain limited due to inherent intensity differences between modalities, which impair intensity-based similarity metrics, and the pancreas's small size, vague boundaries, and complex surroundings, which trap segmentation-based metrics in local optima. To address these challenges, we propose a weakly supervised registration framework incorporating a novel mixed loss function. This loss leverages the Wasserstein distance to enforce anatomical topology consistency in 3D pancreas registration between CECT and NCCT. We employ distance transforms to build the small, uncertain, and complex anatomical topology distribution of the pancreas. Unlike conventional voxel-wise L1 or L2 losses, the Wasserstein distance directly measures the similarity between the warped and fixed anatomical topologies of the pancreas. Experiments on a dataset of 975 paired CECT-NCCT images from patients with seven pancreatic tumor types (PDAC, IPMN, MCN, SCN, SPT, CP, PNET) demonstrate that our method outperforms state-of-the-art weakly supervised approaches, achieving a 3.2% improvement in Dice score, a 28.54% reduction in false-positive segmentation rate, and a 0.89% reduction in Hausdorff distance. The source code will be made publicly available at https://github.com/ZouLiwen-1999/WSMorph.
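The core idea of the loss, comparing distance-transform-derived topology distributions with a Wasserstein distance, can be sketched as follows. This is an illustrative fragment only, not the authors' implementation: `topology_distance` and the toy masks are hypothetical, and the paper's full topology distribution is reduced here to the empirical distribution of Euclidean distance-transform values.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt
from scipy.stats import wasserstein_distance

def topology_distance(warped_mask, fixed_mask):
    """Compare two binary pancreas masks via the 1-D Wasserstein distance
    between the value distributions of their Euclidean distance transforms."""
    d_warped = distance_transform_edt(warped_mask)
    d_fixed = distance_transform_edt(fixed_mask)
    # Treat each distance map's voxel values as an empirical distribution.
    return wasserstein_distance(d_warped.ravel(), d_fixed.ravel())

# Toy 3D example: identical masks have zero topology distance.
mask = np.zeros((8, 8, 8), dtype=bool)
mask[2:6, 2:6, 2:6] = True
print(topology_distance(mask, mask))  # 0.0
```

Unlike a voxel-wise L1/L2 comparison, this measure stays informative even when the warped and fixed shapes barely overlap, which is the motivation the abstract gives for using it on a small, low-contrast organ.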

Citations: 0
Interobserver image registration variability impacts on stereotactic arrhythmia radioablation (STAR) target margins.
IF 1.6 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2026-02-04 DOI: 10.1088/2057-1976/ae3b44
Jeremy S Bredfeldt, Arianna Liles, Yue-Houng Hu, Dianne Ferguson, Christian Guthier, David Hu, Scott Friesen, Kolade Agboola, John Whitaker, Hubert Cochet, Usha Tedrow, Ray Mak, Kelly Fitzgerald

Background and purpose. To determine the interobserver variability in registrations of cardiac computed tomography (CT) images and to assess the margins needed to account for the observed variability in the context of stereotactic arrhythmia radioablation (STAR). Materials and methods. STAR targets were delineated on cardiac CTs for fifteen consecutive patients. Ten expert observers were asked to rigidly register the cardiac CT images to corresponding planning CT images. All registrations started with a fully automated registration step, followed by manual adjustments. The targets were transferred from the cardiac CT to the planning CT using each of the registrations, along with one consensus registration for each patient. The margin needed for the consensus target to encompass each of the observer and fully automated targets was measured. Results. A total of 150 registrations were evaluated for this study. Manual registrations required an average (standard deviation) of 5 min 55 s (2 min 10 s) to perform. The automated registration, without manual intervention, required an expansion of 6 mm to achieve 95% overlap for 97% of patients. For the manual registrations, an expansion of 4 mm achieved 95% overlap for 97% of the patients and observers; the remaining 3% required expansions of 4 to 9 mm. An expansion of 3 mm achieved 95% overlap in 88% of the cases. Some patients required larger expansions than others, and a small target volume was common among these more difficult cases. Neither breath-hold nor target position was observed to impact variability among observers. Some observers required larger expansions than others, and those requiring the largest margins were not the same from patient to patient. Conclusion. Registration of the cardiac CT to the planning CT contributed approximately 3 mm of uncertainty to the STAR targeting process. Accordingly, workflows in which target delineation is performed on cardiac CT should explicitly account for this uncertainty in the overall target margin assessment.
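The margin analysis described above, finding the smallest isotropic expansion of the consensus target that covers an observer's target, can be sketched as below. This is a simplified illustration under stated assumptions (cubic voxels, one-voxel-per-step morphological dilation); `margin_for_overlap` and the toy masks are hypothetical, and the study's exact expansion method is not reproduced here.

```python
import numpy as np
from scipy.ndimage import binary_dilation

def margin_for_overlap(consensus, observer, voxel_mm=1.0, target=0.95, max_iter=20):
    """Smallest isotropic expansion (in mm) of the consensus target needed
    to cover at least `target` of the observer's target volume."""
    expanded = consensus.copy()
    for i in range(max_iter + 1):
        overlap = np.logical_and(expanded, observer).sum() / observer.sum()
        if overlap >= target:
            return i * voxel_mm
        expanded = binary_dilation(expanded)  # grow by one voxel layer
    return None  # coverage target not reached within max_iter expansions

# Observer target shifted 2 voxels along one axis relative to consensus.
consensus = np.zeros((20, 20, 20), dtype=bool); consensus[5:10, 5:10, 5:10] = True
observer = np.zeros_like(consensus); observer[7:12, 5:10, 5:10] = True
print(margin_for_overlap(consensus, observer))  # 2.0
```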

Citations: 0
Investigating Functional Near-Infrared Spectroscopy Signal Variability: The Role of Processing Pipelines and Task Complexity.
IF 1.6 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2026-02-03 DOI: 10.1088/2057-1976/ae4105
Joshua Dugdale, Garrett Scott Black, Jordan Alexander Borrell

Functional near-infrared spectroscopy (fNIRS) is a portable, non-invasive brain imaging method with growing applications in neurorehabilitation. However, signal variability, driven in part by differences in data processing pipelines, remains a major barrier to its clinical adoption. This study compares the robustness of two common processing approaches, General Linear Model (GLM) and Block Averaging (BA), in detecting cortical activation across task complexities. Eighteen neurotypical, healthy adults completed a simple hand grasp task and a more complex gross manual dexterity task while fNIRS data were recorded and analyzed using the BA and GLM pipelines. Results revealed significant effects of both pipeline and task complexity on oxygenated and deoxygenated hemoglobin amplitudes. BA produced significantly larger responses than GLM, and complex tasks elicited significantly greater activation than simple tasks. Notably, only the BA-Complex subgroup showed significant differences from all other conditions, suggesting BA more effectively detects task-related hemodynamic changes. These findings emphasize the need for careful analysis pipeline selection to reduce variability and enhance fNIRS reliability in neurorehabilitation research.
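The Block Averaging (BA) pipeline compared above amounts to averaging baseline-corrected epochs around task onsets. A minimal sketch follows; `block_average` and the synthetic boxcar trace are hypothetical, and real fNIRS pipelines add filtering and motion-artifact correction before this step.

```python
import numpy as np

def block_average(signal, onsets, pre, post):
    """Block Averaging: average fixed-length epochs around task onsets,
    subtracting each epoch's pre-stimulus baseline mean."""
    epochs = []
    for t in onsets:
        epoch = signal[t - pre : t + post].copy()
        epoch -= epoch[:pre].mean()  # baseline-correct to pre-stimulus window
        epochs.append(epoch)
    return np.mean(epochs, axis=0)

# Synthetic HbO-like trace: a boxcar response repeated at known onsets.
onsets = [100, 300, 500]
signal = np.zeros(700)
for t in onsets:
    signal[t : t + 50] += 1.0
avg = block_average(signal, onsets, pre=20, post=80)
print(avg[:20].mean(), avg[20:70].mean())  # 0.0 1.0
```

A GLM pipeline would instead regress the full time series against a modeled hemodynamic response, which is one source of the amplitude differences the study reports.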

Citations: 0
Wireless in-ear EEG system for auditory brain-computer interface applications in adolescents.
IF 1.6 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2026-02-03 DOI: 10.1088/2057-1976/ae3b45
Jason Leung, Ledycnarf J Holanda, Laura Wheeler, Tom Chau

In-ear electroencephalography (EEG) systems offer several practical advantages over scalp-based EEG systems for non-invasive brain-computer interface (BCI) applications. However, the difficulty in fabricating in-ear EEG systems can limit their accessibility for BCI use cases. In this study, we developed a portable, low-cost wireless in-ear EEG device using commercially available components. In-ear EEG signals (referenced to left mastoid) from 5 adolescent participants were compared to scalp-EEG collected simultaneously during an alpha modulation task, various artifact induction tasks, and an auditory word-streaming BCI paradigm. Spectral analysis confirmed that the proposed in-ear EEG system could capture significantly increased alpha activity during eyes-closed relaxation in 3 of 5 participants, with a signal-to-noise ratio of 2.34 across all participants. In-ear EEG signals were most susceptible to horizontal head movement, coughing and vocalization artifacts but were relatively insensitive to ocular artifacts such as blinking. For the auditory streaming paradigm, the classifier decoded the presented stimuli from in-ear EEG signals only in 1 of 5 participants. Classification of the attended stream did not exceed chance levels. Contrast plots showing the difference between attended and unattended streams revealed reduced amplitudes of in-ear EEG responses relative to scalp-EEG responses. Hardware modifications are needed to amplify in-ear signals and measure electrode-skin impedances to improve the viability of in-ear EEG for BCI applications.
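One common way to quantify the alpha-modulation effect reported above is the ratio of alpha-band (8-12 Hz) power between eyes-closed and eyes-open segments. The sketch below is illustrative only: `alpha_snr` and the synthetic signals are hypothetical, and the paper's exact SNR definition is not reproduced here.

```python
import numpy as np
from scipy.signal import welch

def alpha_snr(eeg_closed, eeg_open, fs):
    """Ratio of 8-12 Hz band power between eyes-closed and eyes-open EEG."""
    def alpha_power(x):
        f, pxx = welch(x, fs=fs, nperseg=fs * 2)  # Welch PSD estimate
        band = (f >= 8) & (f <= 12)
        return pxx[band].sum()
    return alpha_power(eeg_closed) / alpha_power(eeg_open)

# Synthetic example: the eyes-closed segment carries a 10 Hz rhythm.
fs = 250
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
open_seg = rng.normal(0, 1, t.size)
closed_seg = open_seg + 2.0 * np.sin(2 * np.pi * 10 * t)
print(alpha_snr(closed_seg, open_seg, fs) > 1.0)  # True
```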

Citations: 0
iMCN: information compression-based multimodal confidence-guided fusion network for cancer survival prediction.
IF 1.6 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2026-02-03 DOI: 10.1088/2057-1976/ae3b46
Chaoyi Lyu, Lu Zhao, Yuan Xie, Wangyuan Zhao, Yufu Zhou, Hua Nong Ting, Puming Zhang, Jun Zhao

The rapid development of deep learning-based computational pathology and genomics has demonstrated the significant promise of effectively integrating whole slide images (WSIs) and genomic data for cancer survival prediction. However, the substantial heterogeneity between pathological and genomic features makes exploring complex cross-modal relationships and constructing comprehensive patient representations challenging. To address this, we propose the Information Compression-based Multimodal Confidence-guided Fusion Network (iMCN). The framework is built around two key modules. First, the Adaptive Pathology Information Compression (APIC) module employs learnable information centers to dynamically cluster image regions, removing redundant information while maintaining discriminative survival-related patterns. Second, the Confidence-guided Multimodal Fusion (CMF) module utilizes a learned sub-network to estimate the confidence of each modality's representation, allowing for dynamic weighted fusion that prioritizes the most reliable features in each case. Evaluated on the TCGA-LUAD and TCGA-BRCA cohorts, iMCN achieved average concordance index (C-index) values of 0.691 and 0.740, respectively, outperforming existing state-of-the-art methods by an absolute improvement of 1.65%. Qualitatively, the model generates interpretable heatmaps that localize high-association regions between specific morphological structures (e.g., tumor cell nests) and functional genomic pathways (e.g., oncogenesis), offering biological insights into genomic-pathologic linkages. In conclusion, iMCN significantly advances multimodal survival analysis by introducing a principled framework for information compression and confidence-based fusion. In addition, correlation analyses reveal that tissue heterogeneity influences optimal retention rates differently across cancer types, with higher-heterogeneity tumors (e.g., LUAD) benefiting more from aggressive information compression. Beyond its predictive performance, the model's ability to elucidate the interplay between tissue morphology and molecular biology enhances its value as a tool for translational cancer research.
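The confidence-guided fusion idea, weighting each modality's embedding by a normalized confidence score, can be sketched as below. This is a simplified illustration: in iMCN the confidences come from a learned sub-network, whereas here they are supplied directly, and `confidence_fusion` and the toy embeddings are hypothetical.

```python
import numpy as np

def confidence_fusion(embeddings, confidences):
    """Softmax-normalize per-modality confidence scores and return the
    confidence-weighted sum of the modality embeddings."""
    c = np.asarray(confidences, dtype=float)
    w = np.exp(c - c.max())  # numerically stable softmax
    w /= w.sum()
    return sum(wi * e for wi, e in zip(w, np.asarray(embeddings, dtype=float)))

# Pathology vs genomics embeddings; pathology is judged more reliable here.
path_emb, gene_emb = np.array([1.0, 0.0]), np.array([0.0, 1.0])
fused = confidence_fusion([path_emb, gene_emb], confidences=[2.0, 0.0])
print(fused)  # weighted toward the pathology embedding
```

Because the weights are recomputed per case, the fusion can lean on pathology for one patient and genomics for another, which is the behavior the abstract describes.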

Citations: 0
CCE-Net: A Lightweight Context Contrast Enhancement Network and Its Application in Medical Image Segmentation.
IF 1.6 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2026-02-03 DOI: 10.1088/2057-1976/ae4108
Xiaojing Hou, Yonghong Wu

Efficient and accurate image segmentation models play a vital role in medical image segmentation; however, the high computational cost of traditional models limits clinical deployment. Building on pyramid vision transformers and convolutional neural networks, this paper proposes a lightweight Context Contrast Enhancement Network (CCE-Net) that ensures efficient inference and achieves accurate segmentation through a contextual feature synergy mechanism and a feature contrast enhancement strategy. The Local Context Fusion Enhancement module obtains more specific local detail information through cross-layer context fusion and bridges the semantic gap between the encoder and decoder. The Deep Feature Multi-scale Extraction module fully extracts comprehensive information about the deepest features in the model's bottleneck layer and provides more accurate global contextual features for the decoder. The Detail Contrast Enhancement Decoder module effectively addresses the inherent problems of missing image details and blurred edges through adaptive dual-branch feature fusion and frequency-domain contrast enhancement operations. Experiments show that CCE-Net requires only 5.40M parameters and 0.80G FLOPs (37%-62% fewer parameters than mainstream models) while achieving average Dice coefficients of 82.25% on Synapse and 91.88% on ACDC, promoting the transition of lightweight medical AI models from laboratory research to clinical practice.
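The Dice coefficient used to evaluate CCE-Net is the standard overlap measure 2|A∩B| / (|A|+|B|) for binary masks. A minimal sketch, with a hypothetical `dice` helper and toy masks:

```python
import numpy as np

def dice(pred, target, eps=1e-7):
    """Dice coefficient 2|A∩B| / (|A|+|B|) for binary segmentation masks;
    eps guards against division by zero when both masks are empty."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

# Half-overlapping masks: intersection 2, sizes 4 and 4 -> 2*2/(4+4) = 0.5
a = np.array([[1, 1, 1, 1], [0, 0, 0, 0]])
b = np.array([[0, 0, 1, 1], [1, 1, 0, 0]])
print(round(dice(a, b), 3))  # 0.5
```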

Citations: 0
Monte Carlo derivation of beam quality correction factors in proton beams: a comparison of Geant4 versions.
IF 1.6 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2026-02-03 DOI: 10.1088/2057-1976/ae3571
Guillaume Houyoux, Kilian-Simon Baumann, Nick Reynaert

Objective. In the revised version of the TRS-398 Code of Practice (CoP), Monte Carlo (MC) results were added to existing experimental data to derive the recommended beam quality correction factors (kQ) for ionisation chambers in proton beams. While part of these results were obtained from versions v10.3 and v10.4 of the Geant4 simulation tool, this paper demonstrates that the use of a more recent version, such as v11.2, can affect the value of the kQ factors. Approach. The chamber-specific proton contributions (fQ) of the kQ factors were derived for four ionisation chambers using two different versions of the code, namely Geant4-v10.3 and Geant4-v11.2. A comparison of the total absorbed dose values is performed, as well as a comparison of the dose contributions from primary and secondary particles. Main results. Larger absorbed dose values per incident particle were obtained with Geant4-v11.2 than with Geant4-v10.3, especially for dose-to-air at high proton beam energies between 150 MeV and 250 MeV, leading to deviations in the kQ values of up to 1%. These deviations are mainly due to a change in the physics of secondary helium ions, for which the deviations between the Geant4 versions are most pronounced within the entrance window or the shell of the ionisation chambers. Significance. Although significant deviations in the MC-calculated fQ values were observed between the two Geant4 versions, the dominant uncertainty of the Wair values currently allows agreement to be achieved at the kQ level. As these values also agree with the current data presented in the TRS-398 CoP, it is not possible at the moment to discriminate between Geant4-v10.3 and Geant4-v11.2, which are therefore both suitable for kQ calculation.
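How a version-dependent fQ propagates into kQ can be illustrated with the simplified TRS-398-style relation kQ = (fQ,proton · Wair,proton) / (fQ,Co60 · Wair,Co60). The sketch below is illustrative only: the fQ numbers are placeholders, not values from the paper, and the Wair values are the commonly quoted TRS-398 figures (34.44 J/C for protons, 33.97 J/C for Co-60).

```python
def k_q(f_q_proton, f_q_co60, w_air_proton=34.44, w_air_co60=33.97):
    """Beam quality correction factor from MC dose-ratio factors fQ = D_w/D_ch
    and the mean energy to create an ion pair in air (Wair, in J/C)."""
    return (f_q_proton * w_air_proton) / (f_q_co60 * w_air_co60)

f_co60 = 1.100                       # placeholder fQ in the Co-60 reference beam
k_v103 = k_q(1.120, f_co60)          # placeholder fQ from Geant4-v10.3
k_v112 = k_q(1.120 * 1.01, f_co60)   # a 1% larger dose ratio, as with Geant4-v11.2
print(round(100 * (k_v112 / k_v103 - 1), 2))  # 1.0 (percent shift in kQ)
```

The point of the sketch is that a relative shift in the proton-beam fQ passes straight through to kQ, which is why the reported dose deviations translate into kQ deviations of up to 1%.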

目的:在修订的TRS-398操作规范(CoP)中,将蒙特卡罗(MC)结果添加到现有的实验数据中,以导出质子束电离室的推荐光束质量校正因子(kQ)。虽然这些结果的一部分是从Geant4模拟工具的v10.3和v10.4版本中获得的,但本文表明,使用更新的版本(如v11.2)可能会影响kQ因子的值。方法:使用两个不同版本的代码,即Geant4-v.10.3和Geant4-v11.2,推导了四个电离室的kQ因子的室特异性质子贡献(fQ)。进行了总吸收剂量值的比较,以及初级和次级粒子的剂量贡献的比较。主要结果:与Geant4-v10.3相比,使用Geant4-v11.2得到的每个入射粒子的吸收剂量值更大,特别是在高质子束能量在150 MeV和250 MeV之间的剂量对空气,导致kQ值偏差高达1%。这些偏差主要是由于二次氦离子的物理性质的变化,其中Geant4版本之间的显著偏差在电离室的入口窗口或外壳内最为严格。意义:尽管在两个Geant4版本中观察到MC计算的fQ值存在显著偏差,但Wair值的主要不确定性目前允许在kQ水平上实现一致。由于这些值也与TRS-398 CoP中提供的当前数据一致,因此目前无法区分Geant4-v10.3和Geant4-v11.2,因此它们都适用于kQ计算。
{"title":"Monte Carlo derivation of beam quality correction factors in proton beams: a comparison of Geant4 versions.","authors":"Guillaume Houyoux, Kilian-Simon Baumann, Nick Reynaert","doi":"10.1088/2057-1976/ae3571","DOIUrl":"10.1088/2057-1976/ae3571","url":null,"abstract":"<p><p><i>Objective.</i>In the revised version of the TRS-398 Code of Practice (CoP), Monte Carlo (MC) results were added to existing experimental data to derive the recommended beam quality correction factors (<i>k</i><sub><i>Q</i></sub>) for ionisation chambers in proton beams. While part of these results were obtained from versions v10.3 and v10.4 of the Geant4 simulation tool, this paper demonstrates that the use of a more recent version, such as v11.2, can affect the value of the<i>k</i><sub><i>Q</i></sub>factors.<i>Approach.</i>The chamber-specific proton contributions (<i>f</i><sub><i>Q</i></sub>) of the<i>k</i><sub><i>Q</i></sub>factors were derived for four ionisation chambers using two different versions of the code, namely Geant4-v.10.3 and Geant4-v11.2. A comparison of the total absorbed dose values is performed, as well as the comparison of the dose contribution for primary and secondary particles.<i>Main results.</i>Larger absorbed dose values per incident particle were derived with Geant4-v11.2 compared to Geant4-v10.3 especially for dose-to-air at high proton beam energies between 150 MeV and 250 MeV, leading to deviations in the<i>k</i><sub><i>Q</i></sub>values up to 1%. 
These deviations are mainly due to a change in the physics of secondary helium ions for which the significant deviations between the Geant4 versions is the most stringent within the entrance window or the shell of the ionisation chambers.<i>Significance.</i>Although significant deviations in the MC calculated<i>f</i><sub><i>Q</i></sub>values were observed between the two Geant4 versions, the dominant uncertainty of the<i>W</i><sub>air</sub>values currently allows to achieve the agreement at the<i>k</i><sub><i>Q</i></sub>level. As these values also agree with the current data presented in the TRS-398 CoP, it is not possible at the moment to discriminate between Geant4-v10.3 and Geant4-v11.2, which are therefore both suitable for<i>k</i><sub><i>Q</i></sub>calculation.</p>","PeriodicalId":8896,"journal":{"name":"Biomedical Physics & Engineering Express","volume":" ","pages":""},"PeriodicalIF":1.6,"publicationDate":"2026-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145931958","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Enhanced x-ray knee osteoarthritis classification: a multi-classification approach using MambaOut and latent diffusion model. 增强x线膝关节骨关节炎分类:使用MambaOut和潜伏扩散模型的多分类方法。
IF 1.6 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2026-02-03 DOI: 10.1088/2057-1976/ae3b43
Xin Wang, Yupeng Fu, Xiaodong Cai, Huimin Lu, Yuncong Feng, Rui Xu

Knee Osteoarthritis (KOA) is a prevalent degenerative joint disease affecting millions worldwide. Accurate classification of KOA severity is crucial for effective diagnosis and treatment planning. This study introduces a novel multi-classification algorithm for x-ray KOA based on MambaOut and Latent Diffusion Model (LDM). MambaOut, an emerging network architecture, achieves superior classification performance compared to fine-tuning the mainstream Convolutional Neural Networks (CNNs) for KOA classification. To address sample imbalance across KL grades, we propose an AI-generated model using LDM to synthesize new data. This approach enhances minority-class samples by optimizing the autoencoder's loss function and incorporating pathological labels into the LDM framework. Our approach achieves an average accuracy of 86.3%, an average precision of 85.3%, an F1 score of 0.855, and a mean absolute error reduced to 14.7% in the four-classification task, outperforming recent advanced methods. This study not only advances KOA classification techniques but also highlights the potential of integrating advanced neural architectures with generative models for medical image analysis.

膝骨关节炎(KOA)是一种普遍的退行性关节疾病,影响全球数百万人。准确的KOA严重程度分级对于有效的诊断和治疗计划至关重要。提出了一种基于MambaOut和潜在扩散模型(Latent Diffusion Model, LDM)的x射线KOA多分类算法。MambaOut是一种新兴的网络架构,相比于对主流卷积神经网络(cnn)进行KOA分类的微调,它取得了更好的分类性能。为了解决跨吉隆坡等级的样本不平衡问题,我们提出了一个使用LDM合成新数据的人工智能生成模型。这种方法通过优化自编码器的损失函数和将病理标签纳入LDM框架来增强少数类样本。在四类分类任务中,我们的方法平均准确率为86.3%,平均精密度为85.3%,F1得分为0.855,平均绝对误差降至14.7%,优于目前的先进方法。这项研究不仅推进了KOA分类技术,而且强调了将先进的神经结构与生成模型集成到医学图像分析中的潜力。
{"title":"Enhanced x-ray knee osteoarthritis classification: a multi-classification approach using MambaOut and latent diffusion model.","authors":"Xin Wang, Yupeng Fu, Xiaodong Cai, Huimin Lu, Yuncong Feng, Rui Xu","doi":"10.1088/2057-1976/ae3b43","DOIUrl":"10.1088/2057-1976/ae3b43","url":null,"abstract":"<p><p>Knee Osteoarthritis (KOA) is a prevalent degenerative joint disease affecting millions worldwide. Accurate classification of KOA severity is crucial for effective diagnosis and treatment planning. This study introduces a novel multi-classification algorithm for x-ray KOA based on MambaOut and Latent Diffusion Model (LDM). MambaOut, an emerging network architecture, achieves superior classification performance compared to fine-tuning the mainstream Convolutional Neural Networks (CNNs) for KOA classification. To address sample imbalance across KL grades, we propose an AI-generated model using LDM to synthesize new data. This approach enhances minority-class samples by optimizing the autoencoder's loss function and incorporating pathological labels into the LDM framework. Our approach achieves an average accuracy of 86.3%, an average precision of 85.3%, an F1 score of 0.855, and a mean absolute error reduced to 14.7% in the four-classification task, outperforming recent advanced methods. 
This study not only advances KOA classification techniques but also highlights the potential of integrating advanced neural architectures with generative models for medical image analysis.</p>","PeriodicalId":8896,"journal":{"name":"Biomedical Physics & Engineering Express","volume":" ","pages":""},"PeriodicalIF":1.6,"publicationDate":"2026-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146017388","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Quantifying respiratory motion effects on dosimetry in hepatic radioembolization using experimental phantom measurements. 用实验性幻像测量定量呼吸运动对肝放射栓塞剂量学的影响。
IF 1.6 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2026-02-02 DOI: 10.1088/2057-1976/ae4030
Josephine La Macchia, Alessandro Desy, Claire Cohalan, Taehyung Peter Kim, Shirin A Enger

Objective: In radioembolization, SPECT/CT planning scans are often acquired during free breathing, which can introduce motion-related blurring and misregistration between SPECT and CT, leading to dosimetric inaccuracies. This study quantifies the impact of respiratory motion on absorbed dose metrics-tumor-to-normal tissue (T/N) ratio, dose volume histograms, and mean dose-using several voxel-based dosimetry methods. This study supports standardization efforts through experimental measurements using a motion-enabled phantom. Approach: Motion effects in pre-therapy imaging were evaluated using a Jaszczak phantom filled with technetium-99m, simulating activity in lesion and background volumes. SPECT/CT scans were acquired with varying cranial-caudal motion amplitudes from the central position ±0, ±5, ±6.5, ±10, ±12.5, and ±15 mm. The impact of motion-related misregistration during scanning on dosimetry was also examined. Five dosimetry methods, including Monte Carlo simulation with uniform reference activity (MC REF), Monte Carlo simulation based on SPECT images (MC SPECT), Simplicity™ (Boston Scientific), local deposition method, and voxel-S-value convolution. Absorbed dose metrics of mean dose, dose volume histogram dosimetric indices (D50, D70, D90), and T/N ratio were obtained to quantify motion effects and evaluate clinical suitability. Main results: Mean absorbed dose values for the lesion and background were consistent across methods within uncertainties, though discrepancies were noted in non-lesion low-density regions. Respiratory motion reduced lesion dose by 16-25% and increased background dose by 13-32%, although the latter represented only a 1-2 Gy change. These shifts led to a 28-43% decrease in the T/N ratio at ±12.5 mm motion amplitude. Misregistration due to motion also significantly impacted dosimetric accuracy. 
Significance: The study demonstrated agreement between five dosimetry methods and revealed that respiratory motion can lead to substantial underestimation of the lesion dose and T/N ratio. Since T/N ratio is critical for patient selection and activity prescription, accounting for respiratory motion is essential for accurate radioembolization dosimetry.

目的:在放射栓塞治疗中,通常在自由呼吸期间获得SPECT/CT计划扫描,这可能导致SPECT和CT之间的运动相关模糊和错配,导致剂量测定不准确。本研究量化了呼吸运动对吸收剂量指标的影响——肿瘤与正常组织(T/N)比、剂量体积直方图和平均剂量——使用几种基于体素的剂量测定方法。本研究通过使用运动激活模体的实验测量来支持标准化工作。方法:使用充满锝-99m的Jaszczak模体来评估治疗前成像中的运动效果,模拟病变和背景体积的活动。从中心位置±0,±5,±6.5,±10,±12.5和±15 mm,获得不同颅尾运动幅度的SPECT/CT扫描。扫描过程中运动相关的错配对剂量学的影响也进行了研究。五种剂量测定方法,包括具有均匀参考活度的蒙特卡罗模拟(MC REF),基于SPECT图像的蒙特卡罗模拟(MC SPECT), Simplicity™(Boston Scientific),局部沉积法和体素- s值卷积。获得了平均剂量、剂量体积直方图剂量指标(D50、D70、D90)和T/N比的吸收剂量指标,以量化运动效应和评估临床适用性。主要结果:在不确定范围内,不同方法的病变和背景平均吸收剂量值是一致的,尽管在非病变低密度区域存在差异。呼吸运动使病变剂量减少16-25%,使本底剂量增加13-32%,尽管后者仅代表1-2 Gy的变化。在±12.5 mm运动幅度下,这些变化导致T/N比下降28-43%。运动引起的配准错误也显著影响剂量测定的准确性。意义:该研究证明了五种剂量测定方法之间的一致性,并揭示了呼吸运动可导致病变剂量和T/N比的严重低估。由于T/N比率对患者选择和活动处方至关重要,因此考虑呼吸运动对于准确的放射栓塞剂量测定至关重要。
{"title":"Quantifying respiratory motion effects on dosimetry in hepatic radioembolization using experimental phantom measurements.","authors":"Josephine La Macchia, Alessandro Desy, Claire Cohalan, Taehyung Peter Kim, Shirin A Enger","doi":"10.1088/2057-1976/ae4030","DOIUrl":"https://doi.org/10.1088/2057-1976/ae4030","url":null,"abstract":"<p><strong>Objective: </strong>In radioembolization, SPECT/CT planning scans are often acquired during free breathing, which can introduce motion-related blurring and misregistration between SPECT and CT, leading to dosimetric inaccuracies. This study quantifies the impact of respiratory motion on absorbed dose metrics-tumor-to-normal tissue (T/N) ratio, dose volume histograms, and mean dose-using several voxel-based dosimetry methods. This study supports standardization efforts through experimental measurements using a motion-enabled phantom.&#xD;&#xD;Approach: Motion effects in pre-therapy imaging were evaluated using a Jaszczak phantom filled with technetium-99m, simulating activity in lesion and background volumes. SPECT/CT scans were acquired with varying cranial-caudal motion amplitudes from the central position ±0, ±5, ±6.5, ±10, ±12.5, and ±15 mm. The impact of motion-related misregistration during scanning on dosimetry was also examined. Five dosimetry methods, including Monte Carlo simulation with uniform reference activity (MC REF), Monte Carlo simulation based on SPECT images (MC SPECT), Simplicity™ (Boston Scientific), local deposition method, and voxel-S-value convolution. Absorbed dose metrics of mean dose, dose volume histogram dosimetric indices (D50, D70, D90), and T/N ratio were obtained to quantify motion effects and evaluate clinical suitability.&#xD;&#xD;Main results: Mean absorbed dose values for the lesion and background were consistent across methods within uncertainties, though discrepancies were noted in non-lesion low-density regions. 
Respiratory motion reduced lesion dose by 16-25% and increased background dose by 13-32%, although the latter represented only a 1-2 Gy change. These shifts led to a 28-43% decrease in the T/N ratio at ±12.5 mm motion amplitude. Misregistration due to motion also significantly impacted dosimetric accuracy.&#xD;Significance: The study demonstrated agreement between five dosimetry methods and revealed that respiratory motion can lead to substantial underestimation of the lesion dose and T/N ratio. Since T/N ratio is critical for patient selection and activity prescription, accounting for respiratory motion is essential for accurate radioembolization dosimetry.</p>","PeriodicalId":8896,"journal":{"name":"Biomedical Physics & Engineering Express","volume":" ","pages":""},"PeriodicalIF":1.6,"publicationDate":"2026-02-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146103673","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Temporal and comorbidity-aware representation of longitudinal patient trajectories from electronic health records. 来自电子健康记录的纵向患者轨迹的时间和共病意识表征。
IF 1.6 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2026-01-30 DOI: 10.1088/2057-1976/ae38de
M Sreenivasan, S Madhavendranath, Anu Mary Chacko

Electronic health records (EHRs) capture longitudinal multi-visit patient journeys but are difficult to analyze due to temporal irregularity, multimorbidity, and heterogeneous coding. This study introduces a temporal and comorbidity-aware trajectory representation that restructures admissions into ordered symbolic visit states while preserving diagnostic progression, secondary comorbidities, procedure categories, demographics, outcomes, and inter-visit intervals. These symbolic states are subsequently encoded as fixed-length numerical vectors suitable for computational analysis. Validation was conducted in two stages: Stage I assessed construction fidelity using coverage metrics, comorbidity preservation, diagnostic transition structures, and exact inter-visit gap encoding and Stage II assessed analytical utility through clustering experiments using different clustering approacheslike sequence similarity, Gaussian Mixture Models (GMM), and a temporal LSTM autoencoder (TS-LSTM). Proof of concept was done by encoding subset of patient cohorts from the MIMIC-IV database consisting of 2,280 patients with 8,849 admissions having complete primary diagnosis coverage and near-complete secondary coverage. Stage 1 assessment consisting of cohort-level coverage metrics confirmed that the transformation preserved essential clinical information and key properties of longitudinal EHRs. In Stage 2, clustering experiments validated the analytical utility of the representation across sequence-based, Gaussian mixture, and temporal LSTM autoencoder approaches. Ablation studies further demonstrated that both multimorbidity depth and inter-visit gap encoding are critical to maintaining cluster separability and temporal fidelity. The findings show that explicit encoding of comorbidity and timing improves interpretability and subgroup coherence. 
Although evaluated on a single dataset, the use of standardised ICD-10 EHR structure supports the assumption that the framework can generalise across healthcare settings; future work will incorporate multimodal data and external validation.

电子健康记录(EHRs)捕获纵向多次就诊的患者旅程,但由于时间不规则、多病症和异构编码而难以分析。本研究引入了一种时间和共病感知轨迹表示,将入院重新构建为有序的象征性就诊状态,同时保留诊断进展、继发共病、手术类别、人口统计学、结果和两次就诊间隔。这些符号状态随后被编码为适合于计算分析的定长数值向量。验证分两个阶段进行:第一阶段使用覆盖度量、共病保存、诊断过渡结构和精确访问间隙编码来评估构建保真度;第二阶段使用不同的聚类方法,如序列相似性、高斯混合模型(GMM)和时序LSTM自动编码器(TS-LSTM),通过聚类实验来评估分析实用性。概念验证是通过对来自MIMIC-IV数据库的患者队列子集进行编码来完成的,该数据库由2,280名患者组成,其中8,849名入院患者具有完整的初级诊断覆盖率和近乎完整的次要诊断覆盖率。由队列覆盖指标组成的第一阶段评估证实,这种转变保留了纵向电子病历的基本临床信息和关键属性。在第二阶段,聚类实验验证了跨序列表示、高斯混合和时间LSTM自编码器方法的分析效用。消融研究进一步表明,多病深度和访问间隙编码对于保持聚类可分离性和时间保真度至关重要。研究结果表明,共病和时间的显式编码提高了可解释性和亚组一致性。尽管在单一数据集上进行了评估,但使用标准化ICD-10 EHR结构支持了该框架可以在整个医疗保健环境中推广的假设;未来的工作将包括多模态数据和外部验证。
{"title":"Temporal and comorbidity-aware representation of longitudinal patient trajectories from electronic health records.","authors":"M Sreenivasan, S Madhavendranath, Anu Mary Chacko","doi":"10.1088/2057-1976/ae38de","DOIUrl":"10.1088/2057-1976/ae38de","url":null,"abstract":"<p><p>Electronic health records (EHRs) capture longitudinal multi-visit patient journeys but are difficult to analyze due to temporal irregularity, multimorbidity, and heterogeneous coding. This study introduces a temporal and comorbidity-aware trajectory representation that restructures admissions into ordered symbolic visit states while preserving diagnostic progression, secondary comorbidities, procedure categories, demographics, outcomes, and inter-visit intervals. These symbolic states are subsequently encoded as fixed-length numerical vectors suitable for computational analysis. Validation was conducted in two stages: Stage I assessed construction fidelity using coverage metrics, comorbidity preservation, diagnostic transition structures, and exact inter-visit gap encoding and Stage II assessed analytical utility through clustering experiments using different clustering approacheslike sequence similarity, Gaussian Mixture Models (GMM), and a temporal LSTM autoencoder (TS-LSTM). Proof of concept was done by encoding subset of patient cohorts from the MIMIC-IV database consisting of 2,280 patients with 8,849 admissions having complete primary diagnosis coverage and near-complete secondary coverage. Stage 1 assessment consisting of cohort-level coverage metrics confirmed that the transformation preserved essential clinical information and key properties of longitudinal EHRs. In Stage 2, clustering experiments validated the analytical utility of the representation across sequence-based, Gaussian mixture, and temporal LSTM autoencoder approaches. 
Ablation studies further demonstrated that both multimorbidity depth and inter-visit gap encoding are critical to maintaining cluster separability and temporal fidelity. The findings show that explicit encoding of comorbidity and timing improves interpretability and subgroup coherence. Although evaluated on a single dataset, the use of standardised ICD-10 EHR structure supports the assumption that the framework can generalise across healthcare settings; future work will incorporate multimodal data and external validation.</p>","PeriodicalId":8896,"journal":{"name":"Biomedical Physics & Engineering Express","volume":" ","pages":""},"PeriodicalIF":1.6,"publicationDate":"2026-01-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145984350","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
期刊
Biomedical Physics & Engineering Express
全部 Acc. Chem. Res. ACS Applied Bio Materials ACS Appl. Electron. Mater. ACS Appl. Energy Mater. ACS Appl. Mater. Interfaces ACS Appl. Nano Mater. ACS Appl. Polym. Mater. ACS BIOMATER-SCI ENG ACS Catal. ACS Cent. Sci. ACS Chem. Biol. ACS Chemical Health & Safety ACS Chem. Neurosci. ACS Comb. Sci. ACS Earth Space Chem. ACS Energy Lett. ACS Infect. Dis. ACS Macro Lett. ACS Mater. Lett. ACS Med. Chem. Lett. ACS Nano ACS Omega ACS Photonics ACS Sens. ACS Sustainable Chem. Eng. ACS Synth. Biol. Anal. Chem. BIOCHEMISTRY-US Bioconjugate Chem. BIOMACROMOLECULES Chem. Res. Toxicol. Chem. Rev. Chem. Mater. CRYST GROWTH DES ENERG FUEL Environ. Sci. Technol. Environ. Sci. Technol. Lett. Eur. J. Inorg. Chem. IND ENG CHEM RES Inorg. Chem. J. Agric. Food. Chem. J. Chem. Eng. Data J. Chem. Educ. J. Chem. Inf. Model. J. Chem. Theory Comput. J. Med. Chem. J. Nat. Prod. J PROTEOME RES J. Am. Chem. Soc. LANGMUIR MACROMOLECULES Mol. Pharmaceutics Nano Lett. Org. Lett. ORG PROCESS RES DEV ORGANOMETALLICS J. Org. Chem. J. Phys. Chem. J. Phys. Chem. A J. Phys. Chem. B J. Phys. Chem. C J. Phys. Chem. Lett. Analyst Anal. Methods Biomater. Sci. Catal. Sci. Technol. Chem. Commun. Chem. Soc. Rev. CHEM EDUC RES PRACT CRYSTENGCOMM Dalton Trans. Energy Environ. Sci. ENVIRON SCI-NANO ENVIRON SCI-PROC IMP ENVIRON SCI-WAT RES Faraday Discuss. Food Funct. Green Chem. Inorg. Chem. Front. Integr. Biol. J. Anal. At. Spectrom. J. Mater. Chem. A J. Mater. Chem. B J. Mater. Chem. C Lab Chip Mater. Chem. Front. Mater. Horiz. MEDCHEMCOMM Metallomics Mol. Biosyst. Mol. Syst. Des. Eng. Nanoscale Nanoscale Horiz. Nat. Prod. Rep. New J. Chem. Org. Biomol. Chem. Org. Chem. Front. PHOTOCH PHOTOBIO SCI PCCP Polym. Chem.
×
引用
GB/T 7714-2015
复制
MLA
复制
APA
复制
导出至
BibTeX EndNote RefMan NoteFirst NoteExpress
×
0
微信
客服QQ
Book学术公众号 扫码关注我们
反馈
×
意见反馈
请填写您的意见或建议
请填写您的手机或邮箱
×
提示
您的信息不完整,为了账户安全,请先补充。
现在去补充
×
提示
您因"违规操作"
具体请查看互助需知
我知道了
×
提示
现在去查看 取消
×
提示
确定
Book学术官方微信
Book学术文献互助
Book学术文献互助群
群 号:604180095
Book学术
文献互助 智能选刊 最新文献 互助须知 联系我们:info@booksci.cn
Book学术提供免费学术资源搜索服务,方便国内外学者检索中英文文献。致力于提供最便捷和优质的服务体验。
Copyright © 2023 Book学术 All rights reserved.
ghs 京公网安备 11010802042870号 京ICP备2023020795号-1