
Latest publications from Biomedical Physics & Engineering Express

Wireless in-ear EEG system for auditory brain-computer interface applications in adolescents.
IF 1.6 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2026-02-03 DOI: 10.1088/2057-1976/ae3b45
Jason Leung, Ledycnarf J Holanda, Laura Wheeler, Tom Chau

In-ear electroencephalography (EEG) systems offer several practical advantages over scalp-based EEG systems for non-invasive brain-computer interface (BCI) applications. However, the difficulty in fabricating in-ear EEG systems can limit their accessibility for BCI use cases. In this study, we developed a portable, low-cost wireless in-ear EEG device using commercially available components. In-ear EEG signals (referenced to left mastoid) from 5 adolescent participants were compared to scalp-EEG collected simultaneously during an alpha modulation task, various artifact induction tasks, and an auditory word-streaming BCI paradigm. Spectral analysis confirmed that the proposed in-ear EEG system could capture significantly increased alpha activity during eyes-closed relaxation in 3 of 5 participants, with a signal-to-noise ratio of 2.34 across all participants. In-ear EEG signals were most susceptible to horizontal head movement, coughing and vocalization artifacts but were relatively insensitive to ocular artifacts such as blinking. For the auditory streaming paradigm, the classifier decoded the presented stimuli from in-ear EEG signals only in 1 of 5 participants. Classification of the attended stream did not exceed chance levels. Contrast plots showing the difference between attended and unattended streams revealed reduced amplitudes of in-ear EEG responses relative to scalp-EEG responses. Hardware modifications are needed to amplify in-ear signals and measure electrode-skin impedances to improve the viability of in-ear EEG for BCI applications.
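The alpha-modulation validation above boils down to comparing alpha-band (roughly 8-12 Hz) power between eyes-closed and eyes-open recordings. A minimal sketch of such a band-power ratio using NumPy (the paper's actual pipeline, window lengths, and band edges are not specified here, so these are assumptions):

```python
import numpy as np

def band_power(signal, fs, f_lo, f_hi):
    """Summed periodogram power in the [f_lo, f_hi] Hz band."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    return float(psd[mask].sum())

def alpha_snr(eyes_closed, eyes_open, fs=250.0):
    """Ratio of 8-12 Hz power, eyes-closed over eyes-open."""
    return band_power(eyes_closed, fs, 8, 12) / band_power(eyes_open, fs, 8, 12)

# Synthetic demo: a 10 Hz rhythm present only in the eyes-closed segment.
rng = np.random.default_rng(0)
t = np.arange(0, 4, 1.0 / 250.0)
closed = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
opened = 0.5 * rng.standard_normal(t.size)
print(alpha_snr(closed, opened))  # well above 1 for the synthetic alpha burst
```

A ratio near 1 would indicate no detectable alpha modulation, which is why only 3 of 5 participants showed a significant effect in-ear.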

Citations: 0
iMCN: information compression-based multimodal confidence-guided fusion network for cancer survival prediction.
IF 1.6 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2026-02-03 DOI: 10.1088/2057-1976/ae3b46
Chaoyi Lyu, Lu Zhao, Yuan Xie, Wangyuan Zhao, Yufu Zhou, Hua Nong Ting, Puming Zhang, Jun Zhao

The rapid development of deep learning-based computational pathology and genomics has demonstrated the significant promise of effectively integrating whole slide images (WSIs) and genomic data for cancer survival prediction. However, the substantial heterogeneity between pathological and genomic features makes exploring complex cross-modal relationships and constructing comprehensive patient representations challenging. To address this, we propose the Information Compression-based Multimodal Confidence-guided Fusion Network (iMCN). The framework is built around two key modules. First, the Adaptive Pathology Information Compression (APIC) module employs learnable information centers to dynamically cluster image regions, removing redundant information while maintaining discriminative survival-related patterns. Second, the Confidence-guided Multimodal Fusion (CMF) module utilizes a learned sub-network to estimate the confidence of each modality's representation, allowing dynamic weighted fusion that prioritizes the most reliable features in each case. Evaluated on the TCGA-LUAD and TCGA-BRCA cohorts, iMCN achieved average concordance index (C-index) values of 0.691 and 0.740, respectively, outperforming existing state-of-the-art methods by an absolute improvement of 1.65%. Qualitatively, the model generates interpretable heatmaps that localize high-association regions between specific morphological structures (e.g., tumor cell nests) and functional genomic pathways (e.g., oncogenesis), offering biological insights into genomic-pathologic linkages. In conclusion, iMCN significantly advances multimodal survival analysis by introducing a principled framework for information compression and confidence-based fusion. In addition, correlation analysis reveals that tissue heterogeneity influences optimal retention rates differently across cancer types, with higher-heterogeneity tumors (e.g., LUAD) benefiting more from aggressive information compression. Beyond its predictive performance, the model's ability to elucidate the interplay between tissue morphology and molecular biology enhances its value as a tool for translational cancer research.
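The concordance index reported above is the standard metric for survival prediction: the fraction of comparable patient pairs whose predicted risks are ordered consistently with their observed survival times. A pure-Python sketch of Harrell's C-index (illustrative, not the authors' evaluation code):

```python
from itertools import combinations

def concordance_index(times, events, risks):
    """Harrell's C-index: fraction of comparable pairs whose predicted
    risks order consistently with observed survival (higher risk means
    earlier event). `events[i]` is 1 if observed, 0 if censored."""
    concordant, comparable = 0.0, 0
    for i, j in combinations(range(len(times)), 2):
        if times[i] == times[j]:
            continue  # tied times skipped in this simplified version
        first, other = (i, j) if times[i] < times[j] else (j, i)
        if not events[first]:
            continue  # earlier subject censored: ordering is unknowable
        comparable += 1
        if risks[first] > risks[other]:
            concordant += 1.0
        elif risks[first] == risks[other]:
            concordant += 0.5
    return concordant / comparable

# Risks perfectly anti-ordered with survival time give C = 1.0.
print(concordance_index([2, 4, 6], [1, 1, 1], [0.9, 0.5, 0.1]))  # 1.0
```

A C-index of 0.5 corresponds to random ordering, so the reported 0.691 and 0.740 indicate meaningful risk discrimination.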

Citations: 0
CCE-Net: A Lightweight Context Contrast Enhancement Network and Its Application in Medical Image Segmentation.
IF 1.6 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2026-02-03 DOI: 10.1088/2057-1976/ae4108
Xiaojing Hou, Yonghong Wu

Efficient and accurate image segmentation models play a vital role in medical image segmentation; however, the high computational cost of traditional models limits clinical deployment. Based on pyramid vision transformers and convolutional neural networks, this paper proposes a lightweight Context Contrast Enhancement Network (CCE-Net) that ensures efficient inference and achieves accurate segmentation through a contextual feature synergy mechanism and a feature contrast enhancement strategy. The Local Context Fusion Enhancement module obtains more specific local detail information through cross-layer context fusion and bridges the semantic gap between the encoder and decoder. The Deep Feature Multi-scale Extraction module fully extracts comprehensive information from the deepest features in the bottleneck layer of the model and provides more accurate global contextual features for the decoder. The Detail Contrast Enhancement Decoder module effectively addresses the inherent problems of missing image details and blurred edges through adaptive dual-branch feature fusion and frequency-domain contrast enhancement operations. Experiments show that CCE-Net requires only 5.40M parameters and 0.80G FLOPs, with average Dice coefficients on the Synapse and ACDC datasets of 82.25% and 91.88%, respectively, and a parameter count 37%-62% lower than that of mainstream models, promoting the transition of lightweight medical AI models from laboratory research to clinical practice.
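The Dice coefficient used to evaluate CCE-Net measures overlap between a predicted and a ground-truth mask, 2|A∩B|/(|A|+|B|). A minimal NumPy sketch for binary masks (multi-organ evaluation would average this per class; the smoothing constant is a common convention, not taken from the paper):

```python
import numpy as np

def dice(pred, target, eps=1e-7):
    """Dice coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 0, 0], [0, 1, 1]])
print(round(dice(a, b), 3))  # 2*2 / (3+3) ≈ 0.667
```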

Citations: 0
Monte Carlo derivation of beam quality correction factors in proton beams: a comparison of Geant4 versions.
IF 1.6 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2026-02-03 DOI: 10.1088/2057-1976/ae3571
Guillaume Houyoux, Kilian-Simon Baumann, Nick Reynaert

Objective. In the revised version of the TRS-398 Code of Practice (CoP), Monte Carlo (MC) results were added to existing experimental data to derive the recommended beam quality correction factors (kQ) for ionisation chambers in proton beams. While part of these results were obtained from versions v10.3 and v10.4 of the Geant4 simulation tool, this paper demonstrates that the use of a more recent version, such as v11.2, can affect the value of the kQ factors. Approach. The chamber-specific proton contributions (fQ) of the kQ factors were derived for four ionisation chambers using two different versions of the code, namely Geant4-v10.3 and Geant4-v11.2. The total absorbed dose values are compared, as well as the dose contributions from primary and secondary particles. Main results. Larger absorbed dose values per incident particle were derived with Geant4-v11.2 compared to Geant4-v10.3, especially for dose-to-air at high proton beam energies between 150 MeV and 250 MeV, leading to deviations in the kQ values of up to 1%. These deviations are mainly due to a change in the physics of secondary helium ions, for which the deviation between the Geant4 versions is largest within the entrance window or the shell of the ionisation chambers. Significance. Although significant deviations in the MC-calculated fQ values were observed between the two Geant4 versions, the dominant uncertainty of the Wair values currently allows agreement to be achieved at the kQ level. As these values also agree with the current data presented in the TRS-398 CoP, it is not possible at the moment to discriminate between Geant4-v10.3 and Geant4-v11.2, which are therefore both suitable for kQ calculation.
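In the TRS-398 formalism referenced above, kQ combines the beam-quality dependence of Wair with the chamber-specific factor fQ, so a shift in fQ propagates directly into kQ. The sketch below assumes the factorisation kQ = (Wair,Q/Wair,Q0)·(fQ/fQ0) and uses made-up numbers purely to illustrate how a roughly 1% change in fQ between Geant4 versions yields a roughly 1% change in kQ:

```python
def k_q(f_q, f_q0, w_air_q, w_air_q0):
    """Beam quality correction factor, factorised (TRS-398 style) as the
    product of the Wair ratio and the chamber-specific f-factor ratio."""
    return (w_air_q / w_air_q0) * (f_q / f_q0)

def percent_dev(a, b):
    """Relative deviation of a from b, in percent."""
    return 100.0 * (a - b) / b

# Hypothetical numbers, only to illustrate the ~1% sensitivity quoted above:
kq_v103 = k_q(f_q=1.032, f_q0=0.990, w_air_q=34.23, w_air_q0=33.97)
kq_v112 = k_q(f_q=1.042, f_q0=0.990, w_air_q=34.23, w_air_q0=33.97)
print(round(percent_dev(kq_v112, kq_v103), 2))  # ~1% shift from a ~1% fQ change
```

Because the Wair ratio cancels in the version-to-version comparison, the kQ deviation tracks the fQ deviation directly; the Wair uncertainty instead sets how large a deviation is tolerable.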

Citations: 0
Enhanced x-ray knee osteoarthritis classification: a multi-classification approach using MambaOut and latent diffusion model.
IF 1.6 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2026-02-03 DOI: 10.1088/2057-1976/ae3b43
Xin Wang, Yupeng Fu, Xiaodong Cai, Huimin Lu, Yuncong Feng, Rui Xu

Knee Osteoarthritis (KOA) is a prevalent degenerative joint disease affecting millions worldwide. Accurate classification of KOA severity is crucial for effective diagnosis and treatment planning. This study introduces a novel multi-classification algorithm for x-ray KOA based on MambaOut and a Latent Diffusion Model (LDM). MambaOut, an emerging network architecture, achieves superior classification performance compared to fine-tuning mainstream Convolutional Neural Networks (CNNs) for KOA classification. To address sample imbalance across Kellgren-Lawrence (KL) grades, we propose an AI-based generative model that uses the LDM to synthesize new data. This approach augments minority-class samples by optimizing the autoencoder's loss function and incorporating pathological labels into the LDM framework. Our approach achieves an average accuracy of 86.3%, an average precision of 85.3%, an F1 score of 0.855, and a mean absolute error reduced to 14.7% on the four-class task, outperforming recent advanced methods. This study not only advances KOA classification techniques but also highlights the potential of integrating advanced neural architectures with generative models for medical image analysis.
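The reported figures (accuracy, average precision, mean absolute error) are standard multi-class metrics over the four KL grades. A small sketch of how such metrics can be computed (illustrative only; the macro averaging and integer label encoding are assumptions, not taken from the paper):

```python
import numpy as np

def multiclass_metrics(y_true, y_pred, n_classes=4):
    """Accuracy, macro-averaged precision, and mean absolute error for
    ordinal grade labels (KL-style grading treated as integers)."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    acc = float((y_true == y_pred).mean())
    precisions = []
    for c in range(n_classes):
        predicted_c = (y_pred == c)
        if predicted_c.any():  # precision undefined for never-predicted classes
            precisions.append(float((y_true[predicted_c] == c).mean()))
    macro_prec = sum(precisions) / len(precisions)
    mae = float(np.abs(y_true - y_pred).mean())
    return acc, macro_prec, mae

acc, prec, mae = multiclass_metrics([0, 1, 2, 3], [0, 1, 2, 2])
print(acc, mae)  # 0.75 0.25
```

MAE is a natural companion metric here because KL grades are ordinal: predicting grade 2 for a grade-3 knee is a smaller error than predicting grade 0.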

Citations: 0
Quantifying respiratory motion effects on dosimetry in hepatic radioembolization using experimental phantom measurements.
IF 1.6 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2026-02-02 DOI: 10.1088/2057-1976/ae4030
Josephine La Macchia, Alessandro Desy, Claire Cohalan, Taehyung Peter Kim, Shirin A Enger

Objective: In radioembolization, SPECT/CT planning scans are often acquired during free breathing, which can introduce motion-related blurring and misregistration between SPECT and CT, leading to dosimetric inaccuracies. This study quantifies the impact of respiratory motion on absorbed dose metrics (tumor-to-normal tissue (T/N) ratio, dose volume histograms, and mean dose) using several voxel-based dosimetry methods, and supports standardization efforts through experimental measurements with a motion-enabled phantom. Approach: Motion effects in pre-therapy imaging were evaluated using a Jaszczak phantom filled with technetium-99m, simulating activity in lesion and background volumes. SPECT/CT scans were acquired with varying cranial-caudal motion amplitudes from the central position: ±0, ±5, ±6.5, ±10, ±12.5, and ±15 mm. The impact of motion-related misregistration during scanning on dosimetry was also examined. Five dosimetry methods were compared: Monte Carlo simulation with uniform reference activity (MC REF), Monte Carlo simulation based on SPECT images (MC SPECT), Simplicity™ (Boston Scientific), the local deposition method, and voxel-S-value convolution. Absorbed dose metrics of mean dose, dose volume histogram dosimetric indices (D50, D70, D90), and T/N ratio were obtained to quantify motion effects and evaluate clinical suitability. Main results: Mean absorbed dose values for the lesion and background were consistent across methods within uncertainties, though discrepancies were noted in non-lesion low-density regions. Respiratory motion reduced lesion dose by 16-25% and increased background dose by 13-32%, although the latter represented only a 1-2 Gy change. These shifts led to a 28-43% decrease in the T/N ratio at ±12.5 mm motion amplitude. Misregistration due to motion also significantly impacted dosimetric accuracy. Significance: The study demonstrated agreement between five dosimetry methods and revealed that respiratory motion can lead to substantial underestimation of the lesion dose and T/N ratio. Since the T/N ratio is critical for patient selection and activity prescription, accounting for respiratory motion is essential for accurate radioembolization dosimetry.
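The T/N ratio central to the findings above is the ratio of mean absorbed doses in tumor and normal tissue, so opposite-signed motion errors in the two compartments compound. A toy sketch with hypothetical doses chosen inside the reported ranges (16-25% lesion loss, 13-32% background gain); the dose values themselves are invented:

```python
def t_n_ratio(mean_dose_tumor, mean_dose_normal):
    """Tumor-to-normal tissue ratio of mean absorbed doses."""
    return mean_dose_tumor / mean_dose_normal

# Hypothetical doses in Gy, illustrating the compounding motion effect:
static = t_n_ratio(120.0, 10.0)               # no motion
moving = t_n_ratio(120.0 * 0.80, 10.0 * 1.2)  # 20% lesion loss, 20% background gain
print(round(100 * (static - moving) / static))  # ~33% drop in T/N
```

A 20% error in each compartment thus produces a roughly one-third drop in T/N, consistent with the 28-43% range measured at ±12.5 mm amplitude.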

Citations: 0
Temporal and comorbidity-aware representation of longitudinal patient trajectories from electronic health records.
IF 1.6 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2026-01-30 DOI: 10.1088/2057-1976/ae38de
M Sreenivasan, S Madhavendranath, Anu Mary Chacko

Electronic health records (EHRs) capture longitudinal multi-visit patient journeys but are difficult to analyze due to temporal irregularity, multimorbidity, and heterogeneous coding. This study introduces a temporal and comorbidity-aware trajectory representation that restructures admissions into ordered symbolic visit states while preserving diagnostic progression, secondary comorbidities, procedure categories, demographics, outcomes, and inter-visit intervals. These symbolic states are subsequently encoded as fixed-length numerical vectors suitable for computational analysis. Validation was conducted in two stages: Stage I assessed construction fidelity using coverage metrics, comorbidity preservation, diagnostic transition structures, and exact inter-visit gap encoding, and Stage II assessed analytical utility through clustering experiments using different clustering approaches: sequence similarity, Gaussian Mixture Models (GMM), and a temporal LSTM autoencoder (TS-LSTM). As a proof of concept, a subset of patient cohorts from the MIMIC-IV database was encoded, comprising 2,280 patients with 8,849 admissions, with complete primary diagnosis coverage and near-complete secondary coverage. The Stage I assessment, consisting of cohort-level coverage metrics, confirmed that the transformation preserved essential clinical information and key properties of longitudinal EHRs. In Stage II, clustering experiments validated the analytical utility of the representation across the sequence-based, Gaussian mixture, and temporal LSTM autoencoder approaches. Ablation studies further demonstrated that both multimorbidity depth and inter-visit gap encoding are critical to maintaining cluster separability and temporal fidelity. The findings show that explicit encoding of comorbidity and timing improves interpretability and subgroup coherence.
Although evaluated on a single dataset, the use of standardised ICD-10 EHR structure supports the assumption that the framework can generalise across healthcare settings; future work will incorporate multimodal data and external validation.
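The visit-state-to-vector idea can be illustrated with a toy encoding. The feature layout (diagnosis category, comorbidity count, inter-visit gap in days) and the padding length are assumptions for illustration, not the authors' actual scheme:

```python
# Each visit becomes a small numeric vector [dx_category, n_comorbidities,
# gap_days_since_prev]; the trajectory is sorted by date, truncated or
# zero-padded to a fixed number of visits, and flattened.
MAX_VISITS = 4
FEATURES_PER_VISIT = 3

def encode_visit(visit, prev_date):
    gap = visit["date"] - prev_date if prev_date is not None else 0
    return [visit["dx_cat"], len(visit["comorbidities"]), gap]

def encode_trajectory(visits):
    vec, prev = [], None
    for v in sorted(visits, key=lambda v: v["date"])[:MAX_VISITS]:
        vec += encode_visit(v, prev)
        prev = v["date"]
    vec += [0] * (MAX_VISITS * FEATURES_PER_VISIT - len(vec))  # zero-pad
    return vec

# Hypothetical two-admission patient (dates as day offsets, ICD-like codes).
patient = [
    {"date": 0,  "dx_cat": 3, "comorbidities": ["E11", "I10"]},
    {"date": 42, "dx_cat": 3, "comorbidities": ["E11"]},
]
print(encode_trajectory(patient))  # [3, 2, 0, 3, 1, 42, 0, 0, 0, 0, 0, 0]
```

Fixed-length vectors of this kind are directly usable by GMM clustering, while the per-visit sub-vectors preserve the sequence structure a TS-LSTM would consume.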

{"title":"Temporal and comorbidity-aware representation of longitudinal patient trajectories from electronic health records.","authors":"M Sreenivasan, S Madhavendranath, Anu Mary Chacko","doi":"10.1088/2057-1976/ae38de","DOIUrl":"10.1088/2057-1976/ae38de","url":null,"abstract":"<p><p>Electronic health records (EHRs) capture longitudinal multi-visit patient journeys but are difficult to analyze due to temporal irregularity, multimorbidity, and heterogeneous coding. This study introduces a temporal and comorbidity-aware trajectory representation that restructures admissions into ordered symbolic visit states while preserving diagnostic progression, secondary comorbidities, procedure categories, demographics, outcomes, and inter-visit intervals. These symbolic states are subsequently encoded as fixed-length numerical vectors suitable for computational analysis. Validation was conducted in two stages: Stage I assessed construction fidelity using coverage metrics, comorbidity preservation, diagnostic transition structures, and exact inter-visit gap encoding and Stage II assessed analytical utility through clustering experiments using different clustering approacheslike sequence similarity, Gaussian Mixture Models (GMM), and a temporal LSTM autoencoder (TS-LSTM). Proof of concept was done by encoding subset of patient cohorts from the MIMIC-IV database consisting of 2,280 patients with 8,849 admissions having complete primary diagnosis coverage and near-complete secondary coverage. Stage 1 assessment consisting of cohort-level coverage metrics confirmed that the transformation preserved essential clinical information and key properties of longitudinal EHRs. In Stage 2, clustering experiments validated the analytical utility of the representation across sequence-based, Gaussian mixture, and temporal LSTM autoencoder approaches. 
Ablation studies further demonstrated that both multimorbidity depth and inter-visit gap encoding are critical to maintaining cluster separability and temporal fidelity. The findings show that explicit encoding of comorbidity and timing improves interpretability and subgroup coherence. Although evaluated on a single dataset, the use of standardised ICD-10 EHR structure supports the assumption that the framework can generalise across healthcare settings; future work will incorporate multimodal data and external validation.</p>","PeriodicalId":8896,"journal":{"name":"Biomedical Physics & Engineering Express","volume":" ","pages":""},"PeriodicalIF":1.6,"publicationDate":"2026-01-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145984350","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Impact of fine-grained learning rate configuration on the performance of medical image segmentation models. 细粒度学习率配置对医学图像分割模型性能的影响。
IF 1.6 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2026-01-30 DOI: 10.1088/2057-1976/ae3830
Fang Wang, Ji Li, Rui Zhang, Jing Hu, Gaimei Gao

Research on deep learning for medical image segmentation has shifted from single-modality networks to multimodal data fusion. Updating the parameters of such deep learning models is crucial for accurate segmentation predictions. Although existing optimizers can perform global parameter updates, the fine-grained initialization of learning rates across different network hierarchies, and its influence on segmentation performance, has not been sufficiently explored. To address this, we conducted a series of experiments showing that initializing a differentiated learning rate across network layers directly affects the performance of medical image segmentation models. To determine the optimal initial learning rate for each network level, we summarized a general statistical relationship between early-stage training results and the model's final optimal performance. In this paper, we propose a fine-grained learning rate configuration algorithm. To verify its effectiveness, we evaluated 10 segmentation models on three benchmark datasets: the colon polyp segmentation dataset CVC-ClinicDB, the gastrointestinal polyp dataset Kvasir-SEG, and the breast tumor segmentation dataset BUSI. The models that achieved the most significant improvement in mIoU on these three datasets were H-vmunet, MSRUNet, and H-vmunet, with increases of 3.87%, 4.67%, and 6.22%, respectively. Additionally, we validated the generalization and transferability of the proposed algorithm using a thyroid nodule segmentation dataset and a skin lesion segmentation dataset. Finally, a series of analyses, including segmentation result analysis, feature map visualization, training process analysis, computational overhead analysis, and clinical relevance analysis, confirmed the effectiveness of the proposed method. The core code is publicly available at https://github.com/Lambda-Wave/PaperCoreCode.
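What "fine-grained learning rate configuration" means mechanically can be sketched with a plain per-layer SGD update. The layer names, rates, and update rule below are illustrative assumptions, not the paper's algorithm:

```python
# Apply a different learning rate to each network level during one gradient step.
def sgd_step(params, grads, layer_lrs, default_lr=1e-3):
    """params/grads: {layer_name: [values]}; layer_lrs: per-layer learning rates."""
    updated = {}
    for name, values in params.items():
        lr = layer_lrs.get(name, default_lr)
        updated[name] = [w - lr * g for w, g in zip(values, grads[name])]
    return updated

params = {"encoder": [1.0, -0.5], "decoder": [0.2]}
grads  = {"encoder": [0.1, 0.1],  "decoder": [0.5]}
# Fine-grained configuration: e.g. slower updates for an early/pretrained level.
layer_lrs = {"encoder": 1e-4, "decoder": 1e-2}

stepped = sgd_step(params, grads, layer_lrs)
print({k: [round(w, 6) for w in v] for k, v in stepped.items()})
# {'encoder': [0.99999, -0.50001], 'decoder': [0.195]}
```

In a real framework the same effect is typically achieved by passing per-layer parameter groups with distinct learning rates to the optimizer.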

{"title":"Impact of fine-grained learning rate configuration on the performance of medical image segmentation models.","authors":"Fang Wang, Ji Li, Rui Zhang, Jing Hu, Gaimei Gao","doi":"10.1088/2057-1976/ae3830","DOIUrl":"10.1088/2057-1976/ae3830","url":null,"abstract":"<p><p>Research on deep learning for medical image segmentation has shifted from single-modality networks to multimodal data fusion. Updating the parameters of such deep learning models is crucial for accurate segmentation predictions. Although existing optimizers can perform global parameter updates, the fine-grained initialization of learning rates across different network hierarchies and its influence on segmentation performance has not been sufficiently explored. To address this, we conducted a series of experiments showing that the initialization of a differentiated learning rate across network layers directly affected the performance of medical image segmentation models. To determine the optimal initial learning rate for each network level, we summarized a general statistical relationship between early-stage training results and the model's final optimal performance. In this paper, we proposed a fine-grained learning rate configuration algorithm. To verify the effectiveness of the proposed algorithm, we evaluated 10 segmentation models on three benchmark datasets: the colon polyp segmentation dataset CVC-ClinicDB, the gastrointestinal polyp dataset Kvasir-SEG, and the breast tumor segmentation dataset BUSI. The models that achieved the most significant improvement in mIoU on these three datasets were H-vmunet, MSRUNet, and H-vmunet, with increases of 3.87%, 4.67%, and 6.22%, respectively. Additionally, we validated the generalization and transferability of the proposed algorithm using a thyroid nodule segmentation dataset and a skin lesion segmentation dataset. 
Finally, a series of analyses, including segmentation result analysis, feature map visualization, training process analysis, computational overhead analysis, and clinical relevance analysis, confirmed the effectiveness of the proposed method. The core code is publicly available athttps://github.com/Lambda-Wave/PaperCoreCode.</p>","PeriodicalId":8896,"journal":{"name":"Biomedical Physics & Engineering Express","volume":" ","pages":""},"PeriodicalIF":1.6,"publicationDate":"2026-01-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145984345","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Model uncertainty estimates for deep learning mammographic density prediction using ordinal and classification approaches. 使用顺序和分类方法进行深度学习乳房x线摄影密度预测的模型不确定性估计。
IF 1.6 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2026-01-30 DOI: 10.1088/2057-1976/ae39e2
Steven Squires, Grey Kuling, D Gareth Evans, Anne L Martel, Susan M Astley

Purpose. Mammographic density is associated with the risk of developing breast cancer and can be predicted using deep learning methods. Model uncertainty estimates are not produced by standard regression approaches but would be valuable for clinical and research purposes. Our objective is to produce deep learning models with built-in uncertainty estimates without degrading predictive performance. Approach. We analysed data from over 150,000 mammogram images with associated continuous density scores from expert readers in the Predicting Risk Of Cancer At Screening (PROCAS) study. We re-designated the continuous density scores to 100 density classes, then trained classification and ordinal deep learning models. Distribution-based and distribution-free methods were applied to extract predictions and uncertainties. A deep learning regression model was trained on the continuous density scores to act as a direct comparison. Results. The root mean squared error (RMSE) between expert-assigned density labels and predictions of the standard regression model was 8.42 (8.34-8.51), while the RMSEs for the classification and ordinal classification models were 8.37 (8.28-8.46) and 8.44 (8.35-8.53), respectively. The average uncertainties produced by the models were higher when density scores from pairs of expert readers differed more, when predictions from different mammogram views of the same breast were more variable, and when two separately trained models showed higher variation. Conclusions. Using either a classification or an ordinal approach, we can produce model uncertainty estimates without loss of predictive performance.
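One common way to read both a point prediction and an uncertainty off a 100-class output is to treat the softmax vector as a distribution over density scores and take its mean and standard deviation. The bin-to-score mapping below is an assumption for illustration, not necessarily the authors' method:

```python
import math

def density_prediction(probs):
    """Mean and standard deviation of a distribution over density bins 0..len-1."""
    bins = range(len(probs))                       # bin i ~ density score i
    mean = sum(p * b for p, b in zip(probs, bins))
    var = sum(p * (b - mean) ** 2 for p, b in zip(probs, bins))
    return mean, math.sqrt(var)

# A confident model concentrates mass in few bins; an uncertain one spreads it.
confident = [0.0] * 100
confident[40], confident[41] = 0.5, 0.5
spread = [0.0] * 100
for b in range(30, 50):
    spread[b] = 0.05                               # uniform over bins 30..49

m1, s1 = density_prediction(confident)
m2, s2 = density_prediction(spread)
print(m1, s1)                        # 40.5 0.5
print(round(m2, 2), round(s2, 2))    # 39.5 5.77
```

The two examples give nearly the same point prediction but very different uncertainties, which is exactly what a regression head alone cannot express.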

{"title":"Model uncertainty estimates for deep learning mammographic density prediction using ordinal and classification approaches.","authors":"Steven Squires, Grey Kuling, D Gareth Evans, Anne L Martel, Susan M Astley","doi":"10.1088/2057-1976/ae39e2","DOIUrl":"10.1088/2057-1976/ae39e2","url":null,"abstract":"<p><p><i>Purpose</i>. Mammographic density is associated with the risk of developing breast cancer and can be predicted using deep learning methods. Model uncertainty estimates are not produced by standard regression approaches but would be valuable for clinical and research purposes. Our objective is to produce deep learning models with in-built uncertainty estimates without degrading predictive performance.<i>Approach</i>. We analysed data from over 150,000 mammogram images with associated continuous density scores from expert readers in the Predicting Risk Of Cancer At Screening (PROCAS) study. We re-designated the continuous density scores to 100 density classes then trained classification and ordinal deep learning models. Distributions and distribution-free methods were applied to extract predictions and uncertainties. A deep learning regression model was trained on the continuous density scores to act as a direct comparison.<i>Results</i>. The root mean squared error (RMSE) between expert assigned density labels and predictions of the standard regression model were 8.42 (8.34-8.51) while the RMSE for the classification and ordinal classification were 8.37 (8.28-8.46) and 8.44 (8.35-8.53) respectively. The average uncertainties produced by the models were higher when the density scores from pairs of expert readers density scores differ more, when different mammogram views of the same views are more variable, and when two separately trained models show higher variation.<i>Conclusions</i>. 
Using either a classification or ordinal approach we can produce model uncertainty estimates without loss of predictive performance.</p>","PeriodicalId":8896,"journal":{"name":"Biomedical Physics & Engineering Express","volume":" ","pages":""},"PeriodicalIF":1.6,"publicationDate":"2026-01-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146003008","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Microdosimetric analysis of proton boron capture therapy using microdosimetric kinetic model. 质子硼捕获疗法的微剂量动力学模型分析。
IF 1.6 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2026-01-30 DOI: 10.1088/2057-1976/ae3965
Abdur Rahim, Tatsuhiko Sato, Hiroshi Fukuda, Mehrdad Shahmohammadi Beni, Hiroshi Watabe

Objective. Proton boron capture therapy (PBCT) is a novel approach that utilizes alpha particles generated through the proton-induced capture reaction with ¹¹B. Early studies reported substantial dose enhancements of 50%-96% near the Bragg peak, suggesting a promising therapeutic advantage. However, subsequent investigations have raised critical concerns regarding the practical feasibility of PBCT, primarily due to the relatively low reaction cross section in the Bragg peak region and the need for clinically unrealistic boron concentrations. The aim of this study is to evaluate Relative Biological Effectiveness (RBE) enhancement in PBCT using microdosimetric analysis across a wide range of boron concentrations. Approach. In the present work, we employed a Monte Carlo model using the Particle and Heavy Ion Transport code System (PHITS) package combined with the Microdosimetric Kinetic Model to quantify both physical and biological dose enhancements at varying concentrations of ¹¹B. Main Results. Microdosimetric analysis revealed that the total dose is dominated by protons, although alpha particles dominate in regions of higher linear energy deposition. The resulting RBE enhancement factors were 1.0011, 1.0080, and 1.1275 (for the Human Salivary Gland (HSG) cell type) at 100, 1,000, and 10,000 ppm boron concentrations, respectively. While the enhancements at lower concentrations are negligible, a modest increase is observed at very high boron levels. Significance. Based on the resulting RBE enhancement factors, it can be concluded that although alpha particles generated via the p + ¹¹B → 3α reaction contribute high-Linear Energy Transfer (LET) energy at the cellular level, the overall biological dose enhancement remains rather minimal. These results indicate that under clinically achievable boron concentrations, the therapeutic benefit of PBCT may be limited.
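How a biological (RBE-weighted) enhancement factor arises from combining dose components can be sketched with hypothetical numbers. The RBE values and doses below are illustrative assumptions, not the study's MKM results:

```python
# Combine proton and alpha dose components into an RBE-weighted dose, then form
# the enhancement factor relative to the proton-only case.
def rbe_weighted_dose(components):
    """components: list of (physical_dose_Gy, rbe) per radiation type."""
    return sum(d * rbe for d, rbe in components)

def enhancement_factor(proton_dose, alpha_dose, rbe_p=1.1, rbe_a=3.0):
    baseline = rbe_weighted_dose([(proton_dose, rbe_p)])
    boosted = rbe_weighted_dose([(proton_dose, rbe_p), (alpha_dose, rbe_a)])
    return boosted / baseline

# Even a generous alpha contribution of 1% of the proton dose yields only a
# small enhancement, consistent with the modest factors reported above.
print(round(enhancement_factor(2.0, 0.02), 4))  # 1.0273
```

The factor stays close to 1 because the alpha physical dose is a tiny fraction of the proton dose, even though each alpha deposits high-LET energy.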

{"title":"Microdosimetric analysis of proton boron capture therapy using microdosimetric kinetic model.","authors":"Abdur Rahim, Tatsuhiko Sato, Hiroshi Fukuda, Mehrdad Shahmohammadi Beni, Hiroshi Watabe","doi":"10.1088/2057-1976/ae3965","DOIUrl":"10.1088/2057-1976/ae3965","url":null,"abstract":"<p><p><i>Objective</i>. Proton boron capture therapy (PBCT) is a novel approach that utilizes alpha particles generated through the proton induced capture reaction with<sup>11</sup>B. Early studies reported substantial dose enhancements of 50%-96% near the Bragg peak, suggesting a promising therapeutic advantage. However, subsequent investigations have raised critical concerns regarding the practical feasibility of PBCT, primarily due to the relatively low reaction cross section in the Bragg peak region and the need for clinically unrealistic boron concentrations. The aim of this study is to evaluate Relative Biological Effectiveness (RBE) enhancement in PBCT using microdosimetric analysis across a wide range of boron concentrations.<i>Approach</i>. In the present work, we have employed Monte Carlo model using Particle and Heavy Ion Transport code System (PHITS) package combined with the Microdosimetric Kinetic Model to quantify both physical and biological dose enhancements at varying concentrations of<sup>11</sup>B.<i>Main Results</i>. Microdosimetric analysis revealed that the total dose is dominated by protons, although alpha particles dominate in regions of higher linear energy deposition. The resulting RBE enhancement factors were 1.0011, 1.0080, and 1.1275 (for Human Salivary Gland (HSG) cell type) for 100, 1000, and 10,000 ppm boron concentrations, respectively. While the enhancements at lower concentrations are negligible, a modest increase is observed at very high boron levels.<i>Significance</i>. 
Based on the resulting RBE enhancement factors, it can be concluded that although alpha particles generated via the<i>p</i>+<sup>11</sup>B → 3<i>α</i>reaction contribute high-Linear Energy Transfer (LET) energy at the cellular level, the overall biological dose enhancement remains rather minimal. These results indicate that under clinically achievable boron concentrations, the therapeutic benefit of PBCT may be limited.</p>","PeriodicalId":8896,"journal":{"name":"Biomedical Physics & Engineering Express","volume":" ","pages":""},"PeriodicalIF":1.6,"publicationDate":"2026-01-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145987851","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Biomedical Physics & Engineering Express