
Latest articles from Biomedical Physics & Engineering Express

Electroencephalogram features reflect effort corresponding to graded finger extension: implications for hemiparetic stroke.
IF 1.3 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2025-02-07 DOI: 10.1088/2057-1976/adabeb
Chase Haddix, Madison Bates, Sarah Garcia-Pava, Elizabeth Salmon Powell, Lumy Sawaki, Sridhar Sunderam

Brain-computer interfaces (BCIs) offer disabled individuals the means to interact with devices by decoding the electroencephalogram (EEG). However, decoding intent in fine motor tasks can be challenging, especially in stroke survivors with cortical lesions. Here, we attempt to decode graded finger extension from the EEG in stroke patients with left-hand paresis and in healthy controls. Participants extended their fingers to one of four levels: low, medium, high, or 'no-go' (none), while hand, muscle (electromyography: EMG), and brain (EEG) activity were monitored. Event-related desynchronization (ERD) was measured as the change in 8-30 Hz EEG power during movement. Classifiers were trained on EEG features, EMG power, or both (EEG+EMG) to decode finger extension, and accuracy was assessed via four-fold cross-validation for each hand of each participant. Mean accuracy exceeded chance (25%) for controls (n = 11) at 62% for EMG, 60% for EEG, and 71% for EEG+EMG on the left hand; and 67%, 60%, and 74%, respectively, on the right hand. Accuracies were similar on the unimpaired right hand for the stroke group (n = 3): 61%, 68%, and 78%, respectively. But on the paretic left hand, EMG only discriminated no-go from movement above chance (41%); in contrast, EEG gave 65% accuracy (68% for EEG+EMG), comparable to the non-paretic hand. The median ERD was significant (p < 0.01) over the cortical hand area in both groups and increased with each level of finger extension. But while the ERD favored the hemisphere contralateral to the active hand as expected, it was ipsilateral for the paretic left hand in the stroke group due to the lesion in the right hemisphere, which may explain its discriminative ability. Hence, the ERD captures effort in finger extension regardless of success or failure at the task, and harnessing residual EMG improves the correlation. This marker could be leveraged in rehabilitative protocols that focus on fine motor control.
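The classification protocol described above (four extension levels, four-fold cross-validation per hand, 25% chance level) can be sketched in outline. The following is a minimal illustration on synthetic, invented "band-power" features with a simple nearest-centroid classifier; it is not the authors' pipeline, only the evaluation scheme:

```python
import random

LEVELS = ["no-go", "low", "medium", "high"]

def nearest_centroid_fit(X, y):
    # compute the mean feature vector for each class
    cents = {}
    for c in set(y):
        rows = [x for x, lab in zip(X, y) if lab == c]
        cents[c] = [sum(col) / len(rows) for col in zip(*rows)]
    return cents

def nearest_centroid_predict(cents, x):
    # assign the class whose centroid is closest in squared Euclidean distance
    return min(cents, key=lambda c: sum((a - b) ** 2 for a, b in zip(x, cents[c])))

def four_fold_accuracy(X, y, seed=0):
    # four-fold cross-validation, as in the study, returning mean accuracy
    idx = list(range(len(y)))
    random.Random(seed).shuffle(idx)
    folds = [idx[k::4] for k in range(4)]
    accs = []
    for k in range(4):
        test = folds[k]
        train = [i for i in idx if i not in set(test)]
        model = nearest_centroid_fit([X[i] for i in train], [y[i] for i in train])
        hits = sum(nearest_centroid_predict(model, X[i]) == y[i] for i in test)
        accs.append(hits / len(test))
    return sum(accs) / len(accs)

# synthetic features that grow with effort level (stand-ins for ERD / EMG power)
rng = random.Random(1)
X, y = [], []
for trial in range(80):
    level = trial % 4
    X.append([level + rng.gauss(0, 0.3), 2 * level + rng.gauss(0, 0.3)])
    y.append(LEVELS[level])

acc = four_fold_accuracy(X, y)
```

On these well-separated synthetic features the mean accuracy lands far above the 25% chance level, which is the comparison point the abstract uses.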

Citations: 0
Deep learning aided determination of the optimal number of detectors for photoacoustic tomography.
IF 1.3 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2025-02-07 DOI: 10.1088/2057-1976/adaf29
Sudeep Mondal, Subhadip Paul, Navjot Singh, Pankaj Warbal, Zartab Khanam, Ratan K Saha

Photoacoustic tomography (PAT) is a non-destructive, non-ionizing, and rapidly expanding hybrid biomedical imaging technique, yet it faces challenges in obtaining clear images due to limited data from detectors or angles. As a result, reconstructions suffer from significant streak artifacts and low image quality. The integration of deep learning (DL), specifically convolutional neural networks (CNNs), has recently demonstrated powerful performance in various areas of PAT. This work introduces a post-processing CNN architecture named residual-dense UNet (RDUNet) to address the streak artifacts in reconstructed PA images. The framework adopts the benefits of residual and dense blocks to form high-resolution reconstructed images. The network is trained with two different types of datasets to learn the relationship between the reconstructed images and their corresponding ground truths (GTs). In the first protocol, RDUNet (identified as RDUNet I) underwent training on heterogeneous simulated images featuring three distinct phantom types. Subsequently, in the second protocol, RDUNet (referred to as RDUNet II) was trained on a heterogeneous composition of 81% simulated data and 19% experimental data. The motivation behind this is to allow the network to adapt to diverse experimental challenges. The RDUNet algorithm was validated through numerical and experimental studies involving single-disk, T-shape, and vasculature phantoms. Its performance was compared with the well-known backprojection (BP) algorithm and the traditional UNet. This study shows that RDUNet can substantially reduce the number of detectors required, from 100 down to 25 for simulated test images and 30 for experimental scenarios.
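Residual and dense blocks, which RDUNet combines, can be illustrated at a small scale. The sketch below is a toy version in plain NumPy, not the paper's architecture: the layer widths, growth rate, and tanh nonlinearity are all illustrative assumptions. Each "layer" in the dense block sees the concatenation of the block input and all earlier layer outputs, and the fused result is added back to the input via a residual skip:

```python
import numpy as np

def dense_block(x, layer_ws):
    # dense connectivity: every layer receives the concatenation of the block
    # input and all previous layer outputs
    feats = [x]
    for W in layer_ws:
        feats.append(np.tanh(W @ np.concatenate(feats)))
    return np.concatenate(feats[1:])

def residual_dense_block(x, layer_ws, W_fuse):
    # fuse the dense features back to the input width, then add the skip path
    return x + W_fuse @ dense_block(x, layer_ws)

rng = np.random.default_rng(0)
d, g = 8, 4                       # feature width and per-layer growth rate
layer_ws = [rng.standard_normal((g, d + i * g)) * 0.1 for i in range(3)]
W_fuse = rng.standard_normal((d, 3 * g)) * 0.1
x = rng.standard_normal(d)
out = residual_dense_block(x, layer_ws, W_fuse)
```

The residual path keeps the output close to the input (useful when the target is a cleaned-up version of the reconstruction), while the dense path accumulates features at every depth.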

Citations: 0
Automated detection of traumatic bleeding in CT images using 3D U-Net# and multi-organ segmentation.
IF 1.3 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2025-02-06 DOI: 10.1088/2057-1976/adae14
Rizki Nurfauzi, Ayaka Baba, Taka-Aki Nakada, Toshiya Nakaguchi, Yukihiro Nomura

Traumatic injury remains a leading cause of death worldwide, with traumatic bleeding being one of its most critical and fatal consequences. The use of whole-body computed tomography (WBCT) in trauma management has rapidly expanded. However, interpreting WBCT images within the limited time available before treatment is particularly challenging for acute care physicians. Our group has previously developed an automated bleeding detection method for WBCT images. However, further reduction of false positives (FPs) is necessary for clinical application. To address this issue, we propose a novel automated detection method for traumatic bleeding in CT images using deep learning and multi-organ segmentation. Methods: The proposed method integrates a three-dimensional U-Net# model for bleeding detection with an FP reduction approach based on multi-organ segmentation. The multi-organ segmentation method targets the bone, kidney, and vascular regions, where FPs are primarily found during the bleeding detection process. We evaluated the proposed method using a dataset of delayed-phase contrast-enhanced trauma CT images collected from four institutions. Results: Our method detected 70.0% of bleedings with 76.2 FPs/case. The processing time was 6.3 ± 1.4 min. Compared with our previous approach, the proposed method significantly reduced the number of FPs while maintaining detection sensitivity.
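The FP-reduction idea, discarding bleeding candidates that fall inside segmented bone, kidney, or vascular regions, can be sketched as below. This is an illustrative simplification (candidate centroids on a toy volume), not the authors' implementation:

```python
import numpy as np

def suppress_fps(candidates, organ_masks):
    # drop candidate bleeding centroids that fall inside any segmented organ
    # (bone, kidney, vessel), where most false positives were found
    combined = np.zeros_like(next(iter(organ_masks.values())), dtype=bool)
    for mask in organ_masks.values():
        combined |= mask
    return [c for c in candidates if not combined[c]]

# toy 8x8x8 volume with a "bone" cube in one corner
bone = np.zeros((8, 8, 8), dtype=bool)
bone[:3, :3, :3] = True
masks = {"bone": bone}
cands = [(1, 1, 1), (6, 6, 6)]   # (z, y, x) candidate centroids
kept = suppress_fps(cands, masks)
```

Here the candidate inside the bone mask is suppressed and only the one outside survives; the real method presumably operates on full detection regions rather than single centroids.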

Citations: 0
A comparison of different machine learning classifiers in predicting xerostomia and sticky saliva due to head and neck radiotherapy using a multi-objective, multimodal radiomics model.
IF 1.3 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2025-02-06 DOI: 10.1088/2057-1976/adafac
Benyamin Khajetash, Ghasem Hajianfar, Amin Talebi, Beth Ghavidel, Seied Rabi Mahdavi, Yang Lei, Meysam Tavakoli

Background and Purpose. Although radiotherapy is a primary treatment for head and neck cancer (HNC), it is still associated with substantial toxicity and side effects. Machine learning (ML) based radiomics models for predicting toxicity mostly rely on features extracted from pre-treatment imaging data. This study aims to compare different models in predicting radiation-induced xerostomia and sticky saliva in both early- and late-stage HNC patients using CT and MRI image features along with demographic and dosimetric information. Materials and Methods. A cohort of 85 HNC patients who underwent radiation treatment was evaluated. We built a multi-objective, multimodal radiomics model by extracting 346 features from patient data. The models were trained and tested using the Relief feature selection method and eight classifiers: eXtreme Gradient Boosting (XGBoost), Multilayer Perceptron (MLP), Support Vector Machine (SVM), Random Forest (RF), K-Nearest Neighbors (KNN), Naive Bayes (NB), Logistic Regression (LR), and Decision Tree (DT). Performance was evaluated using sensitivity, specificity, area under the curve (AUC), and accuracy. Results. Using a combination of demographic, dosimetric, and image features, the SVM model obtained the best performance, with AUCs of 0.77 and 0.81 for predicting early sticky saliva and xerostomia, respectively.
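The AUC metric used to rank the classifiers can be computed directly from scores via its Mann-Whitney formulation. A minimal sketch, not tied to the study's data:

```python
def auc(labels, scores):
    # Mann-Whitney form of the ROC AUC: the probability that a randomly chosen
    # positive case scores higher than a randomly chosen negative one
    # (ties count as half)
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

For example, `auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])` returns 0.75: three of the four positive-negative pairs are ordered correctly.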

Citations: 0
Full fine-tuning strategy for endoscopic foundation models with expanded learnable offset parameters.
IF 1.3 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2025-02-06 DOI: 10.1088/2057-1976/adaec3
Minghan Dong, Xiangwei Zheng, Xia Zhang, Xingyu Zhang, Mingzhe Zhang

In the medical field, endoscopic video analysis is crucial for disease diagnosis and minimally invasive surgery. Endoscopic Foundation Models (Endo-FM) use large-scale self-supervised pre-training on endoscopic video data and leverage video transformer models to capture long-range spatiotemporal dependencies. However, detecting complex lesions such as gastrointestinal metaplasia (GIM) in endoscopic videos remains challenging due to unclear boundaries and indistinct features, and Endo-FM has not demonstrated good performance on this task. To this end, we propose a full fine-tuning strategy with an Extended Learnable Offset Parameter (ELOP), which improves model performance by introducing learnable offset parameters in the input space. Specifically, we propose a novel loss function that combines cross-entropy loss and focal loss through a weighted sum, enabling the model to better focus on hard-to-classify samples during training. We validated ELOP on a private GIM dataset from a local grade-A tertiary hospital and on a public polyp detection dataset. Experimental results show that ELOP significantly improves detection accuracy, achieving improvements of 6.25% and 3.75%, respectively, compared to the original Endo-FM. In summary, ELOP provides an excellent solution for detecting complex lesions in endoscopic videos, enabling more precise diagnoses.
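A weighted sum of cross-entropy and focal loss, as described, can be written for a single binary prediction as below. The weight `alpha` and focusing parameter `gamma` are placeholder values, since the paper's settings are not given here:

```python
import math

def combined_loss(p, y, alpha=0.5, gamma=2.0):
    # weighted sum of cross-entropy and focal loss for one binary prediction;
    # the focal factor (1 - pt)^gamma down-weights easy, well-classified samples
    # NOTE: alpha and gamma are illustrative defaults, not the paper's values
    pt = p if y == 1 else 1.0 - p          # probability assigned to the truth
    ce = -math.log(pt)                      # cross-entropy term
    focal = ((1.0 - pt) ** gamma) * ce      # focal term
    return alpha * ce + (1.0 - alpha) * focal
```

A confident correct prediction (pt near 1) yields a loss well below plain cross-entropy, while a hard sample (pt near 0.5) keeps most of its gradient signal, which is the behavior the abstract attributes to the combined loss.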

Citations: 0
Magnetic vector field mapping of the stimulated abductor digiti minimi muscle with optically pumped magnetometers.
IF 1.3 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2025-02-06 DOI: 10.1088/2057-1976/adaec5
Marlen Kruse, Simon Nordenström, Stefan Hartwig, Justus Marquetand, Victor Lebedev, Thomas Middelmann, Philip J Broser

Objective. To map the myomagnetic field of a straight and easily accessible muscle after electrical stimulation using triaxial optically pumped magnetometers (OPMs) and assess potential benefits for magnetomyography (MMG). Approach. Six triaxial OPMs were arranged in two rows of three sensors each along the abductor digiti minimi (ADM) muscle. The upper row of sensors was inclined by 45° with respect to the lower row, and all sensors were aligned close to the skin surface without direct contact. Electromagnetic muscle activity was then evoked with stepwise increasing currents applied to the ulnar nerve at the cubital tunnel. Evoked myomagnetic activity was recorded on 18 channels, three per sensor. As the measurements were performed in PTB's magnetically shielded room (BMSR-2), no averaging and only moderate filtering were applied. Main results. The myomagnetic vector field was successfully mapped. The obtained spatial structure, with radial symmetry, matches expectations from the ADM's parallel muscle architecture. The temporal evolution exhibits an up to four-phasic shape. Implications for future experiments are derived and needs for sensor performance improvements are identified. Significance. The use of an OPM array with small (∼3 mm edge length) sensing voxels enabled mapping of the magnetic vector field of the ADM. This allowed visualization of the spatiotemporal evolution of the muscle's evoked magnetic field and informs future experiments. In the future, high-density OPM grids may enable high-accuracy determination of muscle parameters such as innervation zone position, pennation angle, and propagation velocities.
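The 18-channel layout (six triaxial sensors, three axes each) maps each sensor to one field vector. A small sketch of that grouping; the channel ordering shown is an assumption for illustration:

```python
import math

def sensor_vectors(channels):
    # group the channel list into (Bx, By, Bz) triplets, one per triaxial OPM,
    # and return each sensor's field vector together with its magnitude
    assert len(channels) % 3 == 0
    out = []
    for i in range(0, len(channels), 3):
        bx, by, bz = channels[i:i + 3]
        out.append(((bx, by, bz), math.sqrt(bx * bx + by * by + bz * bz)))
    return out

# two-sensor example reading (units arbitrary)
vectors = sensor_vectors([3.0, 4.0, 0.0, 0.0, 0.0, 1.0])
```

With all six sensors this yields the sampled vector field whose spatial structure the study maps along the muscle.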

Citations: 0
Experimental and GEANT4 simulated FEPE of NaI(Tl) detector for linear sources.
IF 1.3 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2025-02-05 DOI: 10.1088/2057-1976/adaec8
Shahid Mansoor, Azhar Hussain Malik, Khizar Hayat Satti, Misbah Ijaz, Muhammad Tariq Siddique

The current study investigated the impact of geometry, design, and solid angle on the full energy peak efficiency (FEPE) of NaI(Tl) detectors for a line source. A line source was fabricated using a 99mTc solution filled into a borosilicate glass tube of 3 mm inner diameter, 2.5 mm wall thickness, and 12.7 cm length. The FEPE was measured for the fabricated linear source using a 2″×2″ cylindrical NaI(Tl) detector at various source-detector distances. The experimental setup was simulated in GEANT4, and the computed FEPE values were compared with the experimental values; an absolute error of 5% was observed between them. Utilizing the advantages of MC simulations, the effects of numerous source parameters, such as source length, diameter, source-detector distance, glass tube thickness, and lead shielding, on FEPE were investigated to optimize the fabrication of linear sources. As a case study, the absolute FEPE of the NaI(Tl) system was analyzed for a syringe filled with radioactive solution. This study provides insight into the fabrication of standard linear sources by analyzing different source parameters and may thus serve as a guideline for preparing standard linear sources for the calibration of radiation detectors.
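The solid-angle contribution to detection efficiency for a line source can be estimated with plain Monte Carlo sampling, which conveys the geometric part of what GEANT4 simulates; full particle transport, which determines the actual FEPE, is omitted. The 12.7 cm source length and 2.54 cm detector radius echo the paper's setup, but the flat-disk geometry and everything else are illustrative assumptions:

```python
import math
import random

def geometric_efficiency(line_len, distance, det_radius, n=50_000, seed=1):
    # Monte Carlo estimate of the geometric (solid-angle) efficiency for a line
    # source lying parallel to the face of a disk detector; a simplification of
    # the full-energy peak efficiency that GEANT4 computes with full transport
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        x0 = (rng.random() - 0.5) * line_len      # emission point on the line
        cos_t = 2.0 * rng.random() - 1.0          # isotropic direction
        phi = 2.0 * math.pi * rng.random()
        if cos_t <= 0.0:
            continue                              # emitted away from detector
        sin_t = math.sqrt(1.0 - cos_t * cos_t)
        t = distance / cos_t                      # travel to the detector plane
        px = x0 + sin_t * math.cos(phi) * t
        py = sin_t * math.sin(phi) * t
        if px * px + py * py <= det_radius ** 2:
            hits += 1
    return hits / n

near = geometric_efficiency(12.7, 5.0, 2.54)
far = geometric_efficiency(12.7, 10.0, 2.54)
```

As expected from the solid-angle argument in the abstract, the estimated efficiency falls as the source-detector distance grows.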

Citations: 0
Bilateral Network with Text Guided Aggregation Architecture for Lung Infection Image Segmentation.
IF 1.3 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2025-02-05 DOI: 10.1088/2057-1976/adb290
Xiang Pan, Hanxiao Mei, Jianwei Zheng, Herong Zheng

Lung image segmentation is a crucial problem for the automated assessment of potential illness. However, existing approaches suffer a considerable decrease in accuracy for lung infection areas of varied shapes and sizes. Recently, researchers have aimed to improve segmentation accuracy by combining diagnostic reports, supplied as text prompts, with image information. However, limited by their network structures, these methods are inefficient and ineffective. To address this issue, this paper proposes a Bilateral Network with Text Guided Aggregation Architecture (BNTGAA) to fully fuse local and global information across text and vision. The proposed architecture involves (i) a global fusion branch with a Hadamard product to align text and vision feature representations and (ii) a multi-scale cross-fusion branch with positional coding and skip connections, performing text-guided segmentation at different resolutions; (iii) the outputs of the two branches are combined and fed to a Mamba module for efficient segmentation. Extensive quantitative and qualitative evaluations demonstrate that the proposed architecture performs better in both accuracy and efficiency. It outperforms the current best methods on the QaTa-COVID19 dataset, improving mIoU and Dice scores by 3.08% and 2.35%, respectively, while surpassing the computational speed of existing multimodal networks. Finally, the architecture converges quickly and generalizes well: it can exceed the performance of the current best methods even when trained with only 50% of the dataset.
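The Hadamard-product fusion in the global branch is simply an element-wise multiply of the two modality features once both have been projected into a common embedding dimension. A toy plain-Python sketch (the projection weights and feature sizes are hypothetical; the paper's actual layer shapes are not given in the abstract):

```python
def project(feat, weights):
    """Toy linear projection: map a feature vector into a shared
    embedding space (weights is a matrix given as a list of rows)."""
    return [sum(w * f for w, f in zip(row, feat)) for row in weights]

def hadamard_fuse(vision_feat, text_feat):
    """Element-wise (Hadamard) product of aligned vision and text
    features -- the core operation of a global fusion branch."""
    assert len(vision_feat) == len(text_feat)
    return [v * t for v, t in zip(vision_feat, text_feat)]

# Example: a 3-d vision feature and a 2-d text feature projected to 3-d.
vision = [1.0, 2.0, 3.0]
text = project([0.5, 1.5], [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
fused = hadamard_fuse(vision, text)  # [0.5, 3.0, 6.0]
```

The multiply acts as a soft gate: vision channels that the text embedding scores near zero are suppressed, which is one reason text prompts can steer segmentation toward the described infection regions.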

{"title":"Bilateral Network with Text Guided Aggregation Architecture for Lung Infection Image Segmentation.","authors":"Xiang Pan, Hanxiao Mei, Jianwei Zheng, Herong Zheng","doi":"10.1088/2057-1976/adb290","DOIUrl":"https://doi.org/10.1088/2057-1976/adb290","url":null,"abstract":"<p><p>Lung image segmentation is a crucial problem for autonomous understanding of the potential illness. However, existing approaches lead to a considerable decrease in accuracy for lung infection areas with varied shapes and sizes. Recently, researchers aimed to improve segmentation accuracy by combining diagnostic reports based on text prompts and image vision information. However, limited by the network structure, these methods are inefficient and ineffective. To address this issue, this paper proposes a Bilateral Network with Text Guided Aggregation Architecture (BNTGAA) to fully fuse local and global information for text and image vision. This proposed architecture involves (i) a global fusion branch with a Hadamard product to align text and vision feature representation and (ii) a multi-scale cross-fusion branch with positional coding and skip connection, performing text-guided segmentation in different resolutions. (iii) The global fusion and multi-scale cross-fusion branches are combined to feed a mamba module for efficient segmentation. Extensive quantitative and qualitative evaluations demonstrate that the proposed architecture performs better both in accuracy and efficiency. Our architecture outperforms the current best methods on the QaTa-COVID19 dataset, improving mIoU and Dice scores by 3.08% and 2.35%, respectively. Meanwhile, our architecture surpasses the computational speed of existing multimodal networks. Finally, the architecture has a quick convergence and generality. 
It can exceed the performance of the current best methods even if it is trained with only 50% of the dataset.</p>","PeriodicalId":8896,"journal":{"name":"Biomedical Physics & Engineering Express","volume":" ","pages":""},"PeriodicalIF":1.3,"publicationDate":"2025-02-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143254550","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Optimizing transformer-based network via advanced decoder design for medical image segmentation.
IF 1.3 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2025-02-05 DOI: 10.1088/2057-1976/adaec7
Weibin Yang, Zhiqi Dong, Mingyuan Xu, Longwei Xu, Dehua Geng, Yusong Li, Pengwei Wang

U-Net is widely used in medical image segmentation due to its simple and flexible architecture design. To address the challenges of scale and complexity in medical tasks, several variants of U-Net have been proposed. In particular, methods based on the Vision Transformer (ViT), represented by Swin UNETR, have gained widespread attention in recent years. However, these improvements often focus on the encoder, overlooking the crucial role of the decoder in optimizing segmentation details. This design imbalance limits the potential for further enhancing segmentation performance. To address this issue, we analyze the roles of various decoder components, including the upsampling method, skip connection, and feature extraction module, as well as the shortcomings of existing methods. Consequently, we propose Swin DER (i.e., Swin UNETR Decoder Enhanced and Refined), which specifically optimizes the design of these three components. Swin DER performs upsampling using a learnable interpolation algorithm called offset coordinate neighborhood weighted up sampling (Onsampling) and replaces the traditional skip connection with a spatial-channel parallel attention gate (SCP AG). Additionally, Swin DER introduces deformable convolution along with an attention mechanism in the feature extraction module of the decoder. Our model design achieves excellent results, surpassing other state-of-the-art methods on both the Synapse dataset and the MSD brain tumor segmentation task. Code is available at: https://github.com/WillBeanYang/Swin-DER.
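The abstract does not spell out the internals of the SCP AG, but gated skip connections in U-Net decoders generally follow the additive-attention pattern: compute a per-position coefficient from the skip feature and the coarser decoder (gating) feature, then rescale the skip feature before concatenation. A scalar-weight toy version for 1-D features (this is the generic pattern, not the paper's SCP AG; weights and shapes are illustrative):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def attention_gate(skip, gate, w_skip, w_gate):
    """Additive attention gate: each skip-connection element is rescaled
    by a sigmoid coefficient computed from the skip feature and the
    coarser decoder (gating) feature at the same position."""
    coeffs = [sigmoid(w_skip * s + w_gate * g) for s, g in zip(skip, gate)]
    return [c * s for c, s in zip(coeffs, skip)]

# Skip elements supported by the gating signal pass through more strongly;
# elements the decoder "disagrees" with are attenuated.
gated = attention_gate([1.0, 1.0], [3.0, -3.0], 1.0, 1.0)
```

A "spatial-channel parallel" variant would compute such coefficients separately over spatial positions and over channels and combine them, which is plausibly what the name refers to.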

{"title":"Optimizing transformer-based network via advanced decoder design for medical image segmentation.","authors":"Weibin Yang, Zhiqi Dong, Mingyuan Xu, Longwei Xu, Dehua Geng, Yusong Li, Pengwei Wang","doi":"10.1088/2057-1976/adaec7","DOIUrl":"10.1088/2057-1976/adaec7","url":null,"abstract":"<p><p>U-Net is widely used in medical image segmentation due to its simple and flexible architecture design. To address the challenges of scale and complexity in medical tasks, several variants of U-Net have been proposed. In particular, methods based on Vision Transformer (ViT), represented by Swin UNETR, have gained widespread attention in recent years. However, these improvements often focus on the encoder, overlooking the crucial role of the decoder in optimizing segmentation details. This design imbalance limits the potential for further enhancing segmentation performance. To address this issue, we analyze the roles of various decoder components, including upsampling method, skip connection, and feature extraction module, as well as the shortcomings of existing methods. Consequently, we propose Swin DER (i.e.,<b>Swin</b>UNETR<b>D</b>ecoder<b>E</b>nhanced and<b>R</b>efined), by specifically optimizing the design of these three components. Swin DER performs upsampling using learnable interpolation algorithm called offset coordinate neighborhood weighted up sampling (Onsampling) and replaces traditional skip connection with spatial-channel parallel attention gate (SCP AG). Additionally, Swin DER introduces deformable convolution along with attention mechanism in the feature extraction module of the decoder. Our model design achieves excellent results, surpassing other state-of-the-art methods on both the Synapse dataset and the MSD brain tumor segmentation task. 
Code is available at:<i>https://github.com/WillBeanYang/Swin-DER</i>.</p>","PeriodicalId":8896,"journal":{"name":"Biomedical Physics & Engineering Express","volume":" ","pages":""},"PeriodicalIF":1.3,"publicationDate":"2025-02-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143051479","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
External delay and dispersion correction of automatically sampled arterial blood with dual flow rates.
IF 1.3 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2025-02-03 DOI: 10.1088/2057-1976/adae13
Benjamin Brender, Lubna Burki, Josefina Jeon, Alvina Ng, Nikta Yussefian, Carme Uribe, Emily Murrell, Isabelle Boileau, Kimberly L Desmond, Lucas Narciso

Objective. Arterial sampling for PET imaging often involves continuously measuring the radiotracer activity concentration in blood using an automatic blood sampling system (ABSS). We proposed and validated an external delay and dispersion correction procedure needed when a change in flow rate occurs during data acquisition. We also measured the external dispersion constant of [¹¹C]CURB, [¹⁸F]FDG, [¹⁸F]FEPPA, and [¹⁸F]SynVesT-1. Approach. External delay and dispersion constants were measured for flow rates of 350, 300, 180, and 150 ml h⁻¹, using 1-minute-long rectangular inputs (n = 10; ¹⁸F-fluoride in saline). The resulting constants were used to validate the external delay and dispersion corrections (n = 6; ¹⁸F-fluoride in saline; flow rate change: 350 to 150 ml h⁻¹ and 300 to 180 ml h⁻¹); constants were modelled to transition linearly between flow rates. Corrected curves were assessed using the percent area-under-the-curve (AUC) ratio and a modified model selection criterion (MSC). External delay and dispersion constants were also measured for various radiotracers using a blood analog (i.e., a solution with similar viscoelastic properties). Main results. ABSS outputs were successfully corrected for external delay and dispersion using the proposed method accounting for a change in flow rate. The AUC ratio was reduced from ∼10% for the uncorrected 350-150 ml h⁻¹ output (∼6% for 300-180 ml h⁻¹) to < 1% after correction when compared with the true input (511 keV energy window), and the MSC increased approximately 5-fold. Assuming an internal dispersion constant of 5 s, the total (internal + external) dispersion constant for [¹¹C]CURB, [¹⁸F]FDG, [¹⁸F]FEPPA, and [¹⁸F]SynVesT-1 was 13, 9, 16, and 10 s, respectively. Significance. This study presented an external delay and dispersion correction procedure needed when a change in flow rate occurs during ABSS data acquisition. Additionally, this is the first study to measure external delay and dispersion constants using a blood analog solution, a suitable alternative to blood when estimating external dispersion.
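Dispersion in sampling tubing is commonly modelled as convolution of the true input u(t) with a monoexponential kernel (1/τ)·e^(−t/τ), which inverts analytically to u(t) = g(t) + τ·dg/dt (delay correction is then just a time shift). A minimal numerical sketch of this standard model — not the authors' code; τ and dt below are illustrative:

```python
def disperse(true_input, tau, dt):
    """Forward model: first-order lag equivalent to convolving the true
    input with (1/tau)*exp(-t/tau), discretised with time step dt."""
    out, y = [], 0.0
    for x in true_input:
        y += dt * (x - y) / tau  # dy/dt = (x - y) / tau
        out.append(y)
    return out

def correct_dispersion(measured, tau, dt):
    """Invert the monoexponential model: u(t) = g(t) + tau * dg/dt,
    using a forward finite difference for the derivative."""
    corrected = [g + tau * (g_next - g) / dt
                 for g, g_next in zip(measured, measured[1:])]
    corrected.append(measured[-1])  # no forward difference at the end
    return corrected

# A unit-step "rectangular" input dispersed with tau = 2 s, dt = 1 s,
# then corrected back.
g = disperse([1.0] * 10, tau=2.0, dt=1.0)
u = correct_dispersion(g, tau=2.0, dt=1.0)
```

In the dual-flow-rate setting described above, τ would additionally be transitioned linearly between the values measured at the two flow rates rather than held fixed.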

{"title":"External delay and dispersion correction of automatically sampled arterial blood with dual flow rates.","authors":"Benjamin Brender, Lubna Burki, Josefina Jeon, Alvina Ng, Nikta Yussefian, Carme Uribe, Emily Murrell, Isabelle Boileau, Kimberly L Desmond, Lucas Narciso","doi":"10.1088/2057-1976/adae13","DOIUrl":"10.1088/2057-1976/adae13","url":null,"abstract":"<p><p><i>Objective</i>. Arterial sampling for PET imaging often involves continuously measuring the radiotracer activity concentration in blood using an automatic blood sampling system (ABSS). We proposed and validated an external delay and dispersion correction procedure needed when a change in flow rate occurs during data acquisition. We also measured the external dispersion constant of [<sup>11</sup>C]CURB, [<sup>18</sup>F]FDG, [<sup>18</sup>F]FEPPA, and [<sup>18</sup>F]SynVesT-1.<i>Approach</i>. External delay and dispersion constants were measured for the flow rates of 350, 300, 180, and 150 ml h<sup>-1</sup>, using 1-minute-long rectangular inputs (<i>n</i>= 10;<sup>18</sup>F-fluoride in saline). Resulting constants were used to validate the external delay and dispersion corrections (<i>n</i>= 6;<sup>18</sup>F-fluoride in saline; flow rate change: 350 to 150 ml h<sup>-1</sup>and 300 to 180 ml h<sup>-1</sup>); constants were modelled to transition linearly between flow rates. Corrected curves were assessed using the percent area-under-the-curve (AUC) ratio and a modified model selection criterion (MSC). External delay and dispersion constants were measured for various radiotracers using a blood analog (i.e., similar viscoelastic properties).<i>Main results</i>. ABSS outputs were successfully corrected for external delay and dispersion using our proposed method accounting for a change in flow rate. 
AUC ratio reduced from ∼10% for the uncorrected 350-150 ml h<sup>-1</sup>output (∼6% for the 300-180 ml h<sup>-1</sup>) to < 1% after correction when compared to true input (511 keV energy window); approx. 5-fold increase in MSC. Assuming an internal dispersion constant of 5 s, the dispersion constant (internal + external) for [<sup>11</sup>C]CURB, [<sup>18</sup>F]FDG, [<sup>18</sup>F]FEPPA, and [<sup>18</sup>F]SynVesT-1 was 13, 9, 16, and 10 s, respectively.<i>Significance</i>. This study presented an external delay and dispersion correction procedure needed when a change in flow rate occurs during ABSS data acquisition. Additionally, this is the first study to measure the external delay and dispersion constants using a blood analog solution, a suitable alternative to blood when estimating external dispersion.</p>","PeriodicalId":8896,"journal":{"name":"Biomedical Physics & Engineering Express","volume":" ","pages":""},"PeriodicalIF":1.3,"publicationDate":"2025-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143032212","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0