Physical and Engineering Sciences in Medicine

Turning a knob: deep learning-based prediction of torque and arm angles using force myography
Ramandeep Singh, Parikshith Chavakula, Joy Chatterjee, Anuj Saini, Deepak Joshi, Ashish Suri
Pub Date: 2025-12-15. DOI: 10.1007/s13246-025-01685-0

Accurate prediction of human motor actions is essential for developing intuitive, responsive, and adaptive human-machine interaction systems. This study investigates the use of force myography (FMG) to predict knob-turning activity under varying torque values and arm angles. Participants performed knob-turning activities on three spiral springs with different torque values and at four arm angles. A hybrid convolutional neural network-long short-term memory (CNN-LSTM) classification approach was employed to classify the FMG data, predicting torque and arm angle with overall accuracies of 95.87 ± 2.59% and 94.06 ± 2.44%, respectively. The study also shows that the presence of subcutaneous fat did not significantly affect the classification of torque or arm angle ([Formula: see text], Mann-Whitney U test). These findings demonstrate the potential of FMG as an effective method for accurately predicting activities of daily living that involve varying torque and arm angles.
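The abstract above reports a Mann-Whitney U comparison between subject groups. As an illustration of how that test works (a minimal pure-Python sketch using the normal approximation without tie correction; not the authors' analysis code):

```python
import math

def _ranks(values):
    """1-based ranks, with tied values assigned their average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of 1-based ranks i+1..j+1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def mann_whitney_u(x, y):
    """Two-sided Mann-Whitney U test; returns (U, approximate p-value)."""
    n1, n2 = len(x), len(y)
    r = _ranks(list(x) + list(y))
    u1 = sum(r[:n1]) - n1 * (n1 + 1) / 2
    u = min(u1, n1 * n2 - u1)
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (u - n1 * n2 / 2) / sigma
    return u, math.erfc(abs(z) / math.sqrt(2))  # 2 * (1 - Phi(|z|))
```

For small samples an exact-permutation p-value would be preferable; the normal approximation shown here is the common large-sample shortcut.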
Hybrid LiDAR-RGB 3D surface reconstruction for collision avoidance in radiotherapy: a proof-of-concept phantom study
Jingjing M Dougherty, Chris J Beltran
Pub Date: 2025-12-15. DOI: 10.1007/s13246-025-01684-1

To evaluate a proof-of-concept three-dimensional surface reconstruction technique using a hybrid LiDAR and RGB sensor system with an open-source, GPU-accelerated pipeline. The goal is to generate photorealistic digital twins of phantom surfaces for integration into radiotherapy collision avoidance workflows. A portable Intel RealSense sensor was used to acquire synchronized depth and color images. Sensor performance, including depth accuracy, fill rate, and planar root mean square error, was evaluated to determine the practical scan range. A reconstruction pipeline was implemented using the Open3D library with a voxel-based framework, signed distance function integration, ray casting, and color- and depth-based simultaneous localization and mapping for pose tracking. Surface meshes were generated using the Marching Cubes algorithm. Validation involved scanning rectangular box phantoms and an anthropomorphic Rando phantom in a single circular motion. Reconstructed models were registered to CT-derived meshes using manual point picking and iterative closest point alignment. Accuracy was assessed using cloud-to-mesh distance metrics and compared to Poisson surface reconstruction. The highest accuracy was observed within the 0.3 to 2.0 m range. Dimensional differences for the box models were within 5 mm. The Rando phantom showed a registration error of 1.8 mm and 100% theoretical overlap with the CT reference. The global mean signed distance was -0.32 mm with a standard deviation of 3.85 mm. This technique has strong potential to enable accurate, realistic surface modeling using low-cost, open-source tools and supports future integration into radiotherapy digital twin systems.
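The abstract above characterizes sensor depth accuracy via a planar root mean square error. A minimal sketch of that idea (least-squares fit of z = a·x + b·y + c to scanned points, then RMSE of the residuals; an illustrative assumption, not the authors' evaluation code):

```python
import math

def solve3(m, b):
    """Gaussian elimination with partial pivoting for a 3x3 linear system."""
    a = [row[:] + [bi] for row, bi in zip(m, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        for r in range(col + 1, 3):
            f = a[r][col] / a[col][col]
            for c in range(col, 4):
                a[r][c] -= f * a[col][c]
    x = [0.0] * 3
    for r in (2, 1, 0):  # back substitution
        x[r] = (a[r][3] - sum(a[r][c] * x[c] for c in range(r + 1, 3))) / a[r][r]
    return x

def planar_rmse(points):
    """Fit z = a*x + b*y + c by least squares; return ((a, b, c), RMSE)."""
    n = len(points)
    sx = sum(p[0] for p in points); sy = sum(p[1] for p in points)
    sz = sum(p[2] for p in points)
    sxx = sum(p[0] ** 2 for p in points); syy = sum(p[1] ** 2 for p in points)
    sxy = sum(p[0] * p[1] for p in points)
    sxz = sum(p[0] * p[2] for p in points); syz = sum(p[1] * p[2] for p in points)
    # normal equations of the least-squares problem
    a, b, c = solve3([[sxx, sxy, sx], [sxy, syy, sy], [sx, sy, n]], [sxz, syz, sz])
    mse = sum((a * x + b * y + c - z) ** 2 for x, y, z in points) / n
    return (a, b, c), math.sqrt(mse)
```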
Enhancing diagnostic information in abdominal computed tomography (CT) images through optimized image enhancement techniques
S Krishnendu, Maheshwari Biradar
Pub Date: 2025-12-15. DOI: 10.1007/s13246-025-01679-y

In medical imaging, and particularly in computed tomography (CT), improving image quality while preserving diagnostic content is critical for detecting abnormalities such as tumors, inflammatory conditions, and vascular issues. This paper proposes a novel image enhancement pipeline that integrates several enhancement techniques into a sequential workflow designed specifically for abdominal CT scan images. The proposed pipeline combines windowing, contrast-limited adaptive histogram equalization, denoising via non-local means, and unsharp masking to concurrently address several issues affecting image quality. Unlike existing methods, the proposed combinational approach improves contrast, suppresses noise, and sharpens structural detail while balancing enhancement against diagnostic integrity. The workflow was evaluated on datasets from The Cancer Imaging Archive and the Medical Segmentation Decathlon. The proposed approach was assessed using key image quality metrics, yielding an average Peak Signal-to-Noise Ratio of 31.79 dB, Universal Image Quality Index of 0.96, Feature Similarity Index of 0.93, Absolute Mean Brightness Error of 7.12, and Edge Content of 7.78. These results indicate significant improvements in contrast enhancement, noise reduction, and the preservation of structural details. An additional qualitative analysis, using histograms and saliency maps, further confirms the method's effectiveness in enhancing the diagnostic quality of CT images for both clinical and research purposes.
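Two elements of the pipeline described above are easy to make concrete: HU windowing and the PSNR metric. A hedged pure-Python sketch (the default level/width values and 1D image representation are illustrative assumptions, not the paper's settings):

```python
import math

def window_hu(hu_values, level=40.0, width=400.0):
    """Clip HU values to a display window [level - width/2, level + width/2]
    and rescale to [0, 1]."""
    lo, hi = level - width / 2, level + width / 2
    return [min(max((v - lo) / (hi - lo), 0.0), 1.0) for v in hu_values]

def psnr(reference, test, max_val=1.0):
    """Peak signal-to-noise ratio in dB between two equal-length images."""
    mse = sum((r - t) ** 2 for r, t in zip(reference, test)) / len(reference)
    return float("inf") if mse == 0 else 10.0 * math.log10(max_val ** 2 / mse)
```

In practice these operations run on 2D/3D arrays (e.g. NumPy); flat lists are used here only to keep the sketch dependency-free.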
Carbon-ions, protons or photons for head and neck cancer radiotherapy: an in silico planning study
Hyun-Cheol Kang, Shinichiro Mori, Tapesh Bhattacharyya, Wataru Furuichi, Naoki Tohyama, Akihiro Nomoto, Nobuyuki Kanematsu, Hiroaki Ikawa, Masashi Koto, Shigeru Yamada
Pub Date: 2025-12-02. DOI: 10.1007/s13246-025-01677-0

To compare organ-at-risk (OAR) dose and target coverage of carbon-ion, proton, and photon beams for patients with head and neck cancer. Treatment plans for carbon-ion pencil beam scanning (C-PBS; 64 Gy (RBE) in 16 fractions), proton pencil beam scanning (P-PBS), and volumetric modulated arc therapy (VMAT) (70 Gy in 35 fractions for both P-PBS and VMAT) were generated and compared using different dose constraints per treatment modality. Dose metrics (e.g., D95, V20) were analyzed, and statistical significance was assessed with the Wilcoxon signed-rank test. We also investigated how many normal tissues were irradiated above their constraints after the planning goals were achieved (pass rate). C-PBS outperformed P-PBS and VMAT in PTV coverage (p = 0.01 for both), whereas P-PBS and VMAT did not differ substantially from each other (p = 0.35). C-PBS was also superior in limiting dose to the OARs. The pass rates for C-PBS, P-PBS, and VMAT were 94%, 81%, and 69%, respectively. C-PBS demonstrated superior dose conformation to the target volume and normal tissue sparing compared with VMAT and P-PBS, and achieved the highest pass rate in meeting dose constraints.
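Dose metrics such as D95 and V20 are read off the dose-volume histogram. A minimal illustration of how they can be computed from a flat list of voxel doses (common textbook definitions; the exact conventions of the planning systems used in the study may differ):

```python
import math

def d_pct(doses, pct):
    """D_pct: the highest dose level received by at least pct% of the volume."""
    s = sorted(doses, reverse=True)
    k = math.ceil(pct / 100.0 * len(s))
    return s[k - 1]

def v_dose(doses, threshold):
    """V_threshold: percentage of the volume receiving at least `threshold` Gy."""
    return 100.0 * sum(1 for d in doses if d >= threshold) / len(doses)
```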
A comprehensive investigation of the radiation isocentre spatial variability in linear accelerators: implications for commissioning, QA, and clinical protocols
Zhen Hui Chen, Hans Lynggaard Riis, Rohen White, Thomas Milan, Pejman Rowshanfarzad
Pub Date: 2025-12-01 (Epub 2025-09-10). DOI: 10.1007/s13246-025-01637-8. Pages: 1979-1993
An open-source tool for converting 3D mesh volumes into synthetic DICOM CT images for medical physics research
Michael John James Douglass
Pub Date: 2025-12-01 (Epub 2025-07-24). DOI: 10.1007/s13246-025-01599-x. Pages: 1525-1538

Access to medical imaging data is crucial for research, training, and treatment planning in medical imaging and radiation therapy. However, ethical constraints and time-consuming approval processes often limit the availability of such data for research. This study introduces DICOMator, an open-source Blender add-on designed to address this challenge by enabling the creation of synthetic CT datasets from 3D mesh objects. DICOMator aims to provide researchers and medical professionals with a flexible tool for generating customisable and semi-realistic synthetic CT data, including 4D CT datasets, from user-defined static or animated 3D mesh objects. The add-on leverages Blender's powerful 3D modelling environment, utilising its mesh manipulation, animation, and rendering capabilities to create synthetic data ranging from simple phantoms to accurate anatomical models. DICOMator voxelises 3D mesh objects, assigns appropriate Hounsfield Unit values, and applies simulations of common CT imaging artefacts, including detector noise, metal artefacts, and partial volume effects. By incorporating these artefacts, DICOMator produces synthetic CT data that more closely resembles real CT scans, bridging the gap between 3D modelling and medical imaging. The resulting data is exported in DICOM format, ensuring compatibility with existing medical imaging workflows and treatment planning systems. To demonstrate DICOMator's capabilities, three synthetic CT datasets were created: a simple lung phantom illustrating basic functionality; a more realistic cranial CT scan demonstrating dose calculation and CT image registration on synthetic data in treatment planning systems; and a thoracic 4D CT scan with multiple breathing phases demonstrating the dynamic imaging capabilities and quantitative accuracy of the synthetic datasets. These examples highlight DICOMator's versatility in generating diverse and complex synthetic CT data suitable for various research and educational purposes, from basic quality assurance to advanced motion management studies. DICOMator offers a promising solution to the limitations of patient CT data availability in medical physics research. By providing a user-friendly interface for creating customisable synthetic datasets from 3D meshes, it has the potential to accelerate research, validate treatment planning tools such as deformable image registration, and enhance educational resources in radiation oncology medical physics. Future developments may include incorporation of other imaging modalities, such as MRI or PET, further expanding its utility in multi-modal imaging research.
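The voxelisation step described above assigns Hounsfield Unit values per material and overlays detector noise. A toy sketch of that mapping (the HU table and Gaussian noise model here are illustrative assumptions, not DICOMator's actual implementation or values):

```python
import random

# Nominal HU values per material label (illustrative only)
HU_TABLE = {"air": -1000, "lung": -700, "soft_tissue": 40, "bone": 700}

def synthesize_slice(material_grid, noise_sigma=20.0, seed=0):
    """Map a 2D grid of material labels to HU values and add Gaussian
    detector noise with standard deviation `noise_sigma`."""
    rng = random.Random(seed)  # seeded for reproducibility
    return [[HU_TABLE[m] + rng.gauss(0.0, noise_sigma) for m in row]
            for row in material_grid]
```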
Dosimetric evaluation of synthetic kilo-voltage CT images generated from megavoltage CT for head and neck tomotherapy using a conditional GAN network
Yazdan Choghazardi, Mohamad Bagher Tavakoli, Iraj Abedi, Mahnaz Roayaei, Simin Hemati, Ahmad Shanei
Pub Date: 2025-12-01. DOI: 10.1007/s13246-025-01603-4. Pages: 1589-1600

The lower image contrast of megavoltage computed tomography (MVCT) compared with kilovoltage computed tomography (kVCT) can inhibit accurate dosimetric assessment. This study proposes a deep learning approach, specifically the pix2pix network, to generate high-quality synthetic kVCT (skVCT) images from MVCT data. The model was trained on a dataset of 25 paired patient images and evaluated on a test set of 15 paired images. We performed visual inspections to assess the quality of the generated skVCT images and calculated the peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM). Dosimetric equivalence was evaluated by comparing the gamma pass rates of treatment plans derived from skVCT and kVCT images. Results showed that skVCT images exhibited significantly higher quality than MVCT images, with PSNR and SSIM values of 31.9 ± 1.1 dB and 94.8% ± 1.3%, respectively, compared to 26.8 ± 1.7 dB and 89.5% ± 1.5% for MVCT-to-kVCT comparisons. Furthermore, treatment plans based on skVCT images achieved excellent gamma pass rates of 99.78 ± 0.14% and 99.82 ± 0.20% for 2 mm/2% and 3 mm/3% criteria, respectively, comparable to those obtained from kVCT-based plans (99.70 ± 0.31% and 99.79 ± 1.32%). This study demonstrates the potential of pix2pix models for generating high-quality skVCT images, which could significantly enhance Adaptive Radiation Therapy (ART).
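Gamma pass rates like those above combine a dose-difference tolerance (e.g. 3%) with a distance-to-agreement tolerance (e.g. 3 mm). A simplified 1D global-gamma sketch (real analyses are 2D/3D with sub-voxel interpolation; this is illustrative only):

```python
def gamma_pass_rate(ref, ev, spacing_mm=1.0, dose_tol_pct=3.0, dta_mm=3.0):
    """Percentage of reference points with gamma <= 1, using a global
    dose tolerance normalised to the reference maximum."""
    dd_tol = dose_tol_pct / 100.0 * max(ref)
    passed = 0
    for i, dr in enumerate(ref):
        # squared gamma: dose-difference term + distance-to-agreement term
        gamma_sq = min(
            ((de - dr) / dd_tol) ** 2 + (((j - i) * spacing_mm) / dta_mm) ** 2
            for j, de in enumerate(ev)
        )
        if gamma_sq <= 1.0:
            passed += 1
    return 100.0 * passed / len(ref)
```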
Integrating frequency and dynamic characteristics of EMG signals as a new inter-muscular coordination feature
Shaghayegh Hassanzadeh Khanmiri, Peyvand Ghaderyan, Alireza Hashemi Oskouei
Pub Date: 2025-12-01. DOI: 10.1007/s13246-025-01620-3. Pages: 1775-1789

The impairment of inter-muscular coordination and changes in frequency components are two major pathological symptoms associated with knee injuries; however, an effective method to simultaneously quantify these changes has yet to be developed. Moreover, a reliable automated system for identifying knee injuries is needed to eliminate human error and enhance reliability and consistency. Hence, this study introduces two novel inter-muscular coordination features, Dynamic Time Warping (DTW) and Dynamic Frequency Warping (DFW), which integrate time and frequency characteristics with a dynamic matching procedure. A support vector machine classifier and two types of dynamic neural network classifiers were used to evaluate the effectiveness of the proposed features. The system was tested on a public dataset comprising five channels of electromyogram (EMG) signals from 33 uninjured subjects and 28 individuals with various types of knee injuries. The experimental results demonstrated the superiority of the DFW feature combined with a cascade forward neural network, achieving accuracy rates of 92.03% for detection and 94.42% for categorization of different types of knee injuries. The reliability of the proposed features in identifying knee injuries was confirmed using both inter-limb and intra-limb EMG channels. This highlights the potential to offer a trade-off between high detection performance and cost-effective procedures by utilizing fewer channels.
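Dynamic Time Warping, one of the two proposed features, is a standard dynamic-programming alignment between sequences. A minimal sketch (absolute-difference local cost; the authors' exact cost function and normalisation may differ):

```python
def dtw_distance(a, b):
    """Dynamic Time Warping distance between two 1D sequences."""
    inf = float("inf")
    n, m = len(a), len(b)
    # d[i][j]: cost of aligning a[:i] with b[:j]
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend by a match, an insertion, or a deletion
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]
```

Because the warping path may stretch either sequence, identical shapes played at different speeds still align with zero cost.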
Comparative study of multi-headed and baseline deep learning models for ADHD classification from EEG signals
Lamiaa A Amar, Ahmed M Otifi, Shimaa A Mohamed
Pub Date: 2025-12-01 (Epub 2025-08-26). DOI: 10.1007/s13246-025-01609-y. Pages: 1657-1665

The prevalence of Attention-Deficit/Hyperactivity Disorder among children is rising, emphasizing the need for early and accurate diagnostic methods to address associated academic and behavioral challenges. Electroencephalography-based analysis has emerged as a promising noninvasive approach for detecting Attention-Deficit/Hyperactivity Disorder; however, utilizing the full range of electroencephalography channels often results in high computational complexity and an increased risk of model overfitting. This study presents a comparative investigation between a proposed multi-headed deep learning framework and a traditional baseline single-model approach for classifying Attention-Deficit/Hyperactivity Disorder using electroencephalography signals. Electroencephalography data were collected from 79 participants (42 healthy adults and 37 diagnosed with Attention-Deficit/Hyperactivity Disorder) across four cognitive states: resting with eyes open, resting with eyes closed, performing cognitive tasks, and listening to omniarmonic sounds. To reduce complexity, signals from only five strategically selected electroencephalography channels were used. The multi-headed approach employed parallel deep learning branches, comprising combinations of Bidirectional Long Short-Term Memory, Long Short-Term Memory, and Gated Recurrent Unit architectures, to capture inter-channel relationships and extract richer temporal features. Comparative analysis revealed that the combination of Long Short-Term Memory and Bidirectional Long Short-Term Memory within the multi-headed framework achieved the highest classification accuracy of 89.87%, significantly outperforming all baseline configurations. These results demonstrate the effectiveness of integrating multiple deep learning architectures and highlight the potential of multi-headed models for enhancing electroencephalography-based Attention-Deficit/Hyperactivity Disorder diagnosis.
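The multi-headed idea above, one parallel branch per EEG channel with the resulting features concatenated before classification, can be caricatured without a deep-learning framework. Here each "head" is a stand-in that emits simple summary statistics; this is purely illustrative, since the paper's heads are BiLSTM/LSTM/GRU networks:

```python
def head_features(channel):
    """Stand-in for one 'head': simple summary features for one EEG channel."""
    n = len(channel)
    mean = sum(channel) / n
    var = sum((v - mean) ** 2 for v in channel) / n
    return [mean, var]

def multi_head_features(channels):
    """Run one head per channel 'in parallel' and concatenate the features,
    mirroring the multi-headed fusion stage before the final classifier."""
    feats = []
    for ch in channels:
        feats.extend(head_features(ch))
    return feats
```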