
Latest publications: IEEE Journal of Translational Engineering in Health and Medicine (JTEHM)

Applying Machine Learning and Point-Set Registration to Automatically Measure the Severity of Spinal Curvature on Radiographs
IF 3.4 | Medicine (CAS Tier 3) | Q2 ENGINEERING, BIOMEDICAL | Pub Date: 2023-11-14 | DOI: 10.1109/JTEHM.2023.3332618
Jason Wong;Marek Reformat;Edmond Lou
Objective: Measuring the severity of the lateral spinal curvature, or Cobb angle, is critical for monitoring and making treatment decisions for children with adolescent idiopathic scoliosis (AIS). However, manual measurement is time-consuming and subject to human error. Therefore, clinicians seek an automated measurement method to streamline workflow and improve accuracy. This paper reports on a novel machine learning algorithm built from cascaded convolutional neural networks (CNNs) to measure the Cobb angle on spinal radiographs automatically. Methods: The developed method consisted of spinal column segmentation using a CNN, vertebra localization and segmentation using iterative vertebra body location coupled with another CNN, point-set registration to correct the vertebra segmentations, and Cobb angle measurement using the final segmentations. Measurement performance was evaluated with the circular mean absolute error (CMAE) and the percentage of measurements within clinical acceptance (≤5°) between automatic and manual measurements. Analysis was separated by curve severity, using independent-samples Student's t-tests to identify any potential systematic biases. Results: The method detected 346 of the 352 manually measured Cobb angles (98%), with a CMAE of 2.8° and 91% of measurements within the 5° clinical acceptance. No statistically significant differences were found between the CMAEs of the mild (<25°), moderate (25°–45°), and severe (≥45°) groups. The average measurement time per radiograph was 17.7 ± 10.2 s, improving on the estimated average of 30 s it takes an experienced rater to measure. Additionally, the algorithm outputs its segmentations along with the measurement, allowing clinicians to interpret the results. Discussion/Conclusion: The developed method measured Cobb angles on radiographs automatically with high accuracy, quick measurement time, and interpretability, suggesting clinical feasibility.
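The abstract does not spell out how the Cobb angle or the CMAE is computed. As a rough illustration (not the paper's implementation), the Cobb angle can be taken as the angle between the most-tilted endplate lines, and a circular MAE wraps angular differences so that, e.g., 10° and 350° differ by 20°; the helper names below are hypothetical:

```python
import math

def cobb_angle(slope_upper, slope_lower):
    """Angle (degrees) between two endplate lines given their slopes.

    Hypothetical helper: the paper derives endplates from CNN segmentations;
    here the two endplate slopes are assumed to be already known.
    """
    theta_upper = math.degrees(math.atan(slope_upper))
    theta_lower = math.degrees(math.atan(slope_lower))
    return abs(theta_upper - theta_lower)

def circular_mae(auto_deg, manual_deg):
    """Circular mean absolute error (degrees) between paired angle lists."""
    errs = []
    for a, m in zip(auto_deg, manual_deg):
        d = abs(a - m) % 360.0          # raw difference, wrapped to [0, 360)
        errs.append(min(d, 360.0 - d))  # shortest way around the circle
    return sum(errs) / len(errs)
```

A horizontal upper endplate and a 45° lower endplate, for example, give a 45° Cobb angle under this definition.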
IEEE Journal of Translational Engineering in Health and Medicine (JTEHM), vol. 12, pp. 151–161. Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10318103
Citations: 0
Registration Sanity Check for AR-guided Surgical Interventions: Experience From Head and Face Surgery
IF 3.4 | Medicine (CAS Tier 3) | Q2 ENGINEERING, BIOMEDICAL | Pub Date: 2023-11-13 | DOI: 10.1109/JTEHM.2023.3332088
Sara Condino;Fabrizio Cutolo;Marina Carbone;Laura Cercenelli;Giovanni Badiali;Nicola Montemurro;Vincenzo Ferrari
Achieving and maintaining proper image registration accuracy is an open challenge of image-guided surgery. This work explores and assesses the efficacy of a registration sanity check method for augmented reality-guided navigation (AR-RSC), based on the visual inspection of virtual 3D models of landmarks. We analyze the sensitivity and specificity of AR-RSC by recruiting 36 subjects to assess the registration accuracy of a set of 114 AR images generated from camera images acquired during an AR-guided orthognathic intervention. Translational or rotational errors of known magnitude, up to ±1.5 mm/±15.5°, were artificially added to the image set in order to simulate different registration errors. This study analyses the performance of AR-RSC when varying (1) the virtual models selected for misalignment evaluation (e.g., the models of brackets, incisor teeth, and gingival margins in our experiment), (2) the type (translation/rotation) of registration error, and (3) the user's level of experience with AR technologies. Results show that: 1) the sensitivity and specificity of AR-RSC depend on the virtual models (globally, a median true positive rate of up to 79.2% was reached with brackets, and a median true negative rate of up to 64.3% with incisor teeth), 2) some error components are more difficult to identify visually, and 3) the level of user experience does not affect the method.
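As a quick refresher on the evaluation metrics used here (not code from the paper), the true positive and true negative rates of a sanity check can be computed from per-image rater responses; `error_present` and `flagged` are hypothetical inputs marking whether a misalignment was injected and whether the rater reported one:

```python
def sensitivity_specificity(error_present, flagged):
    """Sensitivity (true positive rate) and specificity (true negative rate)
    for a registration sanity check: error_present[i] is True if a known
    misalignment was artificially added to image i; flagged[i] is True if
    the rater judged the AR overlay to be misaligned."""
    tp = sum(1 for e, f in zip(error_present, flagged) if e and f)
    fn = sum(1 for e, f in zip(error_present, flagged) if e and not f)
    tn = sum(1 for e, f in zip(error_present, flagged) if not e and not f)
    fp = sum(1 for e, f in zip(error_present, flagged) if not e and f)
    sens = tp / (tp + fn) if (tp + fn) else float("nan")
    spec = tn / (tn + fp) if (tn + fp) else float("nan")
    return sens, spec
```

The study reports medians of such rates across raters and virtual models (e.g., 79.2% TPR with brackets).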
IEEE Journal of Translational Engineering in Health and Medicine (JTEHM), vol. 12, pp. 258–267. Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10315237
Citations: 0
Letter to the Editor: “How Can Biomedical Engineers Help Empower Individuals With Intellectual Disabilities? The Potential Benefits and Challenges of AI Technologies to Support Inclusivity and Transform Lives”
IF 3.4 | Medicine (CAS Tier 3) | Q2 ENGINEERING, BIOMEDICAL | Pub Date: 2023-11-09 | DOI: 10.1109/JTEHM.2023.3331977
Alessandro Di Nuovo
The rapid advancement of Artificial Intelligence (AI) is transforming healthcare and daily life, offering great opportunities but also posing ethical and societal challenges. To ensure AI benefits all individuals, including those with intellectual disabilities, the focus should be on adaptive technology that can adjust to the unique needs of the user. Biomedical engineers have an interdisciplinary background that equips them to lead multidisciplinary teams in the development of human-centered AI solutions. These solutions can personalize learning, enhance communication, and improve accessibility for individuals with intellectual disabilities. Furthermore, AI can aid healthcare research, diagnostics, and therapy. The ethical use of AI in healthcare and the collaboration of AI with human expertise must be emphasized. Public funding for inclusive research is encouraged, promoting equity and economic growth while empowering those with intellectual disabilities in society.
IEEE Journal of Translational Engineering in Health and Medicine (JTEHM), vol. 12, pp. 256–257. Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10314515
Citations: 0
Automated, Vision-Based Goniometry and Range of Motion Calculation in Individuals With Suspected Ehlers-Danlos Syndromes/Generalized Hypermobility Spectrum Disorders: A Comparison of Pose-Estimation Libraries to Goniometric Measurements
IF 3.4 | Medicine (CAS Tier 3) | Q2 ENGINEERING, BIOMEDICAL | Pub Date: 2023-11-06 | DOI: 10.1109/JTEHM.2023.3327691
Andrea Sabo;Nimish Mittal;Amol Deshpande;Hance Clarke;Babak Taati
Generalized joint hypermobility (GJH) often leads clinicians to suspect a diagnosis of Ehlers-Danlos Syndrome (EDS), but it can be difficult to assess objectively. Video-based goniometry has been proposed to objectively estimate joint range of motion in hyperextended joints. As part of an exam of joint hypermobility at a specialized EDS clinic, a mobile phone was used to record short videos of 97 adults (89 female, 35.0 ± 9.9 years old) undergoing assessment of the elbows, knees, shoulders, ankles, and fifth fingers. Five body keypoint pose-estimation libraries (AlphaPose, Detectron, MediaPipe-Body, MoveNet – Thunder, OpenPose) and two hand keypoint pose-estimation libraries (AlphaPose, MediaPipe-Hands) were used to geometrically calculate the maximum angle of hyperextension or hyperflexion of each joint. A custom domain-specific model with a MobileNet-v2 backbone, finetuned on data collected as part of this study, was also evaluated for the fifth finger movement. Spearman's correlation was used to analyze the angles calculated from the tracked joint positions, the angles calculated from manually annotated keypoints, and the angles measured using a goniometer. Moderate correlations between the angles estimated using pose-tracked keypoints and the goniometer measurements were identified for the elbow (rho = .722; Detectron), knee (rho = .608; MoveNet – Thunder), shoulder (rho = .632; MoveNet – Thunder), and fifth finger (rho = .786; custom model) movements.
This work evaluates several pose-estimation models as part of a vision-based system for estimating joint angles in individuals with suspected joint hypermobility. Future applications of the proposed system could facilitate objective assessment and screening of individuals referred to specialized EDS clinics.
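The geometric angle calculation and Spearman's correlation that this comparison rests on can be sketched as follows. This is an illustrative sketch, not the authors' code: the joint angle is taken at the middle of three 2D keypoints, and the simple Spearman formula below ignores tie correction:

```python
import math

def joint_angle(a, b, c):
    """Angle at keypoint b (degrees) formed by 2D keypoints a-b-c,
    e.g. shoulder-elbow-wrist for the elbow."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    cos_t = dot / (math.hypot(*v1) * math.hypot(*v2))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_t))))

def spearman_rho(x, y):
    """Spearman rank correlation via 1 - 6*sum(d^2)/(n(n^2-1)); assumes no ties."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = float(rank)
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1.0 - 6.0 * d2 / (n * (n * n - 1))
```

In practice, the three keypoints per joint would come from one of the pose-estimation libraries above, and `spearman_rho` would compare the per-participant maximum angles against the goniometer readings (`scipy.stats.spearmanr` handles ties properly).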
IEEE Journal of Translational Engineering in Health and Medicine (JTEHM), vol. 12, pp. 140–150. Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10309843
Citations: 0
TTN: Topological Transformer Network for Automated Coronary Artery Branch Labeling in Cardiac CT Angiography
IF 3.4 | Medicine (CAS Tier 3) | Q2 ENGINEERING, BIOMEDICAL | Pub Date: 2023-11-01 | DOI: 10.1109/JTEHM.2023.3329031
Yuyang Zhang;Gongning Luo;Wei Wang;Shaodong Cao;Suyu Dong;Daren Yu;Xiaoyun Wang;Kuanquan Wang
Objective: Existing methods for automated coronary artery branch labeling in cardiac CT angiography face two limitations: 1) an inability to model the overall correlation of branches, since differences between branches cannot be captured directly; and 2) a serious class imbalance between main and side branches. Methods and procedures: Inspired by the application of the Transformer to sequence data, we propose a topological Transformer network (TTN), which approaches vessel branch labeling from a novel perspective: sequence labeling learning. TTN detects differences between branches by establishing their overall correlation. A topological encoding that represents the positions of vessel segments in the artery tree is proposed to assist the model in classifying branches. Also, a segment-depth loss is introduced to address the class imbalance between main and side branches. Results: On a dataset of 325 CCTA scans, our method obtains the best overall result on all branches, the best result on side branches, and a competitive result on main branches. Conclusion: TTN addresses both limitations of existing methods, achieving the best result on the coronary artery branch labeling task. It is the first Transformer-based vessel branch labeling method and is notably different from previous methods.
IEEE Journal of Translational Engineering in Health and Medicine (JTEHM), vol. 12, pp. 129–139. Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10304172
Citations: 0
Wearable Accelerometer and Gyroscope Sensors for Estimating the Severity of Essential Tremor
IF 3.4 | Medicine (CAS Tier 3) | Q2 ENGINEERING, BIOMEDICAL | Pub Date: 2023-11-01 | DOI: 10.1109/JTEHM.2023.3329344
Sheik Mohammed Ali;Sridhar Poosapadi Arjunan;James Peter;Laura Perju-Dumbrava;Catherine Ding;Michael Eller;Sanjay Raghav;Peter Kempster;Mohammod Abdul Motin;P. J. Radcliffe;Dinesh Kant Kumar
Background: Several validated clinical scales measure the severity of essential tremor (ET). Their assessments are subjective and can depend on familiarity and training with the scoring systems. Method: We propose multi-modal sensing using a wearable inertial measurement unit to estimate scores on the Fahn-Tolosa-Marin tremor rating scale (FTM) and determine the classification accuracy within the tremor type. Seventeen ET participants and 18 healthy controls were recruited for the study. Two movement disorder neurologists, blinded to prior clinical information, viewed video recordings and scored the FTM. Participants drew a guided Archimedes spiral while wearing an inertial measurement unit placed at the mid-point between the lateral epicondyle of the humerus and the anatomical snuff box. Acceleration and gyroscope recordings were analyzed. The ratio of the power spectral density between the 0.5–4 Hz and 4–12 Hz frequency bands, and the sum of the power spectral density over the entire 2–74 Hz spectrum, were computed for both accelerometer and gyroscope data. FTM scores were estimated using a regression model, and SVM classification was validated using the leave-one-out method. Results: Regression analysis showed a moderate to good correlation when individual features were used, while the correlation was high (r² = 0.818) when suitable gyroscope and accelerometer features were combined. The accuracy of two-class classification of the combined features using SVM was 91.42%, while for four-class classification it was 68.57%. Conclusion: Potential applications of this novel wearable sensing method using a wearable Inertial Measurement Unit (IMU) include monitoring of ET and clinical trials of new treatments for the disorder.
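The band-power-ratio feature described above can be sketched with a plain periodogram PSD estimate. The paper's exact PSD estimator and windowing are not specified, so the periodogram here is an assumption; a Welch estimate (e.g., `scipy.signal.welch`) would be a common alternative:

```python
import numpy as np

def band_power_ratio(signal, fs, low_band=(0.5, 4.0), high_band=(4.0, 12.0)):
    """Ratio of PSD power in low_band to power in high_band (Hz), using a
    simple periodogram. Mirrors the tremor feature described in the abstract;
    the estimator choice is an assumption, not the authors' implementation."""
    n = len(signal)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / (fs * n)  # periodogram estimate

    def band_power(lo, hi):
        mask = (freqs >= lo) & (freqs < hi)
        return psd[mask].sum()

    return band_power(*low_band) / band_power(*high_band)
```

A dominant oscillation below 4 Hz drives the ratio up; a tremor concentrated in the 4–12 Hz band drives it toward zero, which is what makes it a usable severity feature.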
IEEE Journal of Translational Engineering in Health and Medicine (JTEHM), vol. 12, pp. 194–203. Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10304233
引用次数: 0
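The band-power features named in the abstract above (the ratio of power in the 0.5–4 Hz band to the 4–12 Hz band, and the total power over 2–74 Hz) can be sketched as follows. This is a minimal illustration only: the sampling rate, window length, and the synthetic signal are assumptions, not values taken from the paper.

```python
import numpy as np
from scipy.signal import welch

FS = 148.0  # assumed sampling rate (Hz); not stated in the abstract

def tremor_band_features(signal, fs=FS):
    """Power-spectral features in the spirit of the abstract:
    ratio of 0.5-4 Hz to 4-12 Hz power, and total power over 2-74 Hz."""
    f, psd = welch(signal, fs=fs, nperseg=min(len(signal), 1024))
    df = f[1] - f[0]

    def band_power(lo, hi):
        mask = (f >= lo) & (f <= hi)
        return psd[mask].sum() * df  # rectangle-rule integral of the PSD

    ratio = band_power(0.5, 4.0) / band_power(4.0, 12.0)
    total = band_power(2.0, 74.0)
    return ratio, total

# synthetic gyro trace: slow voluntary movement plus a 5 Hz tremor component
t = np.arange(0, 10, 1 / FS)
sig = 0.2 * np.sin(2 * np.pi * 0.8 * t) + 1.0 * np.sin(2 * np.pi * 5.0 * t)
ratio, total = tremor_band_features(sig)  # tremor-dominated: ratio well below 1
```

In the study, features of this kind from both accelerometer and gyroscope channels feed a regression model for the FTM score and an SVM classifier; the sketch covers only the feature-extraction step.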
An Interpretable Neonatal Lung Ultrasound Feature Extraction and Lung Sliding Detection System Using Object Detectors 一种可解释的新生儿肺部超声特征提取与肺滑动检测系统
IF 3.4 3区 医学 Q2 ENGINEERING, BIOMEDICAL Pub Date : 2023-10-25 DOI: 10.1109/JTEHM.2023.3327424
Rodina Bassiouny;Adel Mohamed;Karthi Umapathy;Naimul Khan
The objective of this study was to develop an interpretable system that could detect specific lung features in neonates. A challenging aspect of this work was that normal lungs can show the same visual features as lungs with Pneumothorax (PTX). M-mode imaging is typically necessary to differentiate between the two cases, but its generation in clinics is time-consuming and requires expertise for interpretation, which remains limited. Therefore, our system automates M-mode generation by extracting Regions of Interest (ROIs) without a human in the loop. Object detection models, namely faster Region Based Convolutional Neural Network (fRCNN) and RetinaNet, were employed to detect seven common Lung Ultrasound (LUS) features. fRCNN predictions were then stored and further used to generate M-modes. Beyond static feature extraction, we used a Hough transform based statistical method to detect “lung sliding” in these M-modes. Results showed that fRCNN achieved a greater mean Average Precision (mAP) of 86.57% (Intersection-over-Union (IoU) = 0.2) than RetinaNet, which reached only 61.15%. The calculated accuracy of the generated ROIs was 97.59% for Normal videos and 96.37% for PTX videos. Using this system, we successfully classified 5 PTX and 6 Normal video cases with 100% accuracy. Automating the detection of seven prominent LUS features addresses the time-consuming manual evaluation of lung ultrasound in a fast-paced environment. Clinical impact: Our research work provides a significant clinical impact, as it offers a more accurate and efficient method for diagnosing lung diseases in neonates.
{"title":"An Interpretable Neonatal Lung Ultrasound Feature Extraction and Lung Sliding Detection System Using Object Detectors","authors":"Rodina Bassiouny;Adel Mohamed;Karthi Umapathy;Naimul Khan","doi":"10.1109/JTEHM.2023.3327424","DOIUrl":"10.1109/JTEHM.2023.3327424","url":null,"abstract":"The objective of this study was to develop an interpretable system that could detect specific lung features in neonates. A challenging aspect of this work was that normal lungs showed the same visual features (as that of Pneumothorax (PTX)). M-mode is typically necessary to differentiate between the two cases, but its generation in clinics is time-consuming and requires expertise for interpretation, which remains limited. Therefore, our system automates M-mode generation by extracting Regions of Interest (ROIs) without human in the loop. Object detection models such as faster Region Based Convolutional Neural Network (fRCNN) and RetinaNet models were employed to detect seven common Lung Ultrasound (LUS) features. fRCNN predictions were then stored and further used to generate M-modes. Beyond static feature extraction, we used a Hough transform based statistical method to detect “lung sliding” in these M-modes. Results showed that fRCNN achieved a greater mean Average Precision (mAP) of 86.57% (Intersection-over-Union (IoU) = 0.2) than RetinaNet, which only displayed a mAP of 61.15%. The calculated accuracy for the generated RoIs was 97.59% for Normal videos and 96.37% for PTX videos. Using this system, we successfully classified 5 PTX and 6 Normal video cases with 100% accuracy. Automating the process of detecting seven prominent LUS features addresses the time-consuming manual evaluation of Lung ultrasound in a fast paced environment. 
Clinical impact: Our research work provides a significant clinical impact as it provides a more accurate and efficient method for diagnosing lung diseases in neonates.","PeriodicalId":54255,"journal":{"name":"IEEE Journal of Translational Engineering in Health and Medicine-Jtehm","volume":"12 ","pages":"119-128"},"PeriodicalIF":3.4,"publicationDate":"2023-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10295523","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134981016","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
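The IoU = 0.2 matching criterion used above to score detections can be made concrete with a minimal sketch; the box coordinates are invented for illustration.

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two axis-aligned boxes given as
    (x1, y1, x2, y2) corner coordinates."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# a predicted ROI counts as a true positive when IoU >= 0.2
pred, truth = (10, 10, 50, 50), (30, 10, 70, 50)
score = iou(pred, truth)  # intersection 20x40=800, union 2400 -> 1/3
```

A relatively permissive threshold such as 0.2 credits detections that localize a feature only roughly, which matters when the downstream use is ROI extraction for M-mode generation rather than tight localization.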
Comparative Assessment of Physiological Responses to Emotional Elicitation by Auditory and Visual Stimuli 听觉与视觉刺激诱发情绪的生理反应之比较评估
IF 3.4 3区 医学 Q2 ENGINEERING, BIOMEDICAL Pub Date : 2023-10-12 DOI: 10.1109/JTEHM.2023.3324249
Edoardo M. Polo;Andrea Farabbi;Maximiliano Mollura;Alessia Paglialonga;Luca Mainardi;Riccardo Barbieri
The study of emotions through the analysis of induced physiological responses has gained increasing interest over the past decades. Emotion-related studies usually employ films or video clips, but these stimuli do not allow one to properly separate and assess the emotional content conveyed by sight and by hearing in terms of physiological responses. In this study we devised an experimental protocol to elicit emotions by using, separately and jointly, pictures and sounds from the widely used International Affective Picture System and International Affective Digitized Sounds databases. We processed the galvanic skin response, electrocardiogram, blood volume pulse, pupillary signal, and electroencephalogram from 21 subjects to extract both autonomic and central nervous system indices and assess physiological responses to three types of stimulation: auditory, visual, and auditory/visual. Results show a higher galvanic skin response to sounds than to images. The electrocardiogram and blood volume pulse show different trends between auditory and visual stimuli. The electroencephalographic signal reveals that subjects paid greater attention when listening to sounds than when watching images. In conclusion, these results suggest that emotional responses increase during auditory stimulation at both the central and peripheral levels, demonstrating the importance of sounds for emotion recognition experiments and opening the possibility of extending auditory stimuli to other fields of psychophysiology. Clinical and Translational Impact Statement: These findings corroborate the importance of auditory stimuli in eliciting emotions, supporting their use in studying affective responses, e.g., mood disorder diagnosis, human-machine interaction, and emotional perception in pathology.
{"title":"Comparative Assessment of Physiological Responses to Emotional Elicitation by Auditory and Visual Stimuli","authors":"Edoardo M. Polo;Andrea Farabbi;Maximiliano Mollura;Alessia Paglialonga;Luca Mainardi;Riccardo Barbieri","doi":"10.1109/JTEHM.2023.3324249","DOIUrl":"10.1109/JTEHM.2023.3324249","url":null,"abstract":"The study of emotions through the analysis of the induced physiological responses gained increasing interest in the past decades. Emotion-related studies usually employ films or video clips, but these stimuli do not give the possibility to properly separate and assess the emotional content provided by sight or hearing in terms of physiological responses. In this study we have devised an experimental protocol to elicit emotions by using, separately and jointly, pictures and sounds from the widely used International Affective Pictures System and International Affective Digital Sounds databases. We processed galvanic skin response, electrocardiogram, blood volume pulse, pupillary signal and electroencephalogram from 21 subjects to extract both autonomic and central nervous system indices to assess physiological responses in relation to three types of stimulation: auditory, visual, and auditory/visual. Results show a higher galvanic skin response to sounds compared to images. Electrocardiogram and blood volume pulse show different trends between auditory and visual stimuli. The electroencephalographic signal reveals a greater attention paid by the subjects when listening to sounds compared to watching images. In conclusion, these results suggest that emotional responses increase during auditory stimulation at both central and peripheral levels, demonstrating the importance of sounds for emotion recognition experiments and also opening the possibility toward the extension of auditory stimuli in other fields of psychophysiology. 
Clinical and Translational Impact Statement- These findings corroborate auditory stimuli’s importance in eliciting emotions, supporting their use in studying affective responses, e.g., mood disorder diagnosis, human-machine interaction, and emotional perception in pathology.","PeriodicalId":54255,"journal":{"name":"IEEE Journal of Translational Engineering in Health and Medicine-Jtehm","volume":"12 ","pages":"171-181"},"PeriodicalIF":3.4,"publicationDate":"2023-10-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10283859","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136303851","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
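As an illustration of the kind of paired comparison such a protocol enables, the sketch below runs a paired t-test on hypothetical per-subject galvanic skin response amplitudes for the auditory and visual conditions. The numbers are synthetic, not the study's data, and the abstract does not state that this exact test was used.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# hypothetical per-subject mean GSR responses (microsiemens) for 21 subjects
# under auditory stimulation, and a lower response under visual stimulation
auditory = rng.normal(loc=0.55, scale=0.10, size=21)
visual = auditory - rng.normal(loc=0.15, scale=0.05, size=21)

# paired test: the same subjects are measured under both conditions
t_stat, p_value = stats.ttest_rel(auditory, visual)
higher_for_sound = auditory.mean() > visual.mean()
```

A paired design is the natural choice here because each subject serves as their own control, removing between-subject baseline differences in skin conductance.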
Overground Walking With a Transparent Exoskeleton Shows Changes in Spatiotemporal Gait Parameters 透明外骨骼的地面行走时空步态参数变化研究
IF 3.4 3区 医学 Q2 ENGINEERING, BIOMEDICAL Pub Date : 2023-10-10 DOI: 10.1109/JTEHM.2023.3323381
Rafhael M. Andrade;Stefano Sapienza;Abolfazl Mohebbi;Eric E. Fabara;Paolo Bonato
Lower-limb gait training (GT) exoskeletons have been successfully used in rehabilitation programs to overcome the burden of locomotor impairment. However, providing suitable net interaction torques to assist patient movements is still a challenge. Previous transparent operation approaches have been tested in treadmill-based GT exoskeletons to improve user-robot interaction. However, it is not yet clear how a transparent lower-limb GT system affects the user’s gait kinematics during overground walking, which, unlike treadmill-based walking, requires the subjects’ active participation to maintain stability. In this study, we implemented a transparent operation strategy on the ExoRoboWalker, an overground GT exoskeleton, to investigate its effect on the user’s gait. The approach employs a feedback zero-torque controller with feedforward compensation for the exoskeleton’s dynamics and the actuators’ impedance. We analyzed data from five healthy subjects walking overground with the exoskeleton in transparent mode (ExoTransp) and non-transparent mode (ExoOff), and walking without the exoskeleton (NoExo). The transparent controller reduced the user-robot interaction torque and improved the user’s gait kinematics relative to ExoOff. No significant difference in stride length was observed between ExoTransp and NoExo (p = 0.129). However, the subjects showed a significant difference in cadence between ExoTransp (50.9 ± 1.1 steps/min) and NoExo (93.7 ± 8.7 steps/min) (p = 0.015), but not between ExoTransp and ExoOff (p = 0.644). Results suggest that subjects wearing the exoskeleton adjust their gait as in an attention-demanding task, changing spatiotemporal gait characteristics in a way likely to improve gait balance.
{"title":"Overground Walking With a Transparent Exoskeleton Shows Changes in Spatiotemporal Gait Parameters","authors":"Rafhael M. Andrade;Stefano Sapienza;Abolfazl Mohebbi;Eric E. Fabara;Paolo Bonato","doi":"10.1109/JTEHM.2023.3323381","DOIUrl":"10.1109/JTEHM.2023.3323381","url":null,"abstract":"Lower-limb gait training (GT) exoskeletons have been successfully used in rehabilitation programs to overcome the burden of locomotor impairment. However, providing suitable net interaction torques to assist patient movements is still a challenge. Previous transparent operation approaches have been tested in treadmill-based GT exoskeletons to improve user-robot interaction. However, it is not yet clear how a transparent lower-limb GT system affects user’s gait kinematics during overground walking, which unlike treadmill-based systems, requires active participation of the subjects to maintain stability. In this study, we implemented a transparent operation strategy on the ExoRoboWalker, an overground GT exoskeleton, to investigate its effect on the user’s gait. The approach employs a feedback zero-torque controller with feedforward compensation for the exoskeleton’s dynamics and actuators’ impedance. We analyzed the data of five healthy subjects walking overground with the exoskeleton in transparent mode (ExoTransp) and non-transparent mode (ExoOff) and walking without exoskeleton (NoExo). The transparent controller reduced the user-robot interaction torque and improved the user’s gait kinematics relative to ExoOff. No significant difference in stride length is observed between ExoTransp and NoExo (p = 0.129). However, the subjects showed a significant difference in cadence between ExoTransp (50.9± 1.1 steps/min) and NoExo (93.7 ± 8.7 steps/min) (p = 0.015), but not between ExoTransp and ExoOff (p = 0.644). 
Results suggest that subjects wearing the exoskeleton adjust their gait as in an attention-demanding task changing the spatiotemporal gait characteristics likely to improve gait balance.","PeriodicalId":54255,"journal":{"name":"IEEE Journal of Translational Engineering in Health and Medicine-Jtehm","volume":"12 ","pages":"182-193"},"PeriodicalIF":3.4,"publicationDate":"2023-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10275098","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136207717","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
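A feedback zero-torque controller with feedforward dynamic compensation, the strategy named above, can be sketched for a single joint as follows. The inertia, friction, gravity, and gain values are invented for illustration, and a real implementation would also compensate actuator impedance and act on multiple joints.

```python
import math

# assumed single-joint parameters (illustrative, not from the paper)
I = 0.35    # link inertia (kg*m^2)
B = 0.08    # viscous friction (N*m*s/rad)
MGL = 4.2   # gravity torque coefficient m*g*l (N*m)
KP = 2.5    # zero-torque feedback gain

def transparent_torque(q, qd, qdd, tau_interaction):
    """Joint torque command: feedforward terms cancel the exoskeleton's own
    dynamics, while feedback drives the measured user-robot interaction
    torque toward zero so the device follows the user."""
    feedforward = I * qdd + B * qd + MGL * math.sin(q)
    feedback = -KP * tau_interaction
    return feedforward + feedback

# with no interaction torque the command is pure dynamic compensation
cmd_free = transparent_torque(q=0.3, qd=1.0, qdd=0.5, tau_interaction=0.0)
# a positive measured interaction torque lowers the command, yielding to the user
cmd_push = transparent_torque(q=0.3, qd=1.0, qdd=0.5, tau_interaction=0.4)
```

The closer the feedforward model matches the true device dynamics, the smaller the residual interaction torque the feedback loop has to cancel, which is what makes the exoskeleton feel "transparent" to the wearer.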
Letter to the Editor on “Leveraging Biomedical Engineering Engineers to Improve Obstructive Sleep Apnea (OSA) Care for Our Stroke Patients” 利用生物医学工程工程师改善卒中患者的阻塞性睡眠呼吸暂停(OSA)护理。
IF 3.4 3区 医学 Q2 ENGINEERING, BIOMEDICAL Pub Date : 2023-09-29 DOI: 10.1109/JTEHM.2023.3318930
Sara E. Benjamin;Charlene E. Gamaldo
Obstructive sleep apnea (OSA), a condition of recurring, episodic partial or complete upper airway collapse, is a common disorder, affecting an estimated 17.4% of women and 33.9% of men in the United States [1]. The first-line treatment for OSA is Continuous Positive Airway Pressure (CPAP) therapy, a medical device that delivers adequate airflow and oxygenation during sleep by way of a tube connecting an air compressor to a face mask that can fit over the nose, under the nose, or over the nose and mouth.
{"title":"Letter to the Editor on “Leveraging Biomedical Engineering Engineers to Improve Obstructive Sleep Apnea (OSA) Care for Our Stroke Patients”","authors":"Sara E. Benjamin;Charlene E. Gamaldo","doi":"10.1109/JTEHM.2023.3318930","DOIUrl":"10.1109/JTEHM.2023.3318930","url":null,"abstract":"Obstructive sleep apnea (OSA), a condition of recurring, episodic complete or upper airway collapse, is a common disorder, affecting an estimated 17.4% of women and 33.9% of men in the United States \u0000<xref>[1]</xref>\u0000. The first line treatment for OSA is Continuous Positive Airway Pressure (CPAP) therapy, a medical device that delivers adequate airflow and oxygenation during sleep by way of a tube that connects an air compressor to a face mask that can fit over the nose, under the nose, or over the nose and mouth.","PeriodicalId":54255,"journal":{"name":"IEEE Journal of Translational Engineering in Health and Medicine-Jtehm","volume":"11 ","pages":"536-537"},"PeriodicalIF":3.4,"publicationDate":"2023-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10268080","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135844603","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0