
International Journal of Medical Robotics and Computer Assisted Surgery: Latest Publications

A back propagation neural network based respiratory motion modelling method
IF 2.5 CAS Tier 3 (Medicine) Q2 SURGERY Pub Date: 2024-05-28 DOI: 10.1002/rcs.2647
Shan Jiang, Bowen Li, Zhiyong Yang, Yuhua Li, Zeyang Zhou

Background

This study presents the development of a backpropagation neural network-based respiratory motion modelling method (BP-RMM) for precisely tracking arbitrary points within lung tissue throughout free respiration, encompassing deep inspiration and expiration phases.

Methods

Internal and external respiratory data from four-dimensional computed tomography (4DCT) are processed using various artificial intelligence algorithms. Data augmentation through polynomial interpolation is employed to enhance dataset robustness. A BP neural network is then constructed to comprehensively track lung tissue movement.
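The abstract gives no implementation detail, so purely as an illustration: a back-propagation network for this kind of modelling is, at its core, a small regression network mapping an external surrogate signal to internal tissue displacement. The sketch below trains a one-hidden-layer network with hand-written backpropagation on synthetic sinusoidal data; all signals, sizes, and hyperparameters are assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the paper's 4DCT-derived data: a 1-D external
# surrogate (e.g. chest-wall motion) and the displacement of one internal
# point. All values here are invented for illustration.
t = np.linspace(0, 4 * np.pi, 400)
X = np.column_stack([np.sin(t), np.cos(t)])   # external surrogate features
Y = 1.8 * np.sin(t - 0.3).reshape(-1, 1)      # internal displacement (mm)

# One-hidden-layer network trained by plain back-propagation on MSE.
W1 = rng.normal(0.0, 0.5, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, (16, 1)); b2 = np.zeros(1)
lr = 0.05
for _ in range(2000):
    H = np.tanh(X @ W1 + b1)                  # forward pass
    P = H @ W2 + b2
    err = P - Y
    dP = 2.0 * err / len(X)                   # dLoss/dP for loss = mean(err^2)
    dW2, db2 = H.T @ dP, dP.sum(axis=0)
    dH = (dP @ W2.T) * (1.0 - H ** 2)         # tanh' = 1 - tanh^2
    dW1, db1 = X.T @ dH, dH.sum(axis=0)
    W2 -= lr * dW2; b2 -= lr * db2            # gradient-descent update
    W1 -= lr * dW1; b1 -= lr * db1

pred = np.tanh(X @ W1 + b1) @ W2 + b2
rmse = float(np.sqrt(((pred - Y) ** 2).mean()))
print(f"fit RMSE: {rmse:.3f} mm")
```

The paper's actual model is trained on registered 4DCT landmark trajectories; this toy fit only shows the mechanics of the back-propagation step.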

Results

The BP-RMM demonstrates promising accuracy. In cases from the public 4DCT dataset, the average target registration error (TRE) between authentic deep respiration phases and those forecasted by BP-RMM for 75 marked points is 1.819 mm. Notably, TRE for normal respiration phases is significantly lower, with a minimum error of 0.511 mm.

Conclusions

The proposed method is validated for its high accuracy and robustness, establishing it as a promising tool for surgical navigation within the lung.

Citations: 0
ERegPose: An explicit regression based 6D pose estimation for snake-like wrist-type surgical instruments
IF 2.5 CAS Tier 3 (Medicine) Q2 SURGERY Pub Date: 2024-05-24 DOI: 10.1002/rcs.2640
Jinhua Li, Zhengyang Ma, Xinan Sun, He Su

Background

Accurately estimating the 6D pose of snake-like wrist-type surgical instruments is challenging due to their complex kinematics and flexible design.

Methods

We propose ERegPose, a comprehensive strategy for precise 6D pose estimation. The strategy consists of two components: ERegPoseNet, an original deep neural network model designed for explicit regression of the instrument's 6D pose, and an annotated in-house dataset of simulated surgical operations. To capture rotational features, we employ a Single Shot MultiBox Detector (SSD)-like detector to generate bounding boxes of the instrument tip.

Results

ERegPoseNet achieves an error of 1.056 mm in 3D translation, 0.073 rad in 3D rotation, and an average distance (ADD) metric of 3.974 mm, reflecting the overall spatial transformation error. The necessity of the SSD-like detector and the L1 loss is validated through experiments.
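For context on the ADD figure quoted above: the ADD metric is conventionally computed as the mean distance between the object's model points transformed by the ground-truth pose and by the estimated pose. A minimal sketch with invented poses and points (none of these values come from the paper):

```python
import numpy as np

def add_metric(model_pts, R_gt, t_gt, R_pred, t_pred):
    """Average Distance (ADD): mean distance between model points placed
    by the ground-truth pose and by the predicted pose."""
    gt = model_pts @ R_gt.T + t_gt
    pred = model_pts @ R_pred.T + t_pred
    return float(np.linalg.norm(gt - pred, axis=1).mean())

def rot_z(theta):
    """Rotation about the z-axis by theta radians."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

rng = np.random.default_rng(1)
pts = rng.uniform(-5.0, 5.0, (200, 3))                    # hypothetical tip point cloud (mm)
R_gt, t_gt = rot_z(0.50), np.array([10.0, 0.0, 5.0])      # ground-truth pose
R_pred, t_pred = rot_z(0.55), np.array([10.5, 0.2, 5.1])  # slightly wrong estimate

print(f"ADD = {add_metric(pts, R_gt, t_gt, R_pred, t_pred):.3f} mm")
```

An exact pose estimate gives ADD = 0; both rotational and translational error inflate it, which is why it serves as a single combined accuracy figure.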

Conclusions

ERegPose outperforms existing approaches, providing accurate 6D pose estimation for snake-like wrist-type surgical instruments. Its practical applications in various surgical tasks hold great promise.

Citations: 0
A new ring fixator system for automated bone fixation
IF 2.5 CAS Tier 3 (Medicine) Q2 SURGERY Pub Date: 2024-05-23 DOI: 10.1002/rcs.2637
Ahmet Aydın, M. Kerem Ün

Background

In the field of orthopaedics, external fixators are commonly employed for treating extremity fractures and deformities. Computer-assisted systems offer a promising and less error-prone alternative to manual fixation by using software to plan treatments based on radiological and clinical data. Nevertheless, existing computer-assisted systems have limitations and constraints.

Methods

This work is the culmination of a project aimed at developing a new automated fixation system and corresponding software to minimise human intervention and the associated errors. The developed system incorporates enhanced functionality and has fewer constraints than existing systems.

Results

The automated fixation system and its graphical user interface (GUI) demonstrate promising results in terms of accuracy, efficiency, and reliability.

Conclusion

The developed fixation system and its accompanying GUI represent an improvement in computer-assisted fixation systems. Future research may focus on further refining the system and conducting clinical trials.

Citations: 0
Transfer learning for anatomical structure segmentation in otorhinolaryngology microsurgery
IF 2.5 CAS Tier 3 (Medicine) Q2 SURGERY Pub Date: 2024-05-20 DOI: 10.1002/rcs.2634
Xin Ding, Yu Huang, Yang Zhao, Xu Tian, Guodong Feng, Zhiqiang Gao

Background

Reducing the annotation burden is an active and meaningful area of artificial intelligence (AI) research.

Methods

Multiple datasets for the segmentation of two landmarks were constructed based on 41 257 labelled images and 6 different microsurgical scenarios. These datasets were trained using the multi-stage transfer learning (TL) methodology.

Results

The multi-stage TL enhanced segmentation performance over the baseline (mIOU 0.8869 vs. 0.6892). Moreover, Convolutional Neural Networks (CNNs) maintained robust performance (mIOU 0.8917 at 90% vs. 0.8603 at 10%) even when the training dataset size was reduced from 90% (30 078 images) to 10% (3342 images). When weights trained in one surgical scenario were applied directly to recognise the same target in images from other scenarios, without further training, the CNNs still obtained an optimal mIOU of 0.6190 ± 0.0789.
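As a reference point for the numbers above, mIOU (mean intersection-over-union) averages the per-class overlap between the predicted and ground-truth masks. A minimal sketch on toy masks; the arrays are illustrative and not from the datasets described:

```python
import numpy as np

def mean_iou(pred, gt, n_classes):
    """Mean IoU: average over classes of |pred ∩ gt| / |pred ∪ gt|."""
    ious = []
    for c in range(n_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union:                      # skip classes absent from both masks
            ious.append(inter / union)
    return float(np.mean(ious))

gt = np.zeros((8, 8), dtype=int);   gt[2:6, 2:6] = 1    # ground-truth mask
pred = np.zeros((8, 8), dtype=int); pred[3:7, 3:7] = 1  # shifted prediction
miou = mean_iou(pred, gt, n_classes=2)
print(f"mIOU = {miou:.4f}")
```

A perfect prediction scores 1.0; the shifted toy prediction above is penalised on both the foreground and background classes.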

Conclusions

Model performance can be improved with TL in datasets with reduced size and increased complexity. It is feasible for data-based domain adaptation among different microsurgical fields.

Citations: 0
Exploring the feasibility of indocyanine green fluorescence for intraoperative ureteral visualisation in robotic transvaginal natural orifice transluminal endoscopy surgery during endometriosis resection
IF 2.5 CAS Tier 3 (Medicine) Q2 SURGERY Pub Date: 2024-05-16 DOI: 10.1002/rcs.2636
Luis E. Delgadillo Chabolla, Linda A. Alpuing Radilla, Tamisa Koythong, Sowmya Sunkara, Yamely Mendez, Qianqing Wang, Xiaoming Guan

Background

To assess the feasibility of using indocyanine green (ICG) to identify and minimise urinary tract injury during surgical resection of endometriosis through robotic transvaginal natural orifice transluminal endoscopy surgery (RvNOTES).

Methods

We conducted a retrospective case series in two academic tertiary care hospitals. We examined 53 patients who underwent RvNOTES hysterectomy with planned endometriosis resection.

Results

The study involved 53 patients undergoing RvNOTES with ICG fluorescence for endometriosis resection. Mean patient age was 37.98 ± 6.65 years. Operative time averaged 181.32 ± 53.94 min, with estimated blood loss at 45.57 ± 33.62 mL. Postoperative stay averaged 0.23 ± 0.47 days. No ICG-related complications occurred.

Conclusion

No complications occurred with ICG fluorescence in RvNOTES. It appears to be a safe option for ureteral localisation and preservation. ICG fluorescence is widely used in diverse medical specialities for identifying ureters during complex surgeries. Larger studies are needed to firmly establish its advantages in intraoperative ureteral visualisation during RvNOTES for deep infiltrative endometriosis.

Citations: 0
Robot-assisted total knee arthroplasty system provides more precise control of the femoral rotation angle: A retrospective study
IF 2.5 CAS Tier 3 (Medicine) Q2 SURGERY Pub Date: 2024-05-11 DOI: 10.1002/rcs.2635
Peng Yan, Xudong Duan, Yutian Lei, Fangze Xing, Ruomu Cao, Sen Luo, Yang Chen, Zeyu Liu, Kunzheng Wang, Pei Yang, Run Tian

Background

Rotational alignment in total knee arthroplasty (TKA) is a crucial technical point that needs attention. We conducted a retrospective study to investigate whether a new robot-assisted TKA (RA-TKA) could improve the accuracy of rotational alignment and whether rotational alignment affects postoperative pain and functional evaluation of the knee.

Methods

A total of 136 consecutive patients who underwent TKA were included in this study. Half of the patients underwent RA-TKA and the other half underwent conventional TKA (CON-TKA), performed by the same group of surgeons. Relevant perioperative parameters were collected.

Results

The postoperative femoral rotation angle (FRA) was −0.72 ± 2.59° in the robot-assisted group and 1.13 ± 2.73° in the conventional group, a statistically significant difference (p < 0.001).
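The reported significance can be sanity-checked from the summary statistics alone. A rough sketch using Welch's t statistic, assuming an even split of the 136 patients (68 per group) as the abstract implies; this reproduces a t value consistent with p < 0.001 but is not the authors' actual analysis:

```python
import math

def welch_t(m1, s1, n1, m2, s2, n2):
    """Welch's t statistic from per-group mean, SD, and sample size."""
    return (m1 - m2) / math.sqrt(s1 ** 2 / n1 + s2 ** 2 / n2)

# Conventional group: 1.13 ± 2.73°; robot-assisted group: −0.72 ± 2.59°.
# n = 68 per group is an assumption ("136 consecutive patients ... half").
t_stat = welch_t(1.13, 2.73, 68, -0.72, 2.59, 68)
print(f"Welch t ≈ {t_stat:.2f}")   # |t| ≈ 4 with df ≈ 134 implies p < 0.001
```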

Conclusion

This study provides preliminary evidence that the RA-TKA provides more precise control of FRA than CON-TKA, and verifies that tibial rotation angle and combined rotation angle affect postoperative knee pain and functional evaluation.

Citations: 0
Automatic pterygopalatine fossa segmentation and localisation based on DenseASPP
IF 2.5 CAS Tier 3 (Medicine) Q2 SURGERY Pub Date: 2024-04-23 DOI: 10.1002/rcs.2633
Bing Wang, Weili Shi

Background

Allergic rhinitis constitutes a widespread health concern, with traditional treatments often proving to be painful and ineffective. Acupuncture targeting the pterygopalatine fossa proves effective but is complicated due to the intricate nearby anatomy.

Methods

To enhance the safety and precision in targeting the pterygopalatine fossa, we introduce a deep learning-based model to refine the segmentation of the pterygopalatine fossa. Our model expands the U-Net framework with DenseASPP and integrates an attention mechanism for enhanced precision in the localisation and segmentation of the pterygopalatine fossa.

Results

The model achieves a Dice Similarity Coefficient of 93.89% and a 95% Hausdorff Distance of 2.53 mm. Remarkably, it uses only 1.98 M parameters.
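For reference, the Dice Similarity Coefficient quoted above measures the overlap between a predicted and a ground-truth segmentation mask. A minimal sketch on toy binary masks; the values are illustrative only:

```python
import numpy as np

def dice(pred, gt):
    """Dice Similarity Coefficient: 2|A ∩ B| / (|A| + |B|) for binary masks."""
    inter = np.logical_and(pred, gt).sum()
    return float(2.0 * inter / (pred.sum() + gt.sum()))

gt = np.zeros((8, 8), dtype=int);   gt[2:6, 2:6] = 1    # ground-truth mask
pred = np.zeros((8, 8), dtype=int); pred[3:7, 3:7] = 1  # shifted prediction

print(f"Dice = {dice(pred, gt):.4f}")
```

Dice rewards region overlap, while the 95% Hausdorff Distance bounds boundary error; reporting both, as the abstract does, covers both failure modes.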

Conclusions

Our deep learning approach yields significant advancements in localising and segmenting the pterygopalatine fossa, providing a reliable basis for guiding pterygopalatine fossa-assisted punctures.

Citations: 0
Robotic management of primary cholecystoduodenal fistula: A case report and brief literature review
IF 2.5 CAS Tier 3 (Medicine) Q2 SURGERY Pub Date: 2024-04-21 DOI: 10.1002/rcs.2629
Anjelica Alfonso, Kimberly N. McFarland, Kush Savsani, Seung Lee, Daisuke Imai, Aamir Khan, Amit Sharma, Muhammad Saeed, Vinay Kumaran, Adrian Cotterell, David Bruno, Marlon Levy

Background

Cholecystoduodenal fistula (CDF) arises from persistent biliary tree disorders, causing fusion between the gallbladder and duodenum. Initially, open resection was common until laparoscopic fistula closure gained popularity. However, complexities within the gallbladder fossa yielded inconsistent outcomes. Advanced imaging and robotic surgery now enhance precision and detection.

Method

A 62-year-old woman with chronic cholangitis attributed to cholecystoduodenal fistula underwent successful robotic cholecystectomy and fistula closure.

Results

Postoperatively, the symptoms subsided with no complications during the robotic procedure. Existing studies report favourable outcomes for robotic cholecystectomy and fistula closure.

Conclusions

Our case report showcases a rare instance of successful robotic cholecystectomy with CDF closure. This case, along with a review of previous cases, suggests the potential of robotic surgery as the preferred approach, especially for patients anticipated to face significant laparoscopic morbidity.

Citations: 0
‘Burn and Push’ technique: A novel robotic liver parenchymal transection technique
IF 2.5 | Medicine, CAS Tier 3 | Q2 SURGERY | Pub Date: 2024-04-20 | DOI: 10.1002/rcs.2631
Yuzuru Sambommatsu, Seung Duk Lee, Daisuke Imai, Kush Savsani, Aamir A. Khan, Amit Sharma, Muhammad Saeed, Adrian H. Cotterell, Vinay Kumaran, Marlon F. Levy, David A. Bruno

Background

Liver parenchymal transection during robotic liver resection (RLR) remains a significant challenge due to the limited range of specialised instruments. This study introduces our ‘Burn and Push’ technique as a novel approach to address these challenges.

Methods

A retrospective analysis was conducted on 20 patients who underwent RLR using the ‘Burn and Push’ technique at Virginia Commonwealth University Health System from November 2021 to August 2023. The study evaluated peri- and post-operative outcomes.

Results

The median operation time was 241.5 min (range, 90–620 min), and the median blood loss was 100 mL (range, 10–600 mL). Major complications occurred in one case, with no instances of postoperative bleeding, bile leak, or liver failure.

Conclusions

The ‘Burn and Push’ technique is a viable and efficient alternative for liver parenchymal transection in RLR. Further research with larger sample sizes and consideration of the learning curve is necessary to validate these findings.

Yuzuru Sambommatsu, Seung Duk Lee, Daisuke Imai, Kush Savsani, Aamir A. Khan, Amit Sharma, Muhammad Saeed, Adrian H. Cotterell, Vinay Kumaran, Marlon F. Levy, David A. Bruno. "‘Burn and Push’ technique: A novel robotic liver parenchymal transection technique." International Journal of Medical Robotics and Computer Assisted Surgery, 20(2), 2024-04-20. DOI: 10.1002/rcs.2631
Citations: 0
Prediction of remaining surgery duration in laparoscopic videos based on visual saliency and the transformer network
IF 2.5 | Medicine, CAS Tier 3 | Q2 SURGERY | Pub Date: 2024-04-17 | DOI: 10.1002/rcs.2632
Constantinos Loukas, Ioannis Seimenis, Konstantina Prevezanou, Dimitrios Schizas

Background

Real-time prediction of the remaining surgery duration (RSD) is important for optimal scheduling of resources in the operating room.

Methods

We focus on intraoperative prediction of the RSD from laparoscopic video, presenting an extensive evaluation of seven common deep learning models, a proposed model based on the Transformer architecture (TransLocal), and four baseline approaches. The proposed pipeline couples a CNN-LSTM, which extracts features from salient regions within short video segments, with a Transformer that applies local attention mechanisms.
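The authors' implementation is not reproduced on this page; purely as an illustrative sketch (the function name, window size, and toy data below are assumptions, not the TransLocal code), a "local" attention layer restricts each query position to a fixed neighbourhood of nearby positions:

```python
import numpy as np

def local_attention(q, k, v, window):
    """Scaled dot-product attention in which each position attends only
    to positions within +/- `window` steps of itself."""
    T, d = q.shape
    scores = q @ k.T / np.sqrt(d)                       # (T, T) similarity matrix
    idx = np.arange(T)
    far = np.abs(idx[:, None] - idx[None, :]) > window  # True outside the window
    scores = np.where(far, -np.inf, scores)             # block distant positions
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)       # row-wise softmax
    return weights @ v

rng = np.random.default_rng(0)
x = rng.normal(size=(6, 4))            # 6 time steps, 4 features
out = local_attention(x, x, x, window=1)
print(out.shape)                       # (6, 4)
```

With `window=0` each position attends only to itself, so the layer reduces to the identity on `v` — a quick sanity check that the masking is correct.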

Results

Using the Cholec80 dataset, TransLocal yielded the best performance (mean absolute error (MAE) = 7.1 min). For long and short surgeries, the MAE was 10.6 and 4.4 min, respectively. Thirty minutes before the end of surgery, the MAE was 6.2 min overall, and 7.2 and 5.5 min for long and short surgeries, respectively.
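The MAE figures above are the mean absolute difference, in minutes, between predicted and actual remaining duration. A minimal sketch with hypothetical sample values (the predictions below are invented for illustration, not taken from the paper):

```python
import numpy as np

def mae_minutes(predicted, actual):
    """Mean absolute error between predicted and true RSD, in minutes."""
    predicted = np.asarray(predicted, dtype=float)
    actual = np.asarray(actual, dtype=float)
    return float(np.mean(np.abs(predicted - actual)))

# Hypothetical RSD predictions (minutes) at three points in a procedure
print(mae_minutes([30.0, 18.5, 5.0], [25.0, 20.0, 4.5]))  # ≈ 2.33
```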

Conclusions

The proposed technique achieves state-of-the-art results. In the future, we aim to incorporate intraoperative indicators and pre-operative data.

Constantinos Loukas, Ioannis Seimenis, Konstantina Prevezanou, Dimitrios Schizas. "Prediction of remaining surgery duration in laparoscopic videos based on visual saliency and the transformer network." International Journal of Medical Robotics and Computer Assisted Surgery, 20(2), 2024-04-17. DOI: 10.1002/rcs.2632
Citations: 0