
Latest publications from the International Journal of Medical Robotics and Computer Assisted Surgery

Transoral robotic surgery in the diagnosis and treatment of primary unknown head and neck squamous cell carcinoma: A preliminary single centre experience
IF 2.3 | CAS Tier 3 (Medicine) | Q1 Medicine | Pub Date: 2024-06-21 | DOI: 10.1002/rcs.2652
Yinghui Zhi, Yabing Zhang, Bin Zhang

Background

Squamous cell carcinoma of unknown primary (CUP) in the head and neck is difficult to diagnose and treat. This report outlines 11 cases of CUP treated with transoral robotic surgery (TORS), with the aim of investigating the diagnostic efficiency of TORS for the primary tumour and the effectiveness of radical resection.

Methods

Eleven cases of CUP among 68 oropharyngeal cancer patients treated with TORS were retrospectively analysed.

Results

All 11 patients underwent TORS with cervical lymph node dissection. Primary tumours were identified in 8 cases (72.7%): 4 in the palatine tonsil and 4 in the base of the tongue. The average diameter of the primary tumour was 1.65 cm. All patients resumed oral intake within 24 h, with no tracheotomy, pharyngeal fistula, or postoperative death. The 3-year disease-free survival rate was 91%.

Conclusions

TORS can improve the diagnostic efficiency for the primary tumour in CUP and achieve good oncological and functional results.

Citations: 0
3D evaluation model of facial aesthetics based on multi-input 3D convolution neural networks for orthognathic surgery
IF 2.5 | CAS Tier 3 (Medicine) | Q1 Medicine | Pub Date: 2024-06-14 | DOI: 10.1002/rcs.2651
Qingchuan Ma, Etsuko Kobayashi, Siao Jin, Ken Masamune, Hideyuki Suenaga

Background

Quantitative evaluation of facial aesthetics is an important but time-consuming procedure in orthognathic surgery, whereas existing 2D beauty-scoring models are mainly used for entertainment and have little clinical impact.

Methods

A deep-learning-based 3D evaluation model, DeepBeauty3D, was designed and trained using CT images of 133 patients. A customised image preprocessing module extracted the skeleton, soft tissue, and personal physical information from raw DICOM data, and the prediction network module employed a three-input, two-output convolutional neural network (CNN) to receive these data and output aesthetic scores automatically.
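To make the data flow concrete, here is a minimal sketch of what a three-input, two-output 3D CNN of this kind could look like, assuming PyTorch. The encoder depth, channel counts, the size of the physical-information vector, and the fusion by simple concatenation are illustrative assumptions, not the authors' DeepBeauty3D architecture.

```python
# Hypothetical sketch: two 3D volumes (skeleton, soft tissue) plus a physical-info
# vector in; skeleton and soft-tissue aesthetic scores out. Sizes are assumptions.
import torch
import torch.nn as nn

class ThreeInputAestheticNet(nn.Module):
    def __init__(self, phys_dim: int = 4):
        super().__init__()
        def volume_encoder():
            # Small 3D convolutional encoder for a single-channel volume.
            return nn.Sequential(
                nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool3d(2),
                nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool3d(1), nn.Flatten(),
            )
        self.skeleton_enc = volume_encoder()
        self.soft_tissue_enc = volume_encoder()
        self.phys_enc = nn.Sequential(nn.Linear(phys_dim, 16), nn.ReLU())
        self.skeleton_head = nn.Linear(48, 1)     # skeleton aesthetic score
        self.soft_tissue_head = nn.Linear(48, 1)  # soft-tissue aesthetic score

    def forward(self, skeleton_vol, soft_tissue_vol, phys_info):
        feats = torch.cat([
            self.skeleton_enc(skeleton_vol),
            self.soft_tissue_enc(soft_tissue_vol),
            self.phys_enc(phys_info),
        ], dim=1)
        return self.skeleton_head(feats), self.soft_tissue_head(feats)

# Toy forward pass on random 64^3 volumes (batch of 2).
net = ThreeInputAestheticNet()
s, t = net(torch.randn(2, 1, 64, 64, 64), torch.randn(2, 1, 64, 64, 64), torch.randn(2, 4))
print(s.shape, t.shape)  # torch.Size([2, 1]) torch.Size([2, 1])
```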

Results

Experimental results showed that the model predicted the skeleton and soft-tissue scores with accuracies of 0.231 ± 0.218 (4.62%) and 0.100 ± 0.344 (2.00%), respectively, in 11.203 ± 2.824 s from raw CT images.

Conclusion

This study provided an end-to-end solution using real clinical data based on 3D CNN to quantitatively evaluate facial aesthetics by considering three anatomical factors simultaneously, showing promising potential in reducing workload and bridging the surgeon-patient aesthetics perspective gap.

Citations: 0
Use of a fluoroscopy-based robotic-assisted total hip arthroplasty system resulted in greater improvements in hip-specific outcome measures at one-year compared to a CT-based robotic-assisted system
IF 2.5 | CAS Tier 3 (Medicine) | Q1 Medicine | Pub Date: 2024-06-10 | DOI: 10.1002/rcs.2650
Christian B. Ong, Graham B. J. Buchan, Christian J. Hecht II, David Liu, Joshua Petterwood, Atul F. Kamath

Background

The purpose of this study was to compare one-year patient reported outcome measures between a novel fluoroscopy-based robotic-assisted (FL-RTHA) system and an existing computerised tomography-based robotic assisted (CT-RTHA) system.

Methods

A review of 85 consecutive FL-RTHA and 125 consecutive CT-RTHA cases was conducted. Outcomes included one-year post-operative Veterans RAND-12 Physical (VR-12 PCS) and Mental (VR-12 MCS) component scores; Hip Disability and Osteoarthritis Outcome Score (HOOS) Pain, Physical Function (HOOS-PS), and Joint Replacement (HOOS-JR) scores; and University of California Los Angeles (UCLA) Activity scores.

Results

The FL-RTHA cohort had lower pre-operative VR-12 PCS, HOOS Pain, HOOS-PS, HOOS-JR, and UCLA Activity scores compared with patients in the CT-RTHA cohort. The FL-RTHA cohort reported greater improvements in HOOS-PS scores (−41.54 vs. −36.55; p = 0.028) than the CT-RTHA cohort. Both cohorts experienced similar rates of major post-operative complications, and had similar radiographic outcomes.
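For context, the sketch below shows one way a between-cohort comparison of one-year score improvements could be computed. The cohort data are simulated, and the Mann-Whitney U test is an assumption for illustration; the abstract does not state which statistical test the authors used.

```python
# Hypothetical sketch: compare pre-to-post changes in a PROM (e.g. HOOS-PS)
# between two cohorts. Data are simulated; the test choice is an assumption.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
fl_pre, fl_post = rng.normal(60, 10, 85), rng.normal(20, 10, 85)    # FL-RTHA cohort
ct_pre, ct_post = rng.normal(55, 10, 125), rng.normal(20, 10, 125)  # CT-RTHA cohort

fl_change = fl_post - fl_pre   # negative change = functional improvement on HOOS-PS
ct_change = ct_post - ct_pre

stat, p = mannwhitneyu(fl_change, ct_change, alternative="two-sided")
print(f"FL-RTHA mean change {fl_change.mean():.2f}, "
      f"CT-RTHA mean change {ct_change.mean():.2f}, p = {p:.3f}")
```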

Conclusions

Use of the fluoroscopy-based robotic system resulted in greater improvements in HOOS-PS at one year relative to the CT-based robotic technique.

Citations: 0
Augmented-reality-based surgical navigation for endoscope retrograde cholangiopancreatography: A phantom study
IF 2.5 | CAS Tier 3 (Medicine) | Q1 Medicine | Pub Date: 2024-06-07 | DOI: 10.1002/rcs.2649
Zhipeng Lin, Zhuoyue Yang, Ranyang Li, Shangyu Sun, Bin Yan, Yongming Yang, Hao Liu, Junjun Pan

Background

Endoscopic retrograde cholangiopancreatography is a standard surgical treatment for gallbladder and pancreatic diseases. However, the procedure is high-risk and requires sufficient surgical experience and skill from the surgeon.

Methods

(1) A simultaneous localisation and mapping (SLAM) technique reconstructs the surgical environment. (2) The preoperative 3D model is transformed into the intraoperative video environment to implement multi-modal fusion. (3) A virtual-to-real projection framework based on hand-eye alignment uses position data from electromagnetic sensors to project the 3D model onto the imaging plane of the camera.
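As an illustration of step (3), the sketch below projects points of a preoperative 3D model onto the camera image, given a rigid model-to-camera pose (as would be obtained from electromagnetic tracking plus hand-eye calibration) and pinhole intrinsics. The intrinsics, pose, and model points are placeholder values, not parameters from the study.

```python
# Hypothetical sketch of the virtual-to-real projection: rigid transform of model
# points into the camera frame, then pinhole projection. All numbers are made up.
import numpy as np

def project_points(points_model, T_cam_model, K):
    """points_model: (N, 3); T_cam_model: 4x4 rigid transform; K: 3x3 intrinsics."""
    pts_h = np.c_[points_model, np.ones(len(points_model))]   # homogeneous (N, 4)
    pts_cam = (T_cam_model @ pts_h.T).T[:, :3]                # camera frame (N, 3)
    uvw = (K @ pts_cam.T).T                                   # homogeneous pixels
    return uvw[:, :2] / uvw[:, 2:3]                           # (N, 2) pixel coordinates

K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])  # assumed intrinsics
T = np.eye(4); T[:3, 3] = [0.0, 0.0, 100.0]        # model placed 100 mm in front of the camera
model_pts = np.array([[0.0, 0.0, 0.0], [5.0, 0.0, 0.0], [0.0, 5.0, 0.0]])  # mm, model frame
print(project_points(model_pts, T, K))
```

Projection error can then be reported as the pixel distance between such projected model landmarks and their observed image positions.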

Results

Our AR-assisted navigation system can accurately guide physicians: the registration error is restricted to under 5 mm, the projection error is 5.76 ± 2.13, and the intubation procedure runs at 30 frames per second.

Conclusions

Coupled with clinical validation and user studies, both the quantitative and qualitative results indicate that our navigation system has the potential to be highly useful in clinical practice.

Citations: 0
Force/position tracking control of fracture reduction robot based on nonlinear disturbance observer and neural network
IF 2.5 | CAS Tier 3 (Medicine) | Q1 Medicine | Pub Date: 2024-06-07 | DOI: 10.1002/rcs.2639
Jintao Lei, Zhuangzhuang Wang

Background

For the fracture reduction robot, the position tracking accuracy and compliance are affected by dynamic loads from muscle stretching, uncertainties in robot dynamics models, and various internal and external disturbances.

Methods

A control method that integrates a radial basis function neural network (RBFNN) with a nonlinear disturbance observer is proposed to enhance position tracking accuracy. Additionally, admittance control is employed for force tracking to enhance the robot's compliance, thereby improving safety.
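The force-tracking part can be pictured with a minimal one-degree-of-freedom admittance law, sketched below with explicit Euler integration. The virtual mass, damping, desired force, and simulated sensor signal are assumptions, and the RBFNN and nonlinear-disturbance-observer position loop described above is not modelled here.

```python
# Hypothetical sketch of a 1-DOF admittance law for force tracking:
# M * dv/dt + B * v = f_measured - f_desired, integrated with explicit Euler.
# Gains and signals are illustrative only.
import numpy as np

M, B = 2.0, 40.0        # virtual mass [kg] and damping [N*s/m] (assumed)
dt = 0.001              # 1 kHz control loop
x, v = 0.0, 0.0         # compliant position offset [m] and velocity [m/s]
f_desired = 10.0        # desired interaction force [N]

for k in range(2000):
    f_measured = 10.0 + 5.0 * np.sin(2 * np.pi * 0.5 * k * dt)  # simulated force sensor
    a = (f_measured - f_desired - B * v) / M                    # admittance dynamics
    v += a * dt
    x += v * dt   # x is added to the nominal reduction trajectory as a compliance term

print(f"final compliant offset: {x * 1000:.2f} mm")
```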

Results

Experiments are conducted on a long-bone fracture model with simulated muscle forces, and the results demonstrate that the position tracking error is less than ±0.2 mm, the angular displacement error is less than ±0.3°, and the maximum force tracking error is 26.28 N. These results meet the requirements of surgery.

Conclusions

The control method shows promising outcomes in enhancing the safety and accuracy of long bone fracture reduction with robotic assistance.

Citations: 0
Radical prostatectomy using the Hinotori robot-assisted surgical system: Docking-free design may contribute to reduction in postoperative pain
IF 2.5 | CAS Tier 3 (Medicine) | Q1 Medicine | Pub Date: 2024-06-02 | DOI: 10.1002/rcs.2648
Yutaro Sasaki, Yoshito Kusuhara, Takuro Oyama, Mitsuki Nishiyama, Saki Kobayashi, Kei Daizumoto, Ryotaro Tomida, Yoshiteru Ueno, Tomoya Fukawa, Kunihisa Yamaguchi, Yasuyo Yamamoto, Masayuki Takahashi, Hiroomi Kanayama, Junya Furukawa

Background

The docking-free design of the Japanese Hinotori surgical robotic system allows the robotic arm to avoid trocar grasping, thereby minimising excessive abdominal wall stress. The aim of this study was to evaluate the safety and efficacy of robotic-assisted radical prostatectomy (RARP) using the Hinotori system and to explore the potential contribution of its docking-free design to postoperative pain reduction.

Methods

This study reviewed the clinical records of 94 patients who underwent RARP: 48 patients in the Hinotori group and 46 in the da Vinci Xi group.

Results

The Hinotori group had significantly longer operative and console times (p = 0.030 and p = 0.029, respectively). Perioperative complications and oncologic outcomes did not differ between the two groups. On postoperative day 4, the rate of decline from the maximum visual analogue scale score was marginally significant in the Hinotori group (p = 0.062).

Conclusions

The docking-free design may contribute to reducing postoperative pain.

Citations: 0
A haptic guidance system for simulated catheter navigation with different kinaesthetic feedback profiles
IF 2.5 | CAS Tier 3 (Medicine) | Q1 Medicine | Pub Date: 2024-05-31 | DOI: 10.1002/rcs.2638
Taha Abbasi-Hashemi, Farrokh Janabi-Sharifi, Asim N. Cheema, Kourosh Zareinia

Background

This paper proposes a haptic guidance system to improve catheter navigation within a simulated environment.

Methods

Three force profiles were constructed to evaluate the system: collision prevention, centreline navigation, and a novel reinforcement learning (RL)-based force profile. All force profiles were evaluated along a path from the left common iliac to the right atrium.
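As a concrete example of what a kinaesthetic force profile can look like, the sketch below implements a simple collision-prevention law: a repulsive force on the haptic handle that grows as the catheter tip approaches the vessel wall. The safety margin, stiffness, and force cap are assumed values, and the centreline and RL profiles from the study are not reproduced here.

```python
# Hypothetical sketch of a collision-prevention kinaesthetic profile. Threshold,
# stiffness, and saturation values are assumptions, not the study's parameters.
import numpy as np

def collision_prevention_force(tip_pos, nearest_wall_point, d_safe=3.0, k=0.5, f_max=4.0):
    """tip_pos, nearest_wall_point: (3,) arrays in mm; returns a force vector in N."""
    offset = tip_pos - nearest_wall_point
    d = np.linalg.norm(offset)
    if d >= d_safe:                        # outside the safety margin: no feedback
        return np.zeros(3)
    direction = offset / max(d, 1e-6)      # push the tip away from the wall
    magnitude = min(k * (d_safe - d), f_max)
    return magnitude * direction

print(collision_prevention_force(np.array([0.0, 0.0, 1.0]), np.zeros(3)))  # [0. 0. 1.]
```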

Results

Our findings show that providing haptic feedback improved surgical safety compared to visual-only feedback. If staying inside the vasculature is the priority, RL provides the safest option. It is also shown that the performance of each force profile varies in different anatomical regions.

Conclusion

The implications of these findings are significant, as they hold the potential to improve how and when haptic feedback is applied for cardiovascular intervention.

Citations: 0
A back propagation neural network based respiratory motion modelling method
IF 2.5 | CAS Tier 3 (Medicine) | Q1 Medicine | Pub Date: 2024-05-28 | DOI: 10.1002/rcs.2647
Shan Jiang, Bowen Li, Zhiyong Yang, Yuhua Li, Zeyang Zhou

Background

This study presents the development of a backpropagation neural network-based respiratory motion modelling method (BP-RMM) for precisely tracking arbitrary points within lung tissue throughout free respiration, encompassing deep inspiration and expiration phases.

Methods

Internal and external respiratory data from four-dimensional computed tomography (4DCT) are processed using various artificial intelligence algorithms. Data augmentation through polynomial interpolation is employed to enhance dataset robustness. A BP neural network is then constructed to comprehensively track lung tissue movement.
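The modelling idea can be sketched as follows: densify the sparse 4DCT phases by polynomial interpolation, then fit a small back-propagation network that maps an external respiratory surrogate to the 3D position of an internal point. Everything below is synthetic, and the surrogate choice, network size, and interpolation degree are assumptions, not the authors' BP-RMM configuration.

```python
# Hypothetical sketch: polynomial-interpolation augmentation of 4DCT phases plus a
# small back-propagation (MLP) regressor. Data and hyperparameters are assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor

phases = np.linspace(0, 1, 10)                      # 10 4DCT respiratory phases
surrogate = np.sin(2 * np.pi * phases)              # external signal (e.g. chest marker)
internal = np.c_[5 * np.sin(2 * np.pi * phases),    # internal point trajectory [mm]
                 2 * np.sin(2 * np.pi * phases + 0.3),
                 8 * np.sin(2 * np.pi * phases + 0.1)]

# Data augmentation: polynomial interpolation onto a finer phase grid.
fine = np.linspace(0, 1, 200)
surrogate_f = np.polyval(np.polyfit(phases, surrogate, 5), fine)
internal_f = np.column_stack([np.polyval(np.polyfit(phases, internal[:, i], 5), fine)
                              for i in range(3)])

# Back-propagation network mapping the surrogate to the internal 3D position.
model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
model.fit(surrogate_f.reshape(-1, 1), internal_f)
print("predicted internal position [mm]:", np.round(model.predict([[0.5]]), 2))
```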

Results

The BP-RMM demonstrates promising accuracy. In cases from the public 4DCT dataset, the average target registration error (TRE) between authentic deep respiration phases and those forecasted by BP-RMM for 75 marked points is 1.819 mm. Notably, TRE for normal respiration phases is significantly lower, with a minimum error of 0.511 mm.
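For reference, target registration error is conventionally the Euclidean distance between the ground-truth and predicted positions of each marked point, averaged over points; a minimal sketch with placeholder coordinates follows.

```python
# Minimal sketch of the mean target registration error (TRE) metric.
# The landmark arrays below are placeholders, not study data.
import numpy as np

def mean_tre(ground_truth, predicted):
    """ground_truth, predicted: (N, 3) landmark coordinates in mm."""
    return float(np.mean(np.linalg.norm(ground_truth - predicted, axis=1)))

gt = np.array([[10.0, 20.0, 30.0], [12.0, 18.0, 29.0]])
pr = np.array([[10.5, 20.2, 31.0], [11.4, 18.3, 28.2]])
print(f"mean TRE = {mean_tre(gt, pr):.3f} mm")
```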

Conclusions

The proposed method is validated for its high accuracy and robustness, establishing it as a promising tool for surgical navigation within the lung.

Citations: 0
ERegPose: An explicit regression based 6D pose estimation for snake-like wrist-type surgical instruments
IF 2.5 | CAS Tier 3 (Medicine) | Q1 Medicine | Pub Date: 2024-05-24 | DOI: 10.1002/rcs.2640
Jinhua Li, Zhengyang Ma, Xinan Sun, He Su

Background

Accurately estimating the 6D pose of snake-like wrist-type surgical instruments is challenging due to their complex kinematics and flexible design.

Methods

We propose ERegPose, a comprehensive strategy for precise 6D pose estimation. The strategy consists of two components: ERegPoseNet, an original deep neural network model designed for explicit regression of the instrument's 6D pose, and an annotated in-house dataset of simulated surgical operations. To capture rotational features, we employ a Single Shot MultiBox Detector (SSD)-like detector to generate bounding boxes of the instrument tip.
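The explicit-regression idea can be sketched as a small head that maps features pooled from the detected tip region directly to a 6D pose and is trained with an L1 loss, as below. The feature dimension, layer sizes, and the axis-angle rotation parameterisation are assumptions; this is not the authors' ERegPoseNet.

```python
# Hypothetical sketch of an explicit 6D pose regression head trained with L1 loss.
# Sizes and the rotation parameterisation (axis-angle) are assumptions.
import torch
import torch.nn as nn

class PoseRegressionHead(nn.Module):
    def __init__(self, feat_dim: int = 256):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(feat_dim, 128), nn.ReLU(), nn.Linear(128, 6))

    def forward(self, feats):
        out = self.mlp(feats)
        return out[:, :3], out[:, 3:]   # translation (mm), rotation (axis-angle, rad)

head = PoseRegressionHead()
feats = torch.randn(8, 256)             # pooled features from detected tip bounding boxes
t_pred, r_pred = head(feats)
t_gt, r_gt = torch.randn(8, 3), torch.randn(8, 3)
loss = nn.L1Loss()(t_pred, t_gt) + nn.L1Loss()(r_pred, r_gt)
loss.backward()
print(f"L1 pose loss: {loss.item():.3f}")
```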

Results

ERegPoseNet achieves an error of 1.056 mm in 3D translation, 0.073 rad in 3D rotation, and an average distance (ADD) metric of 3.974 mm, indicating an overall spatial transformation error. The necessity of the SSD-like detector and L1 loss is validated through experiments.

Conclusions

ERegPose outperforms existing approaches, providing accurate 6D pose estimation for snake-like wrist-type surgical instruments. Its practical applications in various surgical tasks hold great promise.

Citations: 0
A new ring fixator system for automated bone fixation
IF 2.5 | CAS Tier 3 (Medicine) | Q1 Medicine | Pub Date: 2024-05-23 | DOI: 10.1002/rcs.2637
Ahmet Aydın, M. Kerem Ün

Background

In the field of orthopaedics, external fixators are commonly employed for treating extremity fractures and deformities. Computer-assisted systems offer a promising and less error-prone alternative to manual fixation by utilising software to plan treatments based on radiological and clinical data. Nevertheless, existing computer-assisted systems have limitations and constraints.

Methods

This work represents the culmination of a project aimed at developing a new automatised fixation system and corresponding software that minimise human intervention and the associated errors. The developed system incorporates enhanced functionality and has fewer constraints than existing systems.
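One computation that planning software of this kind typically needs is the rigid correction that brings landmarks on the displaced fragment onto their target positions; the sketch below uses the Kabsch algorithm on corresponding landmark sets. The landmark data are placeholders, and this is not the authors' planning algorithm.

```python
# Hypothetical sketch: estimate the rigid correction (R, t) aligning landmarks on a
# displaced fragment with their target positions via the Kabsch algorithm.
import numpy as np

def rigid_correction(moving, target):
    """moving, target: (N, 3) corresponding landmarks; returns rotation R and translation t."""
    mc, tc = moving.mean(axis=0), target.mean(axis=0)
    H = (moving - mc).T @ (target - tc)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflection
    R = Vt.T @ D @ U.T
    return R, tc - R @ mc

moving = np.array([[0.0, 0, 0], [10, 0, 0], [0, 10, 0], [0, 0, 10]])
th = np.deg2rad(10.0)                                            # synthetic 10-degree malrotation
Rz = np.array([[np.cos(th), -np.sin(th), 0], [np.sin(th), np.cos(th), 0], [0, 0, 1]])
target = moving @ Rz.T + np.array([2.0, 1.0, 0.5])               # plus a small translation
R, t = rigid_correction(moving, target)
print(np.round(R, 3), np.round(t, 3))
```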

Results

The automatised fixation system and its graphical user interface (GUI) demonstrate promising results in terms of accuracy, efficiency, and reliability.

Conclusion

The developed fixation system and its accompanying GUI represent an improvement in computer-assisted fixation systems. Future research may focus on further refining the system and conducting clinical trials.

Citations: 0