Robot-assisted minimally invasive surgery (RAMIS) offers numerous benefits over traditional open surgery, leading to its growing prevalence and an expanding range of approved procedures. This proliferation has highlighted the need for effective, robust, and objective methods of assessing robotic surgical skill. Traditionally, assessment has relied on expert observation using structured grading rubrics. Although validated and widely used, this approach is resource intensive and subject to reviewer bias. In response, recent work has explored more robust assessment methods, including skill-based metrics, crowd-sourced assessment techniques, and automated evaluation systems. This review summarizes recent developments in robotic surgical technical skill assessment, focusing on studies using the da Vinci platform. Assessment methods are grouped into four categories: structured rubrics, skill-based metrics, crowd-sourcing techniques, and automated assessment models. Notable trends include the adaptation of established rubrics to specific surgical specialties, the implementation of deep learning models for automated assessment, and the integration of crowd-sourcing platforms for efficient, inexpensive evaluation. While traditional grading rubrics remain the standard, multilevel assessment strategies and objective feedback systems are gaining traction. Future work should integrate task- and movement-based assessment into procedure-level evaluations to create more robust and generalizable assessment models. These advances reflect a shift towards data-driven, objective assessment methods, which could improve surgical training and patient outcomes.