Comparative assessment of three standardized robotic surgery training methods
Andrew J. Hung, Isuru S. Jayaratna, Kara Teruya, Mihir M. Desai, Inderbir S. Gill and Alvin C. Goh*
USC Institute of Urology, Hillard and Roclyn Herzog Center for Robotic Surgery, Keck School of Medicine, University of Southern California, Los Angeles, CA, and *Department of Urology, Methodist Institute for Technology, Innovation and Education, The Methodist Hospital, Houston, TX, USA
OBJECTIVES
• To evaluate three standardized robotic surgery training methods (inanimate, virtual reality and in vivo) for construct validity.
• To explore the novel concept of cross-method validity, in which relative performance is compared across training methods.
MATERIALS AND METHODS
• Robotic surgical skills were prospectively assessed in 49 participating surgeons, classified as ‘novice/trainee’ (urology residents with previous experience of <30 cases; n = 38) or ‘expert’ (faculty surgeons with previous experience of ≥30 cases; n = 11).
• Three standardized, validated training methods were used: (i) structured inanimate tasks; (ii) virtual reality exercises on the da Vinci Skills Simulator (Intuitive Surgical, Sunnyvale, CA, USA); and (iii) a standardized robotic surgical task in a live porcine model with performance graded by the Global Evaluative Assessment of Robotic Skills (GEARS) tool.
• A Kruskal–Wallis test was used to evaluate performance differences between novices and experts (construct validity).
• Spearman’s correlation coefficient (ρ) was used to measure the association of performance across inanimate, simulation and in vivo methods (cross-method validity).
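The two analyses described above can be sketched as follows. This is an illustrative example using hypothetical scores, not the study data; the variable names and values are assumptions for demonstration only.

```python
# Illustrative sketch of the statistical analyses described above,
# using hypothetical scores (NOT the study data).
from scipy.stats import kruskal, spearmanr

# Hypothetical GEARS-style scores for two experience groups
novice_scores = [12, 14, 15, 13, 16, 14, 15]
expert_scores = [24, 26, 25, 27, 23, 26]

# Construct validity: test whether the groups differ
# (Kruskal-Wallis H-test on independent samples)
h_stat, p_value = kruskal(novice_scores, expert_scores)

# Cross-method validity: association between two performance measures.
# A negative rho can arise when one measure is time-based
# (lower = better) and the other is a score (higher = better).
task_times = [300, 280, 260, 150, 140, 130]  # seconds (hypothetical)
gears = [14, 15, 16, 24, 25, 26]             # GEARS scores (hypothetical)
rho, p_rho = spearmanr(task_times, gears)
```

Because both tests are rank-based, they make no normality assumption about the underlying scores, which suits ordinal rating-scale data such as GEARS.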
RESULTS
• Novice and expert surgeons had previously performed a median (range) of 0 (0–20) and 300 (30–2000) robotic cases, respectively (P < 0.001).
• Construct validity: experts consistently outperformed novices with all three methods (P < 0.001).
• Cross-method validity: overall performance of inanimate tasks significantly correlated with virtual reality robotic performance (ρ = −0.7, P < 0.001) and in vivo robotic performance based on GEARS (ρ = −0.8, P < 0.0001).
• Virtual reality performance and in vivo performance were also found to be strongly correlated (ρ = 0.6, P < 0.001).
CONCLUSIONS
• We propose the novel concept of cross-method validity, which may provide a method of evaluating the relative value of various forms of skills education and assessment.
• We externally confirmed the construct validity of each featured training tool.