
Article of the Week: Robotic surgery training methods: take your pick

Every week the Editor-in-Chief selects the Article of the Week from the current issue of BJUI. The abstract is reproduced below and you can click on the button to read the full article, which is freely available to all readers for at least 30 days from the time of this post.

In addition to the article itself, there is an accompanying editorial written by a prominent member of the urological community. This blog is intended to provoke comment and discussion, and we invite you to use the comment tools at the bottom of each post to join the conversation.

Finally, the third post under the Article of the Week heading on the homepage will consist of additional material or media. This week we feature a video of Dr. Goh discussing standardized robotic surgery training methods.

If you only have time to read one article this week, it should be this one.

Comparative assessment of three standardized robotic surgery training methods

Andrew J. Hung, Isuru S. Jayaratna, Kara Teruya, Mihir M. Desai, Inderbir S. Gill and Alvin C. Goh*

USC Institute of Urology, Hillard and Roclyn Herzog Center for Robotic Surgery, Keck School of Medicine, University of Southern California, Los Angeles, CA, and *Department of Urology, Methodist Institute for Technology, Innovation and Education, The Methodist Hospital, Houston, TX, USA

OBJECTIVES

• To evaluate three standardized robotic surgery training methods (inanimate, virtual reality and in vivo) for their construct validity.

• To explore the concept of cross-method validity, where the relative performance of each method is compared.

MATERIALS AND METHODS

• Robotic surgical skills were prospectively assessed in 49 participating surgeons, classified as ‘novices/trainees’ (urology residents with previous experience of <30 cases; n = 38) or ‘experts’ (faculty surgeons with previous experience of ≥30 cases; n = 11).

• Three standardized, validated training methods were used: (i) structured inanimate tasks; (ii) virtual reality exercises on the da Vinci Skills Simulator (Intuitive Surgical, Sunnyvale, CA, USA); and (iii) a standardized robotic surgical task in a live porcine model with performance graded by the Global Evaluative Assessment of Robotic Skills (GEARS) tool.

• A Kruskal–Wallis test was used to evaluate performance differences between novices and experts (construct validity).

• Spearman’s correlation coefficient (ρ) was used to measure the association of performance across inanimate, simulation and in vivo methods (cross-method validity); a sketch of both analyses appears after this list.
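For readers curious about the mechanics of these two analyses, here is a minimal sketch in Python using SciPy. This is not the authors' analysis code: the scores below are randomly generated placeholders, and the scoring scales are assumptions (the negative ρ values reported in the Results most likely reflect opposite scoring directions across methods, e.g. task completion time, where lower is better, against a skill score, where higher is better).

```python
# Minimal sketch of the two validity analyses on hypothetical data:
# Kruskal-Wallis for construct validity, Spearman's rho for
# cross-method validity.
import numpy as np
from scipy.stats import kruskal, spearmanr

rng = np.random.default_rng(0)

# Hypothetical GEARS-like scores: 38 novices and 11 experts on one method.
novices = rng.normal(loc=14, scale=3, size=38)
experts = rng.normal(loc=22, scale=2, size=11)

# Construct validity: does the method separate experts from novices?
h, p = kruskal(novices, experts)
print(f"Kruskal-Wallis H = {h:.2f}, P = {p:.4g}")

# Cross-method validity: rank-correlate the same 49 surgeons' performance
# on two methods, here a hypothetical inanimate task time (lower = better)
# against a GEARS-like score (higher = better), which yields a negative rho.
times = rng.normal(loc=300, scale=60, size=49)               # seconds
scores = 30 - 0.05 * times + rng.normal(scale=1.5, size=49)  # skill score
rho, p = spearmanr(times, scores)
print(f"Spearman rho = {rho:.2f}, P = {p:.4g}")
```

With only two groups, the Kruskal–Wallis test is equivalent to a two-sided Mann–Whitney U test; rank-based methods like these suit the small, skewed samples typical of surgical-experience data.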

RESULTS

• Novice and expert surgeons had previously performed a median (range) of 0 (0–20) and 300 (30–2000) robotic cases, respectively (P < 0.001).

• Construct validity: experts consistently outperformed residents with all three methods (P < 0.001).

• Cross-method validity: overall performance of inanimate tasks significantly correlated with virtual reality robotic performance (ρ = −0.7, P < 0.001) and in vivo robotic performance based on GEARS (ρ = −0.8, P < 0.0001).

• Virtual reality performance and in vivo tissue performance were also found to be strongly correlated (ρ = 0.6, P < 0.001).

CONCLUSIONS

• We propose the novel concept of cross-method validity, which may provide a method of evaluating the relative value of various forms of skills education and assessment.

• We externally confirmed the construct validity of each featured training tool.


Read Previous Articles of the Week

