
Article of the week: Robotic surgery training methods: take your pick

Every week the Editor-in-Chief selects the Article of the Week from the current issue of BJUI. The abstract is reproduced below and you can click on the button to read the full article, which is freely available to all readers for at least 30 days from the time of this post.

In addition to the article itself, there is an accompanying editorial written by a prominent member of the urological community. This blog is intended to provoke comment and discussion and we invite you to use the comment tools at the bottom of each post to join the conversation.

Finally, the third post under the Article of the Week heading on the homepage will consist of additional material or media. This week we feature a video of Dr. Goh discussing standardized robotic surgery training methods.

If you only have time to read one article this week, it should be this one.

Comparative assessment of three standardized robotic surgery training methods

Andrew J. Hung, Isuru S. Jayaratna, Kara Teruya, Mihir M. Desai, Inderbir S. Gill and Alvin C. Goh*

USC Institute of Urology, Hillard and Roclyn Herzog Center for Robotic Surgery, Keck School of Medicine, University of Southern California, Los Angeles, CA, and *Department of Urology, Methodist Institute for Technology, Innovation and Education, The Methodist Hospital, Houston, TX, USA

OBJECTIVES

• To evaluate three standardized robotic surgery training methods (inanimate, virtual reality and in vivo) for their construct validity.

• To explore the concept of cross-method validity, where the relative performance of each method is compared.

MATERIALS AND METHODS

• Robotic surgical skills were prospectively assessed in 49 participating surgeons, who were classified as follows: ‘novice/trainee’: urology residents, previous experience <30 cases (n = 38); and ‘experts’: faculty surgeons, previous experience ≥30 cases (n = 11).

• Three standardized, validated training methods were used: (i) structured inanimate tasks; (ii) virtual reality exercises on the da Vinci Skills Simulator (Intuitive Surgical, Sunnyvale, CA, USA); and (iii) a standardized robotic surgical task in a live porcine model with performance graded by the Global Evaluative Assessment of Robotic Skills (GEARS) tool.

• A Kruskal–Wallis test was used to evaluate performance differences between novices and experts (construct validity).

• Spearman’s correlation coefficient (ρ) was used to measure the association of performance across inanimate, simulation and in vivo methods (cross-method validity).
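For readers curious about the mechanics of the two statistical tests named above, the short Python sketch below shows how such an analysis could be run with SciPy. It is purely illustrative: the scores are hypothetical placeholders, not data from the study, and the authors' actual scoring data and software are not reproduced here.

```python
# Illustrative sketch only: all scores are hypothetical placeholders,
# not data from the study.
from scipy.stats import kruskal, spearmanr

# Hypothetical virtual reality composite scores for the two groups
novice_vr = [55, 62, 58, 60, 51, 64, 57]
expert_vr = [81, 88, 79, 85, 90]

# Construct validity: do experts outperform novices on the same method?
h_stat, p_construct = kruskal(novice_vr, expert_vr)
print(f"Kruskal-Wallis H = {h_stat:.2f}, P = {p_construct:.4f}")

# Cross-method validity: does a surgeon's performance on one method
# track their performance on another? Scores are paired per surgeon,
# in the same order in both lists.
vr_scores = [55, 62, 58, 60, 81, 88, 79, 85]
inanimate_scores = [48, 60, 52, 57, 78, 90, 75, 83]
rho, p_cross = spearmanr(vr_scores, inanimate_scores)
print(f"Spearman rho = {rho:.2f}, P = {p_cross:.4f}")
```

Both tests are rank-based, so they make no assumption that the underlying scores are normally distributed, which suits small groups of surgeons with highly skewed case experience.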

RESULTS

• Novice and expert surgeons had previously performed a median (range) of 0 (0–20) and 300 (30–2000) robotic cases, respectively (P < 0.001).

• Construct validity: experts consistently outperformed residents with all three methods (P < 0.001).

• Cross-method validity: overall performance of inanimate tasks significantly correlated with virtual reality robotic performance (ρ = −0.7, P < 0.001) and in vivo robotic performance based on GEARS (ρ = −0.8, P < 0.0001).

• Virtual reality performance and in vivo tissue performance were also found to be strongly correlated (ρ = 0.6, P < 0.001).
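A note on the sign of these correlations: a negative ρ reflects metrics scored in opposite directions rather than disagreement between methods. If, for example, one method were scored by completion time (lower is better) and another by a composite score (higher is better), perfectly consistent performance across the two would produce ρ = −1. The toy example below, with purely hypothetical numbers, makes this concrete.

```python
from scipy.stats import spearmanr

# Five hypothetical surgeons, ordered from best to worst performer.
composite_score = [95, 88, 75, 60, 42]     # higher is better
task_time_sec = [140, 155, 180, 220, 300]  # lower is better

rho, p = spearmanr(composite_score, task_time_sec)
print(rho)  # -1.0: identical ranking, opposite scoring directions
```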

CONCLUSIONS

• We propose the novel concept of cross-method validity, which may provide a method of evaluating the relative value of various forms of skills education and assessment.

• We externally confirmed the construct validity of each featured training tool.

 


Editorial: Three robotic surgery training methods: is there a clear winner?

All training adds value. A craft-based specialty such as surgery has always recognised this. The advent of advanced minimally invasive surgical technology and techniques has provided both new challenges and new opportunities for surgical performance and for the delivery of training. Conceptually, we have moved from the Halstedian model of ‘See one, do one, teach one’ [1] to an environment where skills are acquired away from the operating room in simulator, inanimate and in vivo (animal) laboratory training sessions. Increased scrutiny of credentialling and the medico-legal aspects of robotic surgery has reinforced the importance of training and has led to a number of papers outlining pathways to facilitate this [2, 3].

In the present paper, Hung et al. evaluate the construct validity of three standardised training methods (inanimate, simulator and in vivo) and also compare the three different platforms for cross-method training value. As others have shown, the latest generation of robotic surgery simulators have high face, content and construct validity [4, 5], and the present paper confirms the value of both inanimate and simulator training for novice surgeons. In addition, the authors confirmed the construct validity of a simple in vivo exercise using the da Vinci surgical system by demonstrating that experts outperformed novices. Using Spearman’s rank correlation coefficient, the authors compared performance across the three training methods under evaluation and concluded that the methods were strongly correlated for both expert and novice surgeons. While construct validation of these exercises may be established, are they useful for experts? Until realistic virtual reality surgical simulations are available, only a novice, inexperienced or occasional robot-assisted surgeon may benefit from virtual reality exercises.

What are we therefore to conclude from this? Certainly, the advent of excellent surgical simulators and structured inanimate exercises has provided tools for novice surgeons to acquire console skills in a safe and structured environment. This will enhance their operating performance and reduce aspects of the learning curve such as operating time; however, the limited availability of in vivo training opportunities greatly restricts the applicability of this method of surgical training. In many countries (including Australia and the UK), this type of training is illegal or unavailable. The robotic surgery industry has strongly recommended that in vivo training be undertaken in one of its official training facilities before surgeons are given the credentials to use this technology; however, even in the USA, where most of these facilities are located, key leaders within the AUA have called for the awarding of credentials for robotic surgery ‘not to be an industry driven process, but one that is a result of a standardized, competency based, peer evaluation system’ [2]. Notably, the current AUA Standard Operating Practices (guidelines) for the awarding of credentials for robotic surgery list in vivo training as optional.

Our view is that although all training has value, there is not enough evidence that in vivo training (particularly on an animal with a rudimentary prostate), which requires international travel and considerable expense, adds sufficient value to be mandatory in any credentialling process. Indeed, we have dropped in vivo training from our requirements at major robotic surgery centres in Australia in favour of structured Mini-Fellowship training [6]. Hung et al. have confirmed what we already knew: all training adds value. However, it is likely that only simulator and inanimate training adds enough value to be incorporated into standardised training in robotic surgery.

The multi-disciplinary ‘Fundamentals of Robotic Surgery’ (FRS) curriculum being created by Dr Richard Satava and associates includes psychomotor skills tasks based on inanimate models as well as corresponding virtual reality exercises. Multi-institutional validation of the FRS or similar curricula will allow the establishment of training milestones and proficiency benchmarks. We must continue to strive for the further development of robotic and surgical simulation to change the training paradigm, so that surgical training no longer comes at the cost, however minor, of increased operating time or adverse patient outcomes.

Declan G. Murphy* and Chandru P. Sundaram
*Peter MacCallum Cancer Centre, Division of Cancer Surgery, University of Melbourne, Australian Prostate Cancer Research Centre, Epworth Richmond Hospital, Melbourne, Australia, and Department of Urology, Indiana University, Indianapolis, IN, USA

References

  1. Halsted WS. The training of the surgeon. Bull Johns Hopkins Hosp 1904; 15: 8
  2. Lee JY, Mucksavage P, Sundaram CP, McDougall EM. Best practices for robotic surgery training and credentialing. J Urol 2011; 185: 1191–1197
  3. Zorn KC, Gautam G, Shalhav AL et al. Training, credentialing, proctoring and medicolegal risks of robotic urological surgery: recommendations of the Society of Urologic Robotic Surgeons. J Urol 2009; 182: 1126–1132
  4. Finnegan KT, Meraney AM, Staff I, Shichman SJ. da Vinci Skills Simulator construct validation study: correlation of prior robotic experience with overall score and time score simulator performance. Urology 2012; 80: 330–335
  5. Abboudi H, Khan MS, Aboumarzouk O et al. Current status of validation for robotic surgery simulators – a systematic review. BJU Int 2013; 111: 194–205
  6. Melbourne Uro-Oncology Training Program. Robotic surgery training. Available at: https://www.declanmurphy.com.au/training. Accessed 28 February 2013

Video: Take three: assessing robotic surgery training methods

