
Article of the Week: Cognitive skills assessment during robot-assisted surgery

Every week the Editor-in-Chief selects the Article of the Week from the current issue of BJUI. The abstract is reproduced below and you can click on the button to read the full article, which is freely available to all readers for at least 30 days from the time of this post.

In addition to the article itself, there is an accompanying editorial written by a prominent member of the urological community. This blog is intended to provoke comment and discussion and we invite you to use the comment tools at the bottom of each post to join the conversation.

Finally, the third post under the Article of the Week heading on the homepage will consist of additional material or media. This week we feature a video from Dr Khurshid A. Guru discussing his paper. 

If you only have time to read one article this week, it should be this one.

Cognitive skills assessment during robot-assisted surgery: separating the wheat from the chaff

Khurshid A. Guru, Ehsan T. Esfahani†, Syed J. Raza, Rohit Bhat†, Katy Wang‡,
Yana Hammond, Gregory Wilding‡, James O. Peabody§ and Ashirwad J. Chowriappa

Department of Urology, Roswell Park Cancer Institute, Buffalo, NY; †Brain Computer Interface Laboratory, Department of Mechanical & Aerospace Engineering, University at Buffalo, Buffalo, NY; ‡Department of Biostatistics, Roswell Park Cancer Institute, Buffalo, NY; and §Henry Ford Health System, Detroit, MI, USA

OBJECTIVE

To investigate the utility of cognitive assessment during robot-assisted surgery (RAS) to define skills in terms of cognitive engagement, mental workload, and mental state, while objectively differentiating between novice and expert surgeons.

SUBJECTS AND METHODS

In all, 10 surgeons with varying operative experience were assigned to beginner (BG), combined competent and proficient (CPG), and expert (EG) groups based on the Dreyfus model. The participants performed tasks for basic, intermediate and advanced skills on the da Vinci Surgical System™. Participant performance was assessed using both tool-based and cognitive metrics.

RESULTS

Tool-based metrics showed significant differences between the BG and CPG, and between the BG and EG, in basic skills. In intermediate skills, there were significant differences only in instrument-to-instrument collisions between the BG and CPG (2.0 vs 0.2, P = 0.028), and between the BG and EG (2.0 vs 0.1, P = 0.018). There were no significant differences between the CPG and EG for either basic or intermediate skills. However, using cognitive metrics, there were significant differences between all groups for both basic and intermediate skills. In advanced skills, tool-based metrics showed no significant differences between the CPG and the EG except time (1116 vs 599.6 s). However, cognitive metrics revealed significant differences between the two groups.

CONCLUSION

Cognitive assessment of surgeons may aid in defining levels of expertise performing complex surgical tasks once competence is achieved. Cognitive assessment may be used as an adjunct to the traditional methods for skill assessment during RAS.

Editorial: Cognitive training and assessment in robotic surgery – is it effective?

A formal and standardised process of credentialing and certification is required, one based not merely on the number of completed cases but on demonstrated proficiency and safety in robotic procedural skills. Validated assessment tools for both technical and non-technical skills are therefore needed. In addition to effective technical skills, non-technical skills are vital for safe operative practice. These skill-sets fall into three categories: social (communication, leadership and teamwork), cognitive (decision-making, planning and situation awareness) and personal resource factors (the ability to cope with stress and fatigue) [1] (Fig. 1). Robotic surgeons are not exempt from requiring these skills; situation awareness, for example, may become even more significant when the surgeon is placed at a distance from the patient. Most of these skills can, just like technical skills, be trained and assessed.

Various assessment tools have been developed, e.g. the Non-Technical Skills for Surgeons (NOTSS) rating system [1], which provides useful insight into an individual's non-technical skill performance. The Observational Teamwork Assessment for Surgery (OTAS) rating scale has also been developed and is better suited to assessment of the operative team [2]. Decision-making (a cognitive skill) is considered one of the most advanced skill sets, and it consolidates exponentially with increasing clinical experience [3]. No structured method yet exists, however, for training and assessing this sub-set of skills.

The present paper by Guru et al. [4] describes an interesting objective method for evaluating the robot-assisted surgical proficiency of surgeons at different levels. It uses cognitive assessment tools, incorporating cognitive engagement, mental workload, and mental state, to define skill levels. The authors conclude from their results that cognitive assessment differentiates ability between beginner, competent and proficient, and expert surgeons more effectively than previously used objective methods, e.g. machine-based metrics.

Despite these positive results, we think further investigation is required before cognitive tools can be used reliably for assessment. The study included only 10 participants, with just two classified into the beginner cohort. This provides a limited cross-section of the demographic, and expansion of the competent and proficient and expert cohorts would be desirable. Furthermore, whilst cognitive assessment has potential as an assessment tool, its utility within surgical training is not discussed. Currently, cognitive assessment shows what stage a performer has reached in acquiring technical skills; it does not identify how the current level of skill might be improved. A tool that integrates constructive feedback is lacking. However, identifying the stage of learning within the steps of an individual procedure could provide this feedback: by demonstrating which steps demand higher cognitive input, areas requiring further training would be highlighted. In this way cognitive assessment could serve not only as a useful assessment tool but also within training.

The present paper [4] also highlights the current paucity of standardised assessment tools within robotics. Few tools have been developed specifically to address the technical aspects of robotic surgery. The Global Evaluative Assessment of Robotic Skills (GEARS) offers one validated assessment method [5], and several metrics recorded by the many available robotic simulators offer further validated methods of assessment [6]. These two approaches provide reliable means of both assessing and training technical skills for robotic procedures.

It is now evident that validated methods for assessment exist; however, technical and non-technical skills are currently assessed as separate entities. A true assessment of an individual's capability for robotic performance would be achieved by integrating these assessment tools. Any assessment procedure should therefore be conducted within a fully immersive environment, using both technical and non-technical assessment tools. Furthermore, standardisation of the assessment process is required before it can be used for selection and certification.

Cognitive assessment requires further criteria for differentiating skill levels. It nonetheless adds a valuable adjunct to current technical and non-technical skill assessment tools. Integration and standardisation of several assessment methods will be required to ensure a complete assessment process.

Oliver Brunckhorst and Kamran Ahmed

MRC Centre for Transplantation, King’s College London, King’s Health Partners, Department of Urology, Guy’s Hospital, London, UK

References

1 Yule S, Flin R, Paterson-Brown S, Maran N, Rowley D. Development of a rating system for surgeons’ non-technical skills. Med Educ 2006; 40: 1098–104

2 Undre S, Healey AN, Darzi A, Vincent CA. Observational assessment of surgical teamwork: a feasibility study. World J Surg 2006; 30: 1774–83

3 Flin R, Youngson G, Yule S. How do surgeons make intraoperative decisions? Qual Saf Health Care 2007; 16: 235–9

4 Guru KA, Esfahani ET, Raza SJ et al. Cognitive skills assessment during robot-assisted surgery: separating the wheat from the chaff. BJU Int 2015; 115: 166–74

5 Goh AC, Goldfarb DW, Sander JC, Miles BJ, Dunkin BJ. Global evaluative assessment of robotic skills: validation of a clinical assessment tool to measure robotic surgical skills. J Urol 2012; 187: 247–52

6 Abboudi H, Khan MS, Aboumarzouk O et al. Current status of validation for robotic surgery simulators – a systematic review. BJU Int 2013; 111: 194–205

Video: Separating the wheat from the chaff – Cognitive skills assessment during RA surgery


© 2024 BJU International. All Rights Reserved.