Tag Archive for: machine learning

Posts

Video: Predicting intra‐ and post-operative consequential events using machine‐learning techniques in RAPN

Read the full article
View more videos

BJUI journal prizes

Every year the BJUI awards three prizes to trainee urologists who have played a significant role in contributing to the work published in the journal. The prizes go towards travel costs, enabling the trainees to visit international conferences. In 2020, because the coronavirus pandemic led to the cancellation of many of these conferences, the usual prize-giving ceremonies did not take place, so here we introduce the prize winners and their work. We hope they will be able to spend their prize money in 2021.

Global prize

This prize is awarded to authors who are trainees based anywhere in the world other than the Americas and Europe; it is usually presented at the USANZ annual meeting. In 2020 the prize was awarded to Sho Uehara for his work on artificial intelligence in prostate cancer diagnosis.

Sho Uehara, MD, Ph.D., Tokyo, Japan
Assistant professor, Department of Urology
Tokyo Medical and Dental University


Sho Uehara received a Ph.D. from the Graduate School of Tokyo Medical and Dental University, Tokyo, Japan, in 2018. He now works as a urologist and assistant professor at the university hospital. His research interests include prostate cancer diagnostics and the application of machine learning to it.

Membership of academic societies:

JUA (The Japanese Urological Association), EAU (European Association of Urology) and AUA (American Urological Association)

 

Coffey-Krane prize

The Coffey-Krane prize is awarded to an author who is a trainee based in the Americas; it is normally presented at the AUA annual conference. Dr Nathan Wong received this year’s award for his work on using machine learning to predict biochemical recurrence of prostate cancer following prostatectomy.

Dr Nathan Wong
Associate Professor
Westchester Medical Center and New York Medical College

Dr Nathan Wong is an assistant professor and associate program director in the Department of Urology at Westchester Medical Center and New York Medical College. He specializes in urologic oncology and robotic surgery. His main interests are technology, clinical trials and surgical education. He completed a Society of Urologic Oncology fellowship at Memorial Sloan Kettering Cancer Center in New York City and urology residency at McMaster University in Hamilton, Ontario, Canada.

 

John Blandy prize

This prize is for authors who are trainees based in Europe; it is presented at the BAUS annual conference, where the winner gives a presentation. This year the prize went to Nicholas Raison for his work on a randomized controlled trial of cognitive training in robotic surgery.

Nicholas Raison is the Vattikuti Fellow at the MRC Centre for Transplantation and Mucosal Cell Biology, King’s College London, and a Urology Specialist Registrar in the London Deanery.

Article of the week: A machine learning‐assisted decision‐support model to better identify patients with PCa requiring an extended pelvic lymph node dissection

Every week, the Editor-in-Chief selects an Article of the Week from the current issue of BJUI. The abstract is reproduced below and you can click on the button to read the full article, which is freely available to all readers for at least 30 days from the time of this post.

In addition to the article itself, there is an editorial written by a prominent member of the urology community and a video prepared by the authors; we invite you to use the comment tools at the bottom of each post to join the conversation. 

If you only have time to read one article this week, it should be this one. Merry Christmas!

A machine learning‐assisted decision‐support model to better identify patients with prostate cancer requiring an extended pelvic lymph node dissection

Ying Hou*, Mei-Ling Bao, Chen-Jiang Wu*, Jing Zhang*, Yu-Dong Zhang* and Hai-Bin Shi*

*Department of Radiology and Department of Pathology, The First Affiliated Hospital with Nanjing Medical University, Nanjing, Jiangsu Province, China

Read the full article

Abstract

Objectives

To develop a machine learning (ML)‐assisted model to identify candidates for extended pelvic lymph node dissection (ePLND) in prostate cancer by integrating clinical, biopsy, and precisely defined magnetic resonance imaging (MRI) findings.

Patients and Methods

In all, 248 patients treated with radical prostatectomy and ePLND or PLND were included. ML‐assisted models were developed from 18 integrated features using logistic regression (LR), support vector machine (SVM), and random forests (RFs). The models were compared to the Memorial Sloan Kettering Cancer Center (MSKCC) nomogram using receiver operating characteristic‐derived area under the curve (AUC), calibration plots and decision curve analysis (DCA).
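To make the modelling step concrete, here is a minimal sketch of how three such classifiers can be fitted to a tabular feature matrix and compared by cross-validated AUC. It uses scikit-learn and randomly generated placeholder data (`X`, `y`) standing in for the 18 integrated features and the LNI labels; it is not the authors' code or their tuned models.

```python
# Hedged sketch: fit LR, SVM and RF classifiers on placeholder data and
# compare cross-validated ROC AUCs. X and y are random stand-ins, so the
# printed AUCs will hover around 0.5; real study features would be used here.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(248, 18))      # placeholder for the 18 integrated features
y = rng.integers(0, 2, size=248)    # placeholder for lymph node invasion labels

models = {
    "LR": LogisticRegression(max_iter=1000),
    "SVM": SVC(probability=True),   # probability=True enables predict_proba
    "RF": RandomForestClassifier(n_estimators=500, random_state=0),
}

for name, model in models.items():
    # Out-of-fold probabilities avoid optimistic, in-sample AUC estimates.
    proba = cross_val_predict(model, X, y, cv=5, method="predict_proba")[:, 1]
    print(f"{name}: cross-validated AUC = {roc_auc_score(y, proba):.3f}")
```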

Results

A total of 59/248 (23.8%) lymph node invasions (LNIs) were identified at surgery. The ML‐based models, with (+) or without (−) MRI‐reported LNI, yielded similar AUCs (RFs+/RFs: 0.906/0.885; SVM+/SVM: 0.891/0.868; LR+/LR: 0.886/0.882), all higher than that of the MSKCC nomogram (0.816; P < 0.001). The calibration of the MSKCC nomogram tended to underestimate LNI risk across the entire range of predicted probabilities compared to the ML‐assisted models. The DCA showed that the ML‐assisted models significantly improved risk prediction at a risk threshold of ≤80% compared to the MSKCC nomogram. If the number of ePLNDs missed was controlled at <3%, both RFs+ and RFs resulted in a higher positive predictive value (51.4%/49.6% vs 40.3%), a similar negative predictive value (97.2%/97.8% vs 97.2%), and a higher number of ePLNDs spared (56.9%/54.4% vs 43.9%) compared to the MSKCC nomogram.
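For readers unfamiliar with decision curve analysis, the sketch below shows the net-benefit calculation that underlies a DCA comparison. The labels and predicted risks are hypothetical examples, not the study data.

```python
# Hedged sketch of the net-benefit calculation behind a decision curve
# analysis (DCA). y_true and risk are invented inputs for illustration.
import numpy as np

def net_benefit(y_true, risk, threshold):
    """Net benefit of acting (performing ePLND) when predicted risk >= threshold."""
    y_true = np.asarray(y_true)
    act = np.asarray(risk) >= threshold
    n = len(y_true)
    tp = np.sum(act & (y_true == 1))   # true positives among those treated
    fp = np.sum(act & (y_true == 0))   # false positives among those treated
    return tp / n - (fp / n) * (threshold / (1 - threshold))

def net_benefit_treat_all(y_true, threshold):
    """Reference strategy: perform ePLND in every patient."""
    prevalence = np.mean(y_true)
    return prevalence - (1 - prevalence) * (threshold / (1 - threshold))

# Compare a model's net benefit with 'treat all' across a few thresholds.
y_true = np.array([0, 1, 0, 0, 1, 0, 1, 0])
risk = np.array([0.1, 0.8, 0.3, 0.05, 0.6, 0.2, 0.4, 0.15])
for t in (0.05, 0.10, 0.15):
    print(t, net_benefit(y_true, risk, t), net_benefit_treat_all(y_true, t))
```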

Conclusions

Our ML‐based model, with a 5–15% cutoff, is superior to the MSKCC nomogram, sparing ≥50% of ePLNDs with a risk of missing <3% of LNIs.
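The trade-off quantified in this conclusion (ePLNDs spared versus LNIs missed at a 5–15% risk cutoff) can be illustrated with a short calculation. The sketch below uses invented labels and risks purely to show how the spared proportion, PPV and NPV reported in the abstract are derived from a cutoff; it is not based on the study data.

```python
# Hedged sketch: consequences of applying a risk cutoff to decide who
# undergoes ePLND. y_true and risk are hypothetical model outputs.
import numpy as np

def dissection_tradeoff(y_true, risk, cutoff):
    """Perform ePLND if predicted risk >= cutoff, spare it otherwise."""
    y_true = np.asarray(y_true)
    dissect = np.asarray(risk) >= cutoff
    spared = ~dissect
    return {
        "proportion of ePLNDs spared": spared.mean(),
        "PPV among dissected": y_true[dissect].mean() if dissect.any() else float("nan"),
        # risk of a missed LNI among spared patients equals 1 - NPV
        "NPV among spared": 1 - y_true[spared].mean() if spared.any() else float("nan"),
    }

y_true = np.array([0, 1, 0, 0, 1, 0, 0, 0, 1, 0])   # hypothetical LNI labels
risk = np.array([0.04, 0.62, 0.08, 0.12, 0.33, 0.02, 0.18, 0.09, 0.45, 0.07])
print(dissection_tradeoff(y_true, risk, cutoff=0.10))
```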

 

Read more Articles of the week

Editorial: A better way to predict lymph node involvement using machine learning?

In their study in this issue of BJUI, Hou et al. [1] use machine‐learning algorithms to evaluate several preoperative clinical variables (highlighting specific MRI findings of locally advanced prostate cancer) to determine whether lymph node involvement (LNI) could be present during radical prostatectomy, which would justify an extended pelvic lymph node dissection (PLND). This is a well‐designed study with scientific rigour, providing evidence‐based justifications and definitions (i.e. of relevant MRI findings). The authors successfully illustrate a practical application of using artificial intelligence (AI) methods to augment clinical decision‐making prior to and during surgery compared to today’s ‘gold standard’ (nomograms).

For many years, the Memorial Sloan Kettering Cancer Centre (MSKCC) nomogram, among a number of predictive models, has been used to determine the probability of LNI. The output of these tools has assisted surgeons in determining whether to perform a PLND, and if so, to what extent [2,3,4]. The authors hypothesize that, with additional MRI parameters not previously used, machine‐learning algorithms can better select which patients are more likely to have LNI and will therefore require extended PLND. In fact, the authors report that the MSKCC nomogram and conventional MRI reporting of LNI consistently underestimated LNI risk compared to the machine‐learning‐assisted models presented in their study. The outputs of the present models would allow a higher number of extended PLNDs to be spared compared to reliance on the MSKCC nomogram alone. It was appropriate to use several existing AI models in this study, as it is never readily apparent initially which existing predictive model may perform best with a given dataset. In fact, all the models used – logistic regression (LR), support vector machine (SVM) and random forest (RF) – while similar in performance to each other, outperformed the MSKCC nomogram (P < 0.001). Many adjustments were probably performed for each model to tailor it to the dataset and optimize prediction performance.
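As an illustration of the kind of 'adjustments' referred to above, the sketch below shows a typical hyperparameter search for one model family (a random forest), scored by cross-validated AUC. The grid and the random placeholder data are hypothetical; the actual tuning performed by Hou et al. is not described here.

```python
# Hedged sketch of per-model hyperparameter tuning with cross-validated AUC.
# X and y are random placeholders for the study's features and LNI labels.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
X = rng.normal(size=(248, 18))
y = rng.integers(0, 2, size=248)

param_grid = {
    "n_estimators": [200, 500, 1000],
    "max_depth": [3, 5, None],
    "min_samples_leaf": [1, 5, 10],
}
search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid,
    scoring="roc_auc",   # select hyperparameters by cross-validated AUC
    cv=5,
)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```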

Criticisms of the study are that: (i) cases for which PLND was not performed were excluded, which could have created a selection bias; (ii) the model would only be applicable when the patient has undergone MRI; (iii) the study was conducted at a single institution in a small sample (AI methods thrive on big and diverse datasets).

This study by Hou et al. is a great example of a machine‐learning application that may positively impact clinical practice. For many years, we have relied on nomograms, but with increasing use of MRI, additional factors should also be included, as Hou et al. have done. Machine learning is particularly adept at simultaneously examining numerous variables to elicit which ones may contribute best to a particular outcome. As BJUI has evaluated many manuscripts examining machine‐learning methods for clinical decision‐making in the past year, we have encouraged authors to use present‐day gold‐standard methods, such as the MSKCC nomogram, as controls [5]. As we embrace AI methods, we must keep one eye on the tried and tested conventional ways. This ensures that we do not take backward steps but rather take forward steps responsibly. As in recent AI studies published in BJUI, the sample size in this study was relatively small. External validation in a multicentre study on larger datasets is highly recommended.

by Andrew J. Hung

References

  1. Hou Y, Bao M, Wu CJ, Zhang J, Zhang YD, Shi HB. A machine learning‐assisted decision‐support model to better identify patients with prostate cancer requiring an extended pelvic lymph node dissection. BJU Int 2019; 124: 972–83
  2. Briganti A, Larcher A, Abdollah F et al. Updated nomogram predicting lymph node invasion in patients with prostate cancer undergoing extended pelvic lymph node dissection: the essential importance of percentage of positive cores. Eur Urol 2012; 61: 480–7
  3. Memorial Sloan Kettering Cancer Center. Dynamic prostate cancer nomogram: coefficients. Accessed April 2018
  4. Tosoian JJ, Chappidi M, Feng Z et al. Prediction of pathological stage based on clinical stage, serum prostate-specific antigen, and biopsy Gleason score: Partin Tables in the contemporary era. BJU Int 2017; 119: 676–83
  5. Hung AJ. Can machine‐learning algorithms replace conventional statistics? BJU Int 2018; 123: 1

 

Video: Machine learning‐assisted decision‐support model to identify PCa patients requiring an extended PLND

A machine learning‐assisted decision‐support model to better identify patients with prostate cancer requiring an extended pelvic lymph node dissection

Read the full article

Abstract

Objectives

To develop a machine learning (ML)‐assisted model to identify candidates for extended pelvic lymph node dissection (ePLND) in prostate cancer by integrating clinical, biopsy, and precisely defined magnetic resonance imaging (MRI) findings.

Patients and Methods

In all, 248 patients treated with radical prostatectomy and ePLND or PLND were included. ML‐assisted models were developed from 18 integrated features using logistic regression (LR), support vector machine (SVM), and random forests (RFs). The models were compared to the Memorial Sloan Kettering Cancer Center (MSKCC) nomogram using receiver operating characteristic‐derived area under the curve (AUC), calibration plots and decision curve analysis (DCA).

Results

A total of 59/248 (23.8%) lymph node invasions (LNIs) were identified at surgery. The ML‐based models, with (+) or without (−) MRI‐reported LNI, yielded similar AUCs (RFs+/RFs: 0.906/0.885; SVM+/SVM: 0.891/0.868; LR+/LR: 0.886/0.882), all higher than that of the MSKCC nomogram (0.816; P < 0.001). The calibration of the MSKCC nomogram tended to underestimate LNI risk across the entire range of predicted probabilities compared to the ML‐assisted models. The DCA showed that the ML‐assisted models significantly improved risk prediction at a risk threshold of ≤80% compared to the MSKCC nomogram. If the number of ePLNDs missed was controlled at <3%, both RFs+ and RFs resulted in a higher positive predictive value (51.4%/49.6% vs 40.3%), a similar negative predictive value (97.2%/97.8% vs 97.2%), and a higher number of ePLNDs spared (56.9%/54.4% vs 43.9%) compared to the MSKCC nomogram.

Conclusions

Our ML‐based model, with a 5–15% cutoff, is superior to the MSKCC nomogram, sparing ≥50% of ePLNDs with a risk of missing <3% of LNIs.

View more videos

Article of the month: Current status of artificial intelligence applications in urology and their potential to influence clinical practice

Every month, the Editor-in-Chief selects an Article of the Month from the current issue of BJUI. The abstract is reproduced below and you can click on the button to read the full article, which is freely available to all readers for at least 30 days from the time of this post.

In addition to the article itself, there is an editorial and a visual abstract produced by prominent members of the urological community. These are intended to provoke comment and discussion and we invite you to use the comment tools at the bottom of each post to join the conversation.

If you only have time to read one article this month, it should be this one.

Current status of artificial intelligence applications in urology and their potential to influence clinical practice

Jian Chen*, Daphne Remulla*, Jessica H. Nguyen*, D. Aastha, Yan Liu, Prokar Dasgupta and Andrew J. Hung*

*Catherine & Joseph Aresty Department of Urology, Center for Robotic Simulation & Education, University of Southern California Institute of Urology, Computer Science Department, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, USA, and Division of Transplantation Immunology and Mucosal Biology, Faculty of Life Sciences and Medicine, King’s College London, London, UK

Read the full article

Abstract

Objective

To investigate the applications of artificial intelligence (AI) in diagnosis, treatment and outcome prediction in urologic diseases and evaluate its advantages over traditional models and methods.

Materials and methods

A literature search was performed after PROSPERO registration (CRD42018103701) and in compliance with Preferred Reporting Items for Systematic Reviews and Meta‐Analyses (PRISMA) methods. Articles published between 1994 and 2018 matching the search terms “urology”, “artificial intelligence” and “machine learning” were included and categorized by the application of AI in urology. Review articles, editorial comments, articles with no full‐text access, and non-urologic studies were excluded.

Results

The initial search yielded 231 articles; after duplicates were excluded and full‐text review and examination of article references were completed, 111 articles were included in the final analysis. AI applications in urology include: utilizing radiomic imaging or ultrasonic echo data to improve or automate cancer detection or outcome prediction, utilizing digitized tissue specimen images to automate detection of cancer on pathology slides, and combining patient clinical data, biomarkers, or gene expression to assist disease diagnosis or outcome prediction. Some studies employed AI to plan brachytherapy and radiation treatments, while others used video‐based or robotic automated performance metrics to objectively evaluate surgical skill. Compared to conventional statistical analysis, 71.8% of studies concluded that AI is superior in diagnosis and outcome prediction.

Conclusion

AI has been widely adopted in urology. Compared to conventional statistics, AI approaches are more accurate in prediction and more exploratory in analyzing large data cohorts. With an increasing library of patient data accessible to clinicians, AI may help facilitate evidence‐based and individualized patient care.

Read more Articles of the week

 

Editorial: Machines in urology: a brief odyssey of the future

Artificial intelligence (AI) will bring in a new wave of changes in the medical field, likely altering how we practice medicine. In a timely contribution, Chen et al. [1] outline the current landscape of AI and provide us with a glimpse of the future, in which sophisticated computers and algorithms play a front-and-centre role in the daily hospital routine.

Widespread adoption of electronic medical records (EMRs), an ever-increasing amount of radiographic imaging, and the ubiquity of genome sequencing, among other factors, have created an impossibly large body of medical data. This poses obvious challenges for clinicians to remain abreast of new discoveries, but also presents new opportunities for scientific discovery. AI is the inevitable and much-needed tool with which to harness the ‘big data’ of medicine.

Currently, the most immediate and important application of AI appears to be in the field of diagnostics and radiology. In prostate cancer, for example, machine learning algorithms (MLAs) are not only able to automate radiographic detection of prostate cancer but have also been shown to improve diagnostic accuracy compared to standard clinical scoring schemes. MLAs can use clinicopathological data to predict clinically significant prostate cancer and disease recurrence with a high degree of accuracy. The same has been shown for other urological malignancies, including urothelial cancer and RCC. Implementation of MLAs will lead to improved accuracy and reproducibility, reducing human bias and variability. We also predict that as natural language processing becomes more sophisticated, the troves of unstructured data that exist in EMRs will be harnessed to deliver improved and more personalized patient care. Patient data and clinical outcomes can be analysed in a short time, drawing from a deep body of knowledge, and leading to rapid insights that can guide medical decision-making.

Current AI technology, however, remains experimental and we are still far from the widespread implementation of AI within clinical medicine. A valid criticism of today’s AI is that it functions as a ‘black box’; the rules that govern the clinical decision-making of an algorithm are often poorly understood or unknowable. We cannot become operators of machines whose workings we do not understand; to do so would be to practice medicine blindly.

Another barrier to incorporating AI into common practice is the level of noise in healthcare data. MLAs will use whatever data are fed to the algorithm, thus running the risk of producing predictive models that include nonsensical variables gleaned from the noise. This concept is similar to multiple hypothesis-testing: if you feed enough random information into a model, a pattern might emerge. Furthermore, none of the studies described by Chen et al. have been externally validated on large, representative datasets of diverse patients. MLAs trained on a narrow patient population run the risk of creating predictions that are not generalizable. This problem has already been highlighted in genome analysis, where one study found that 81% of participants in genome-wide association studies were of European ancestry [2]. It is easy to imagine situations where risk score calculators or biomarkers are validated using non-representative datasets, leading to less accurate and even inappropriate treatment decisions for underrepresented patient populations. At best, MLAs that are not validated using stringent principles can lead to erroneous disease models. At worst, they can bias the delivery of healthcare to patients, leading to worse patient outcomes and exacerbation of healthcare disparities.
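A toy illustration of the noise problem described above (not taken from the editorial or from Chen et al.): with many uninformative features and few patients, a model can look excellent on the data it was trained on while being no better than chance on held-out data.

```python
# Toy demonstration: 200 features of pure noise, 100 patients, labels that are
# unrelated to the features. The apparent (training) AUC is near-perfect,
# while the cross-validated AUC collapses to roughly chance level.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
X = rng.normal(size=(100, 200))       # 200 noise features
y = rng.integers(0, 2, size=100)      # labels independent of the features

model = LogisticRegression(max_iter=5000)
model.fit(X, y)
train_auc = roc_auc_score(y, model.predict_proba(X)[:, 1])
cv_auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()

print(f"apparent (training) AUC: {train_auc:.2f}")   # close to 1.0
print(f"cross-validated AUC:     {cv_auc:.2f}")      # close to 0.5
```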

Chen et al. write of the possibility of AI in urology today. What about the future? Imagine a world in which computers with a robotic interface see patients in clinics, design and carry out complex medical treatment plans, and perform surgery without the aid of a human hand. This future may not be far off [3]. Or, even stranger, consider a world in which generalizable AI exists. Estimates of when this technology will arrive vary, but the most optimistic projections put the timeline on the order of 20–30 years. Not far behind could be the ‘singularity’, a moment when technological advancement occurs at such an exponential rate that improbable scientific discoveries happen almost instantaneously, setting off a feed-forward cycle leading to an inconceivable superintelligence.

The future is, of course, hard to predict. Nevertheless, AI and the ensuing technology will certainly transform the practice of urology, albeit not without significant challenges and growing pains along the way. The urologist of the future may look very different indeed.

by Stephen W. Reese, Emily Ji, Aliya Sahraoui and Quoc-Dien Trinh

 

References

  1. Chen J, Remulla D, Nguyen JH et al. Current status of artificial intelligence applications in urology and their potential to influence clinical practice. BJU Int 2019; 124: 567–77
  2. Popejoy AB, Fullerton SM. Genomics is failing on diversity. Nature 2016; 538: 161–4
  3. Grace K, Salvatier J, Dafoe A, Zhang B, Evans O. When Will AI Exceed Human Performance? Evidence from AI Experts, 2017

 

Visual abstract: Current status of artificial intelligence applications in urology and their potential to influence clinical practice

See more infographics

Article of the Month: Use of machine learning to predict early biochemical recurrence after robot‐assisted prostatectomy

Every month, the Editor-in-Chief selects an Article of the Month from the current issue of BJUI. The abstract is reproduced below and you can click on the button to read the full article, which is freely available to all readers for at least 30 days from the time of this post.

In addition to the article itself, there is an accompanying editorial written by a prominent member of the urological community. This blog is intended to provoke comment and discussion and we invite you to use the comment tools at the bottom of each post to join the conversation.

If you only have time to read one article this week, it should be this one.

Use of machine learning to predict early biochemical recurrence after robot‐assisted prostatectomy

Nathan C. Wong, Cameron Lam, Lisa Patterson and Bobby Shayegan
Division of Urology, Department of Surgery, McMaster University, Hamilton, ON, Canada

Read the full article

Visual abstract created by Rebecca Fisher @beckybeckyfish

Abstract

Objectives

To train and compare machine‐learning algorithms with traditional regression analysis for the prediction of early biochemical recurrence after robot‐assisted prostatectomy.

Patients and Methods

A prospectively collected dataset of 338 patients who underwent robot‐assisted prostatectomy for localized prostate cancer was examined. We used three supervised machine‐learning algorithms and 19 different training variables (demographic, clinical, imaging and operative data) in a hypothesis‐free manner to build models that could predict which patients would have biochemical recurrence at 1 year. We also performed traditional Cox regression analysis for comparison.
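A minimal sketch of the kind of comparison described here, assuming lifelines for the traditional Cox model and scikit-learn for the classifiers. The DataFrame, its column names and the two illustrative covariates are hypothetical placeholders; the study's 19 variables and exact modelling choices are not reproduced.

```python
# Hedged sketch: a Cox proportional hazards model for time to biochemical
# recurrence alongside three classifiers predicting recurrence within 1 year.
# The DataFrame and its columns are invented placeholders, not the study data.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 338
df = pd.DataFrame({
    "psa": rng.lognormal(2.0, 0.5, n),         # hypothetical covariate
    "age": rng.normal(63, 7, n),               # hypothetical covariate
    "months_to_event": rng.uniform(1, 24, n),  # follow-up or time to recurrence
    "bcr": rng.integers(0, 2, n),              # biochemical recurrence indicator
})

# Traditional approach: Cox regression on the time-to-event data.
cph = CoxPHFitter()
cph.fit(df, duration_col="months_to_event", event_col="bcr")
print("Cox concordance index:", round(cph.concordance_index_, 3))

# Machine-learning approach: classify recurrence within 1 year.
X = df[["psa", "age"]]
y = ((df["bcr"] == 1) & (df["months_to_event"] <= 12)).astype(int)
for name, model in [
    ("KNN", KNeighborsClassifier()),
    ("LR", LogisticRegression(max_iter=1000)),
    ("RF", RandomForestClassifier(n_estimators=500, random_state=0)),
]:
    proba = cross_val_predict(model, X, y, cv=5, method="predict_proba")[:, 1]
    print(f"{name}: cross-validated AUC = {roc_auc_score(y, proba):.3f}")
```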

Results

K‐nearest neighbour, logistic regression and random forest classifier were used as machine‐learning models. Classic Cox regression analysis had an area under the curve (AUC) of 0.865 for the prediction of biochemical recurrence. All three of our machine‐learning models, K‐nearest neighbour (AUC 0.903), random forest tree (AUC 0.924) and logistic regression (AUC 0.940), outperformed the conventional statistical regression model. Accuracy prediction scores for K‐nearest neighbour, random forest tree and logistic regression were 0.976, 0.953 and 0.976, respectively.

Conclusions

Machine‐learning techniques can produce more accurate disease prediction than traditional statistical regression. These tools may prove clinically useful for the automated prediction of patients who develop early biochemical recurrence after robot‐assisted prostatectomy. For these patients, appropriate individualized treatment options can improve outcomes and quality of life.

Read more Articles of the week

© 2024 BJU International. All Rights Reserved.