
Article of the week: A machine learning‐assisted decision‐support model to better identify patients with PCa requiring an extended pelvic lymph node dissection

Every week, the Editor-in-Chief selects an Article of the Week from the current issue of BJUI. The abstract is reproduced below and you can click on the button to read the full article, which is freely available to all readers for at least 30 days from the time of this post.

In addition to the article itself, there is an editorial written by a prominent member of the urology community and a video prepared by the authors; we invite you to use the comment tools at the bottom of each post to join the conversation. 

If you only have time to read one article this week, it should be this one. Merry Christmas!

A machine learning‐assisted decision‐support model to better identify patients with prostate cancer requiring an extended pelvic lymph node dissection

Ying Hou*, Mei-Ling Bao†, Chen-Jiang Wu*, Jing Zhang*, Yu-Dong Zhang* and Hai-Bin Shi*

*Department of Radiology and †Department of Pathology, The First Affiliated Hospital with Nanjing Medical University, Nanjing, Jiangsu Province, China

Abstract

Objectives

To develop a machine learning (ML)‐assisted model to identify candidates for extended pelvic lymph node dissection (ePLND) in prostate cancer by integrating clinical, biopsy, and precisely defined magnetic resonance imaging (MRI) findings.

Patients and Methods

In all, 248 patients treated with radical prostatectomy and ePLND or PLND were included. ML‐assisted models were developed from 18 integrated features using logistic regression (LR), support vector machine (SVM), and random forests (RFs). The models were compared to the Memorial Sloan Kettering Cancer Center (MSKCC) nomogram using receiver operating characteristic‐derived area under the curve (AUC), calibration plots, and decision curve analysis (DCA).
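The methods describe fitting three classifier families to 18 integrated features and comparing them by cross-validated AUC. Purely as an illustration of that workflow, here is a minimal scikit-learn sketch; the placeholder data, the model settings, and the cross-validation scheme are assumptions for illustration and are not taken from the paper.

```python
# Minimal sketch (not the authors' code): fit the three model families named in
# the abstract on a placeholder 248 x 18 feature matrix with binary LNI labels,
# then compare out-of-fold AUCs. Data, settings and CV scheme are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold, cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(248, 18))      # placeholder for the 18 integrated features
y = rng.integers(0, 2, size=248)    # placeholder for pathologically confirmed LNI

models = {
    "LR": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    "SVM": make_pipeline(StandardScaler(), SVC(probability=True)),
    "RF": RandomForestClassifier(n_estimators=500, random_state=0),
}

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for name, model in models.items():
    # Out-of-fold predicted probabilities give an honest AUC estimate
    proba = cross_val_predict(model, X, y, cv=cv, method="predict_proba")[:, 1]
    print(f"{name}: AUC = {roc_auc_score(y, proba):.3f}")
```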

Results

A total of 59/248 (23.8%) lymph node invasions (LNIs) were identified at surgery. The ML‐based models, with (+) or without (−) MRI‐reported LNI, yielded similar AUCs (RFs+/RFs: 0.906/0.885; SVM+/SVM: 0.891/0.868; LR+/LR: 0.886/0.882), all higher than that of the MSKCC nomogram (0.816; P < 0.001). On calibration, the MSKCC nomogram tended to underestimate LNI risk across the entire range of predicted probabilities compared to the ML‐assisted models. The DCA showed that the ML‐assisted models significantly improved risk prediction at risk thresholds of ≤80% compared to the MSKCC nomogram. When the rate of missed LNIs was controlled at <3%, both RFs+ and RFs resulted in a higher positive predictive value (51.4%/49.6% vs 40.3%), similar negative predictive value (97.2%/97.8% vs 97.2%), and a higher proportion of ePLNDs spared (56.9%/54.4% vs 43.9%) compared to the MSKCC nomogram.
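As a back-of-the-envelope illustration of the trade-off reported above (sparing dissections while keeping missed LNIs below 3%), the sketch below shows one way such a probability cutoff could be chosen; the function name, the threshold search, and the metric definitions are assumptions for illustration, not the authors' published analysis.

```python
# Illustrative sketch (assumed logic, not the published analysis): given
# predicted LNI probabilities `proba` and true labels `y`, find the highest
# cutoff at which fewer than 3% of LNI-positive patients would be missed,
# then report PPV, NPV and the share of ePLNDs that could be spared.
import numpy as np

def summarize(proba, y, max_missed=0.03):
    for t in np.unique(proba)[::-1]:            # try cutoffs from high to low
        recommend = proba >= t                  # patients flagged for ePLND
        missed = np.sum(~recommend & (y == 1)) / max(np.sum(y == 1), 1)
        if missed < max_missed:                 # first (highest) cutoff that qualifies
            tp = np.sum(recommend & (y == 1))
            tn = np.sum(~recommend & (y == 0))
            ppv = tp / max(np.sum(recommend), 1)
            npv = tn / max(np.sum(~recommend), 1)
            spared = np.mean(~recommend)        # fraction of dissections avoided
            return t, ppv, npv, spared
    return None

# Example with the placeholder arrays from the previous sketch:
# cutoff, ppv, npv, spared = summarize(proba, y)
```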

Conclusions

Our ML‐based model, with a 5–15% cutoff, is superior to the MSKCC nomogram, sparing ≥50% of ePLNDs with a risk of missing <3% of LNIs.

 

Editorial: A better way to predict lymph node involvement using machine learning?

In their study in this issue of BJUI, Hou et al. [1] use machine‐learning algorithms to evaluate several preoperative clinical variables (highlighting specific MRI findings of locally advanced prostate cancer) to predict whether lymph node involvement (LNI) will be found at radical prostatectomy and would therefore justify an extended pelvic lymph node dissection (PLND). This is a well‐designed study with scientific rigour, providing evidence‐based justifications and definitions (i.e. of relevant MRI findings). The authors successfully illustrate a practical application of artificial intelligence (AI) methods to augment clinical decision‐making prior to and during surgery compared to today’s ‘gold standard’ (nomograms).

For many years, the Memorial Sloan Kettering Cancer Center (MSKCC) nomogram, among a number of predictive models, has been used to determine the probability of LNI. The output of these tools has assisted surgeons in deciding whether to perform a PLND and, if so, to what extent [2,3,4]. The authors hypothesize that, with additional MRI parameters not previously used, machine‐learning algorithms can better select which patients are more likely to have LNI and will therefore require extended PLND. In fact, the authors report that the MSKCC nomogram and conventional MRI reporting of LNI consistently underestimated LNI risk compared to the machine‐learning‐assisted models presented in their study. The outputs of the present models would allow a higher number of extended PLNDs to be spared compared to reliance on the MSKCC nomogram alone. It was appropriate to use several AI models in this study, as it is rarely apparent at the outset which predictive model will perform best on a given dataset. In fact, all the models used – logistic regression (LR), support vector machine (SVM) and random forest (RF) – while similar in performance to each other, outperformed the MSKCC nomogram (P < 0.001). Many adjustments were probably performed for each model to tailor it to the dataset and optimize prediction performance.
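For readers unfamiliar with what such ‘adjustments’ typically look like, the snippet below sketches hypothetical hyperparameter tuning for one of the model families (random forest) with cross-validated AUC as the objective; the grid, the scoring choice, and the procedure are assumptions, since the paper’s actual tuning details are not reproduced here.

```python
# Hypothetical illustration of per-model tuning of the kind the editorial
# alludes to; every parameter choice below is an assumption, as the actual
# grids and procedure used by Hou et al. are not reproduced here.
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, StratifiedKFold

search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={
        "n_estimators": [200, 500],
        "max_depth": [None, 4, 8],
        "min_samples_leaf": [1, 5],
    },
    scoring="roc_auc",
    cv=StratifiedKFold(n_splits=5, shuffle=True, random_state=0),
)
# search.fit(X, y)   # X, y as in the earlier sketch
# print(search.best_params_, search.best_score_)
```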

Criticisms of the study are that: (i) cases in which PLND was not performed were excluded, which could have introduced selection bias; (ii) the model is applicable only when the patient has undergone MRI; and (iii) the study was conducted at a single institution with a small sample (AI methods thrive on large and diverse datasets).

This study by Hou et al. is a great example of a machine‐learning application that may positively impact clinical practice. For many years we have relied on nomograms, but with the increasing use of MRI, additional factors should also be included, as Hou et al. have done. Machine learning is particularly adept at simultaneously examining numerous variables to determine which ones contribute most to a particular outcome. As BJUI has evaluated many manuscripts examining machine‐learning methods for clinical decision‐making in the past year, we have encouraged authors to use present‐day gold‐standard methods, such as the MSKCC nomogram, as controls [5]. As we embrace AI methods, we must keep one eye on the tried and tested conventional approaches; this ensures that we do not take backward steps but rather move forward responsibly. As with recent AI studies published in the BJUI, the sample size in this study was relatively small. External validation in a multicentre study on larger datasets is highly recommended.

by Andrew J. Hung

References

  1. Hou Y, Bao ML, Wu CJ, Zhang J, Zhang YD, Shi HB. A machine learning‐assisted decision support model with MRI can better spare the extended pelvic lymph node dissection at cost of less missing in prostate cancer. BJU Int 2019; 124: 972–83
  2. Briganti A, Larcher A, Abdollah F et al. Updated nomogram predicting lymph node invasion in patients with prostate cancer undergoing extended pelvic lymph node dissection: the essential importance of percentage of positive cores. Eur Urol 2012; 61: 480–7
  3. Memorial Sloan Kettering Cancer Center. Dynamic prostate cancer nomogram: coefficients. Accessed April 2018
  4. Tosoian JJ, Chappidi M, Feng Z et al. Prediction of pathological stage based on clinical stage, serum prostate-specific antigen, and biopsy Gleason score: Partin Tables in the contemporary era. BJU Int 2017; 119: 676–83
  5. Hung AJ. Can machine‐learning algorithms replace conventional statistics? BJU Int 2018; 123: 1

 

Video: Machine learning‐assisted decision‐support model to identify PCa patients requiring an extended PLND

