

Randomised Controlled Trials in Robotic Surgery

It has been nearly 15 years since one of the first ever randomised controlled trials (RCTs) in robotic surgery was conducted in 2002. The STAR-TRAK compared telerobotic percutaneous nephrolithotomy (PCNL) to standard PCNL and showed that the robot was slower but more accurate than the human hand [1].

In the 24 h since the much anticipated RCT of open vs robot-assisted radical prostatectomy was published in The Lancet [2], our BJUI blog from @declangmurphy was viewed >2500 times, receiving >40 comments, making it one of our most read and interactive blogs ever. It is a negative trial showing no differences in early functional outcomes between the two approaches.

And it is not the only negative trial of its kind as a number of others have matured and reported recently. The RCT of open vs robot-assisted radical cystectomy and extracorporeal urinary diversion showed no differences in the two arms [3], and likewise a comparison of the two approaches to cystectomy as a prelude to the RAZOR (randomised open vs robotic cystectomy) trial showed no differences in quality of life at 3-monthly time points up to a year [4]. The only RCT comparing open, laparoscopic and robotic cystectomy, the CORAL, took a long time to recruit and yet again showed no differences in 90-day complication rates between the three techniques [5].

In all likelihood, despite the level 1 evidence provided in The Lancet paper showing no superiority of the robotic over the open approach, the Brisbane study may not change the current dominance of robotic prostatectomy in those countries that can afford this technology. Why is this? Apart from the inherent limitations that the BJUI blog identifies, there are other factors to consider. In particular, as observed previously in a memorable article ‘Why don’t Mercedes Benz publish randomised trials?’ [6], there may be reasons why surgical technique is not always suited to the RCT format.

A few additional reflections are perhaps appropriate at this time:

  1. Despite the best statistical input, many of these and future studies are perhaps underpowered.
  2. Many have argued that the RCTs have shown robotics to be as good, although not better than open surgery, even in the hands of less experienced surgeons.
  3. Patient-reported quality of life should perhaps become the primary outcome measure because, in the end, that is what truly matters.
  4. Cost-effectiveness ratios should feature prominently, as otherwise there is much speculation by the lay press without any hard data.
  5. Industry has a role to play here in keeping costs manageable, so that these ratios can become more palatable to payers.
  6. Surgery is more of an art than a science. The best surgeons armed with the best technology that they are comfortable with will achieve the best outcomes for their patients.
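Point 1 can be made concrete with a standard sample-size calculation for comparing two proportions. The sketch below is purely illustrative — the 90% vs 85% outcome rates are hypothetical figures chosen for the example, not data from any of the trials discussed — and uses the usual normal-approximation formula:

```python
from math import ceil
from statistics import NormalDist

def n_per_arm(p1: float, p2: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate patients per arm to detect a difference between two
    proportions (normal approximation, two-sided test)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_b = NormalDist().inv_cdf(power)          # ~0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_a + z_b) ** 2 * variance / (p1 - p2) ** 2)

# Hypothetical example: detecting a 90% vs 85% rate of a good functional
# outcome needs roughly 680 patients per arm -- far more than the 163
# per arm randomised in each of the Lancet trial's two groups.
print(n_per_arm(0.90, 0.85))
```

With a plausible 5-percentage-point difference in a binary functional outcome, the required arm size dwarfs what most single-centre surgical RCTs can recruit, which is why "negative" results in such trials must be interpreted cautiously.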

While this debate will continue and influence national healthcare providers and decision makers, the message looks much clearer when it comes to training the next generation of robotic surgeons. A cognitive- and performance-based RCT using a device to simulate vesico-urethral anastomosis after robot-assisted radical prostatectomy (RARP) showed a clear advantage in favour of such structured training [7]. In this month’s issue of the BJUI, we present the first evidence of the predictive validity of robotic simulation, showing better clinical performance of RARP in patients [8]. This is a major step forward in patient safety and should reassure policy makers that investment in simulation of robotic technology, rather than traditional unstructured training, is the way forward.

Most of our patients are knowledgeable, extensively research their options on ‘Dr Google’ and decide what is good for them. It is for this reason that many did not agree to randomisation in other robotic vs open surgery RCTs, like LopeRA (RCT of laparoscopic, open and robot-assisted prostatectomy as treatment for organ-confined prostate cancer) and BOLERO (Bladder cancer: Open vs Laparoscopic or RObotic cystectomy). Many of them continue to choose robotic surgery without necessarily paying heed to the best scientific evidence. Perhaps what patients will now do is select an experienced surgeon whom they can trust to use the best technology to deliver the best clinical outcomes.

Prokar Dasgupta @prokarurol
Editor-in-Chief, BJUI 




2 Yaxley JW, Coughlin GD, Chambers SK et al. Robot-assisted laparoscopic prostatectomy versus open radical retropubic prostatectomy: early outcomes from a randomised controlled phase 3 study. Lancet 2016 [Epub ahead of print]. doi: 10.1016/S0140-6736(16)30592-X
3 Bochner BH, Sjoberg DD, Laudone VP, Memorial Sloan Kettering Cancer Center Bladder Cancer Surgical Trials Group. A randomized trial of robot-assisted laparoscopic radical cystectomy. N Engl J Med 2014; 371: 389–90
4 Messer JC, Punnen S, Fitzgerald J et al. Health-related quality of life from a
6 O’Brien T, Viney R, Doherty A, Thomas K. Why don’t Mercedes Benz publish randomised trials? BJU Int 2010; 105: 293–5
8 Aghazadeh MA, Mercado MA, Pan MM, Miles BJ, Goh AC. Performance of

It’s not about the machine, stupid

Robotic surgery trial exposes limitations of randomised study design


Here it is, the highly anticipated randomised controlled trial of open versus robotic radical prostatectomy published today in The Lancet. Congratulations to the team at Royal Brisbane Hospital for completing this landmark study.


The early headlines around the world include everything from the Australian Financial Review to The Telegraph in London.

As ever, there will be intense and polarising discussion around this. One might expect that a randomised controlled trial, a true rarity in surgical practice, might settle the debate here; however, it is already clear that there will be anything BUT agreement on the findings of this study. Why is this so? Well, let’s look first at what was reported today.


Study design and findings:

This is a prospective randomised trial of patients undergoing radical prostatectomy for localised prostate cancer. Patients were randomised to undergo either open radical prostatectomy (ORP, n=163) or robotic-assisted radical prostatectomy (RARP, n=163). All ORPs were done by one surgeon, Dr John Yaxley (JY), and all RARPs were done by Dr Geoff Coughlin (GC). The hypothesis was that patients undergoing RARP would have better functional outcomes at 12 weeks, as measured by validated patient-reported quality of life measures. Other endpoints included positive surgical margins and complications, as well as time to return to work.

So what did they find? In summary, the authors report no difference in urinary and sexual function at 12 weeks. There was also no statistically significant difference in positive surgical margins. RARP patients had a shorter hospital stay (1.5 vs 3.2 days, P < 0.0001), less blood loss (443 vs 1338 mL, P < 0.001) and less post-operative pain, yet these benefits of minimally invasive surgery did not translate into an earlier return to work. The average time to return to work in both arms was 6 weeks.

The authors therefore conclude by encouraging patients “to choose an experienced surgeon they trust and with whom they have a rapport, rather than choose a specific surgical approach”. Fair enough.

In summary therefore, this is a randomised controlled trial of ORP vs RARP showing no difference in the primary outcome. One might reasonably expect that we would start moth-balling these expensive machines and picking up our old open surgery instruments. But that won’t happen, and my prediction is that this study will be severely criticised for elements of its design that explain why it failed to meet its primary endpoint.


Reasons why this study failed:

1.      Was this a realistic hypothesis? No, it was not. For those of us who work full-time in prostate cancer, the notion that there would be a difference in sexual and urinary function at 12 weeks following ORP or RARP is fanciful. It is almost as if it was set up to fail. There was no pilot study data to encourage such a hypothesis, and it remains a mystery to me why the authors thought this study might ever meet this endpoint. I hate to say “I told you so”, but this hypothesis could never have been proved with this study design.

2.      There is a gulf in surgical experience between the two arms. The lack of equipoise between the intervention arms is startling, and of itself, fully explains the failure of this study to meet its endpoints. I should state here that both surgeons in this study, JY (“Yax”) and GC (“Cogs”), are good mates of mine, and I hold them in the highest respect for undertaking this study. However, as I have discussed with them in detail, the study design which they signed up to here does not control for the massive difference in radical prostatectomy experience that exists between them.  Let’s look at this in more detail:

  1. ORP arm: JY was more than 15 years post-Fellowship at the start of this study and had completed over 1500 ORPs before performing the first case in the trial.
  2. RARP arm: GC was just two years post-Fellowship and had completed only 200 RARPs at the start of the study.

The whole world knows that surgeon experience is the single most important determinant of outcomes following radical prostatectomy, and much data exists to support this fact. In the accompanying editorial, Lord Darzi reminds us that the learning curve for functional and oncological outcomes following RARP extends up to 700 cases. Yes, 700 cases of RARP! And GC had done 200 radical prostatectomies prior to operating on the first patient in this study, while his vastly more experienced colleague JY had done over 1500. The authors believe that they controlled for surgeon heterogeneity based on the entry numbers detailed above, and state that it is “unlikely that a learning curve contributed substantially to the results”. This is bunkum. It just doesn’t stack up, and none of us who perform this type of surgery would accept that there is not a clinically meaningful difference between a surgeon who has performed 200 radical prostatectomies and one who has performed 1500. Therein lies the fundamental weakness of this study, and the reason why it will be severely criticised. It would be the equivalent of comparing 66 Gy with 78 Gy of radiotherapy, or 160 mg of enzalutamide with 40 mg – the study design is simply not comparing like with like, and surgeon heterogeneity as a confounder is not accounted for.

3.      Trainee input is not controlled for – most surprisingly, the authors previously admitted that “various components of the operations are performed by trainee surgeons”. One would expect that, with such concerns about surgeon heterogeneity, there would have been tighter control of this aspect of the interventions. It would have been reasonable within an RCT to reduce heterogeneity as much as possible by having the senior surgeons perform all cases in full.

Having said all that, John and Geoff are to be congratulated for the outcomes they have delivered to their patients in both arms of this study. These are excellent, highly credible results, and represent, in my view, the best outcomes reported for patients undergoing RP in this country. We are all too familiar with completely unbelievable outcomes being reported for patients undergoing surgery/radiotherapy/HIFU etc. around the world, and we have a responsibility to make sure patients have realistic expectations. By reporting these credible outcomes today, John and Geoff have shown themselves to be at the top of the table.


“It’s about the surgeon, stupid”

To paraphrase that classic phrase of the Clinton Presidential campaign of 1992, this study clearly demonstrates that outcomes following radical prostatectomy are about the surgeon, and not about the robot. Yet one of the co-authors, a psychologist, comments that, “at 12 weeks, these two surgical approaches yielded similar outcomes for prostate cancer patients”. Herein lies one of the classic failings of this study design, and also a failure of the investigators to fully understand the issue of surgeon heterogeneity in this study. It is not about the surgical approach, it is about the surgeon experience.

If the authors had designed a study that adequately controlled for surgeon experience, then it may have been possible to assess the surgical approach with some equipoise. It is not impossible to do so, but it is certainly challenging. For example, a multi-centre study with multiple surgeons in each arm would have helped balance out the gulf in surgical experience in this two-surgeon study. Or, at the very least, the authors should have ensured that they were comparing apples with apples by having a surgeon with in excess of 1500 RARPs in that arm. Another approach would have been to recruit a surgeon with huge experience of both procedures (e.g. Dr Smith at Vanderbilt, who has performed >3000 RARPs and >3000 ORPs) and to randomise patients to be operated on only by that single surgeon. That would have truly allowed the magnitude of the surgical-approach effect to be measured, without the bias inherent in this study design.


Robotic surgery bridges the experience gap:

Having outlined these issues with surgeon heterogeneity and lack of equipoise, there is another angle which my colleague Dr Daniel Moon has identified in his comments in the Australian media today and which should be considered.

Although this is a negative study which failed to meet its primary endpoints, it does demonstrate that a much less experienced surgeon can actually deliver equivalent functional and oncological outcomes to a much more experienced surgeon, by adopting a robotic approach. Furthermore, his patients get the benefits of a minimally-invasive approach as detailed in the paper. This therefore demonstrates that patients can be spared the inferior outcomes that may be delivered by less experienced surgeons while on their learning curve, and the robotic approach may therefore reduce the learning curve effect.

On that note, a point to consider is what JY’s outcomes would have been in this study had he entered it with 13 years and 1300 cases less experience. Would the 200-case Yax have been able to match the 1500-case Yax? Surely not.

And finally, just as a footnote for readers around the world about what is actually happening on the ground following this study. During the course of this study, the ORP surgeon JY transitioned to RARP, and this is what he now offers almost exclusively to his patients. Why is that? It is because he delivers better outcomes by bringing a robotic approach to the vast surgical experience that he also brings to his practice, and which is of course the most important determinant of better outcomes.

Sadly, “Yax” and “Cogs”, the two surgeons who operated in this study, have been prevented from speaking to the media or from being quoted in or commenting on this blog, but we look forward to hearing from them when they present these data at the Asia-Pacific Prostate Cancer Conference in Melbourne in a few weeks.


Declan G Murphy
Associate Editor BJUI; Urologist & Director of Genitourinary Oncology, Peter MacCallum Cancer Centre, Melbourne, Australia

Twitter: @declangmurphy




SUSPEND Trial Poll Results


Give the pill, or not give the pill. SUSPEND tries to end the debate

Christopher Bayne – June 2015 #UROJC Summary

News of a landmark paper on medical expulsive therapy (MET) for ureteric colic swirled through the convention halls on the last day of the American Urological Association’s Annual Meeting in New Orleans, Louisiana. I watched the Twitter feeds evolve from my desk at home: the first tweets just mentioned the title, then the conclusion, followed by snippets about the abstract. As time passed and people had time to read the manuscript, discussion escalated. Without data to prove it, there seemed to be more Twitter chatter about the SUSPEND trial, even among conference attendees, than about the actual AUA sessions.

Robert Pickard and Samuel McClinton’s group utilized a “real-world” study design to publish what many urologists consider to be the “best data” on MET. The study (SUSPEND) randomized 1167 participants with a single 1–10 mm calculus in the proximal, mid, or distal ureter across 24 UK hospitals 1:1:1 to MET with daily tamsulosin 0.4 mg, nifedipine 30 mg, or placebo. The study’s primary outcome was the need for intervention at 4 weeks after randomization. Secondary outcomes assessed via follow-up surveys were analgesic use, pain, and time to stone passage. Though the outcomes were evaluated at 4 weeks after randomization, patients were followed out to 12 weeks.

Some of the study design minutiae are worth specific mention before discussing the results and #urojc chat:

  • Treatment allotment was robustly blinded. Participants were handed 28 days of unmarked over-encapsulated medication by sources uninvolved in the remaining portions of the study
  • Medication compliance was not verified
  • The study protocol didn’t mandate additional imaging or tests at any point
  • Participants weren’t asked to strain their urine
  • Secondary outcomes assessed by follow-up surveys were incomplete: 62% and 49% of participants completed the 4- and 12-week questionnaires, respectively

The groups were well balanced, and the results were null. A similar percentage of tamsulosin-, nifedipine-, and placebo-group patients did not require intervention (81%, 80%, and 80%, respectively). A similar percentage of tamsulosin-, nifedipine-, and placebo-group participants had interventions planned at 12 weeks (7%, 6%, and 8%). There were no differences in secondary outcomes, including stone passage. There was a trend toward significance for MET, specifically with tamsulosin, in women, in calculi >5 mm, and in calculi located in the lower ureter (Figure 2 of the paper).
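The scale of this null result can be checked with a quick Pearson chi-square test. The sketch below is a rough reconstruction, assuming ~389 participants per arm (1167 randomized 1:1:1) with counts back-calculated from the reported percentages; it is illustrative only, not the paper's actual analysis:

```python
# Pearson chi-square on a 2x3 table, stdlib only. Counts are approximate
# reconstructions from the reported percentages (81%, 80%, 80%
# intervention-free out of ~389 per group), not the paper's raw data.
no_intervention = [315, 311, 311]          # tamsulosin, nifedipine, placebo
intervention = [389 - n for n in no_intervention]

rows = [no_intervention, intervention]
total = sum(no_intervention) + sum(intervention)
row_totals = [sum(r) for r in rows]
col_totals = [rows[0][j] + rows[1][j] for j in range(3)]

# chi2 = sum over cells of (observed - expected)^2 / expected
chi2 = sum(
    (rows[i][j] - row_totals[i] * col_totals[j] / total) ** 2
    / (row_totals[i] * col_totals[j] / total)
    for i in range(2) for j in range(3)
)
# Critical value for df = (2-1)*(3-1) = 2 at alpha = 0.05 is 5.99;
# the statistic here lands far below it, i.e. the arms are indistinguishable.
print(round(chi2, 3))
```

Even with these reconstructed counts the statistic is two orders of magnitude below the significance threshold, which is why the authors could describe the treatment effect as "close to zero".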


The authors concluded that their results were iron-clad and need no replication:

“Our judgment is that the results of our trial provide conclusive evidence that the effect of both tamsulosin and nifedipine in increasing the likelihood of stone passage as measured by the need for intervention is close to zero. Our trial results suggest that these drugs, with a 30-day cost of about US$20 (£13; €18), should not be offered to patients with ureteric colic managed expectantly, giving providers of health care an opportunity to reallocate resources elsewhere. The precision of our result, ruling out any clinically meaningful benefit, suggests that further trials involving these agents for increasing spontaneous stone passage rates will be futile. Additionally, subgroup analyses did not suggest any patient or stone characteristics predictive of benefit from MET.”

Much of the early discussion focused on the trend toward benefit for MET in cases of calculi >5 mm in the distal ureter.

Journal Club participants raised eyebrows at the use of nifedipine and placebo medication in the trial.

A few hours in, discussion shifted toward the study design, particularly the primary endpoint of absence of intervention at 4 weeks rather than stone passage or radiographic endpoints. The overall consensus was that this study was a microcosm of “real-world” patient care with direct implications for emergency physicians, primary physicians, and urologists.

The $20 question (the cost of 4 weeks of tamsulosin according to SUSPEND) is whether the trial will change urologists’ practice patterns. Perhaps not surprisingly, opinions differed between American and European urologists.

We owe SUSPEND authors Robert Pickard and Sam McClinton special thanks for their availability during the discussion. In the end, the #urojc banter for June 2015 was the largest and most-interactive monthly installment of International Urology Journal Club to date.

Christopher Bayne is a PGY-4 urology resident at The George Washington University Hospital in Washington, DC and tweets @chrbayne.


Learning from The Lancet

The Lancet, established in 1823, is one of the most respected medical journals in the world. It has an impact factor of 39, and therefore attracts and publishes only the very best papers. Like most journals that have evolved with modern times, it has an active web and social media presence, particularly based around Twitter.

On a Monday morning, last autumn, the Editor of the BJUI had a meeting with the Web Editor of The Lancet at Guy’s Hospital. There was a mutual interest in surgical technology, particularly as Naomi Lee had been a urology trainee before joining The Lancet full-time. The topic of discussion was robot-assisted radical cystectomy with the emergence of randomised trials showing little difference between open and robotic surgery, despite the minimally invasive nature of the latter [1, 2]. Thereafter, The Lancet kindly invited the BJUI team to visit its offices in London. The location is rather bohemian with a mural of John Lennon on the wall across the street! Here is a summary of what we learnt that day.


1. Democracy – what gets published in The Lancet after peer review is decided at a team meeting, where editors of the main journal and its sister publications gather around a table to discuss individual articles. Most work full-time for The Lancet, unlike surgical journals that are led by working clinicians. No wonder that >80% of papers are immediately rejected and the final acceptance rate is ≈6%. Interesting case reports are still published and often highly cited because of the wider readership.

2. Quality has no boundaries – it does not matter where the article comes from as long as it has an important message. The BJUI recently published an excellent paper on circumcision in HIV-positive men from Africa [3]; the original randomised controlled trial had appeared some 7 years earlier in The Lancet [4].

3. Statisticians – the good ones are a rare breed and sometimes rather difficult to find. While we have two statistical editors at the BJUI, it is sometimes difficult to approach the most qualified reviewer on a particular subject. The Lancet occasionally faces similar difficulties, which it almost always overcomes through its team approach.

4. Meta-analysis and systematic reviews – they form a significant number of submissions to both journals. It is not always easy to judge their quality although a key starting point is to identify whether the topic is one of contemporary interest where there are significant existing data that can be analysed. Rare subjects usually fail to make the cut.

5. Paper not dead yet – this is certainly the case at The Lancet office, where its editors gather together with paper folders and hand-written notes. We are almost fully paperless at the BJUI offices, and are hoping to be completely electronic in the future. A recent live vote of our readership during the USANZ Annual Scientific Meeting in Adelaide, Australia, indicated that the majority would like us to go electronic in about 2–3 years’ time; however, ≈30% of our institutional subscribers still prefer the paper version and are reluctant to make the switch.

The BJUI and The Lancet are coming together to host a joint Social Media session at BAUS 2015, which will provide more opportunity to learn from one of the best journals ever. We hope to see many of you there.




2 Lee N. Robotic surgery: where are we now? Lancet 2014; 384: 1417



4 Gray RH, Kigozi G, Serwadda D et al. Male circumcision for HIV prevention in men in Rakai, Uganda: a randomised trial. Lancet 2007; 369: 657–66


Prokar Dasgupta @prokarurol
Editor-in-Chief, BJUI 


Scott Millar
Managing Editor, BJUI 


Naomi Lee
Web Editor, The Lancet


© 2024 BJU International. All Rights Reserved.