Saturday, June 9, 2018

Richard Smith: The case for medical nihilism and “gentle medicine”

https://blogs.bmj.com/bmj/2018/06/04/richard-smith-the-case-for-medical-nihilism-and-gentle-medicine/

Most practising doctors are instinctive medical nihilists, argues Richard Smith
Jacob Stegenga, a philosopher of science at Cambridge, has written a closely argued and empirically supported book in which he makes the case for medical nihilism, by which he means that our confidence in the effectiveness of medical interventions should be low. My belief is that many doctors, particularly senior ones, are instinctively nihilists, but most patients are not.
Medical Nihilism is one of the latest in a long history of arguments doubting the effectiveness of medicine. Stegenga briefly summarises that history, starting with Heraclitus (the way that doctors torture their patients is “just as bad as the diseases they claim to cure”), passing through Oliver Wendell Holmes (“If the whole materia medica, as now used, could be sunk to the bottom of the sea it would be all the better for mankind—and all the worse for the fishes”) to Ivan Illich (“modern medicine is a negation of health . . . it makes more people sick than it heals”). In the same week that Stegenga’s book was launched, the doctor and journalist James Le Fanu launched Too Many Pills, a book in which he argues that doctors are prescribing too many pills and endangering health (more of that in another blog), and The BMJ made the case for overcoming the overuse of medicine. Something is up.
Stegenga is not opposed to the whole enterprise of modern medicine, as Illich was, but he does, like many others, think that it needs to change direction. Nor is he against evidence based medicine, but much of the book is a detailed critique of what he calls “the malleability” of medical research, meaning its background theories, priorities, funding, methods, biases, dissemination, and implementation. Much of this will be familiar to readers of The BMJ, and again most doctors are more sceptical about medical research than the public, although the profusion of stories in the media of “X, Y, and Z does/does not/might cause A, B, and C” means that the public is becoming more sceptical about science, allowing some even to conclude that climate change is not happening.
But medical nihilism means more than “a tough scepticism espousing low confidence about this or that medical intervention.” Rather medical nihilism is a “more general stance.” We should be sceptical about the evaluation of particular interventions, but beyond that we should consider “the frequency of failed medical interventions, the extent of misleading and discordant evidence in medical research, the sketchy theoretical framework on which many medical interventions are based, and the malleability of even the very best empirical methods.”
The nihilism that Stegenga advocates fits with the emerging view in the philosophy of science that “facts and values are inextricably linked.” I have long thought that we have deceived ourselves by imagining that when we are being scientists we become objective data processors uncorrupted by the human failings that we all know and share. That self-deception is why we have been so slow and poor at responding to research misconduct.
Philosophers, including Karl Popper and Thomas Kuhn, have long tried, writes Stegenga, to demarcate good from bad science but “have now given up on the attempt to develop general, context-free principles of demarcation.” That’s why he has written a book that is highly “contextualised,” examining in detail the fragile base of medical knowledge.
As you would expect of a philosopher, Stegenga builds his case around three key arguments. The first is that the medical model or theory of targeting diseases with magic bullets is unhelpful. It emerged with the appearance of treatments like antibiotics and insulin, which, in a world where there was no effective treatment for infections and people with type I diabetes would die, seemed magical. But even with those treatments we soon recognised that bacteria could develop resistance to the drugs, and that people with diabetes, though kept alive, would develop complications.
The magic bullet theory supposes that an effective treatment (the magic bullet) moves a patient from disease to health. But—as Stegenga makes clear—health, disease, and effectiveness are all disputed concepts. The theory also does not extend from antibiotics to most conditions where the cause is much less clear and much more complex. Consider the cleverly misnamed antidepressants: there is no neat target; we don’t know the cause; and we can’t agree on who is depressed or on how effective the drugs are. Much more of medicine is like depression than type I diabetes, particularly in a world where most patients have multiple, long term conditions.
Despite its obvious limitations, the magic bullet model seems alive and well in the age of genetics and personalised medicine. Pharmaceutical companies are merchants of magic bullets and keen to keep the concept alive. It’s also very attractive to the public, which can fantasise that a pill will fix their problems. Stegenga advocates placing less emphasis on magic bullets and more on developing other kinds of interventions for improving health.
Stegenga’s second and third arguments will be familiar to readers of The BMJ: that contemporary research methods are “malleable” and that medicine consistently underestimates harms (partly through having a narrow concept of what constitutes a harm) and fails to deal adequately with bias and fraud.
These arguments are tied together into a “master argument,” which uses Bayes’ theorem and says that we should start with a prior belief in the low effectiveness of medical interventions—to the point that “even when presented with evidence for a hypothesis regarding the effectiveness of a medical intervention, we ought to have low confidence in that hypothesis” (Stegenga’s italics). I think that this is what many doctors do, particularly when confronted with studies funded by the drug industry. This approach makes me think of one of my favourite sayings: “Good surgeons know how to operate, better surgeons when to operate, and the best surgeons when not to operate.” This applies, I think, across all of medicine, and the best doctors are thus what Stegenga calls “medical nihilists.”
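The shape of the master argument can be sketched numerically. The figures below are illustrative assumptions, not numbers from the book: they simply show how a low prior, combined with research methods that produce positive results fairly often even for ineffective drugs, leaves the posterior low after a positive trial.

```python
# A minimal sketch of the Bayesian "master argument".
# All numbers are illustrative assumptions, not figures from Stegenga's book.

def posterior(prior, sensitivity, false_positive_rate):
    """P(drug effective | positive trial) via Bayes' theorem."""
    evidence = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / evidence

# Start with low confidence that a new drug is effective (prior = 0.1),
# and suppose a positive trial is fairly likely even for ineffective
# drugs (false positive rate = 0.3, reflecting "malleable" methods).
p = posterior(prior=0.1, sensitivity=0.8, false_positive_rate=0.3)
print(round(p, 2))  # 0.23: confidence remains low even after a positive trial
```

Only if trials rarely produced false positives would a single positive result justify high confidence; the malleability argument is precisely that they do not.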
Throughout the book, Stegenga provides empirical evidence to support his particular arguments, but he also has three general pieces of evidence to support his argument. Firstly, many medical interventions have been rejected because they don’t work. I remember a letter from a retired surgeon to The BMJ pointing out that most of the operations he learnt as a young surgeon are no longer used. Secondly, the best evidence shows that many medical interventions are barely effective, if effective at all. Thirdly, there is conflicting evidence on the benefits of many medical interventions.
The word “nihilism” may be unfortunate. Stegenga and I spoke at a meeting chaired by the president of the Academy of Medical Sciences, where the president objected to the word nihilism, thinking that Stegenga was suggesting that medicine has achieved little and is more of a bad than a good thing. That is not Stegenga’s argument, and he ends his book with some positive ideas on how medical nihilism might lead to better medicine.
He advocates “gentle medicine,” a term borrowed from the 19th century French la médecine douce. Gentle medicine “encourages a moderate form of therapeutic conservatism.” Many doctors, particularly general practitioners, already practise medicine gently, and gentle medicine clearly overlaps with realistic medicine, prudent medicine, and slow medicine, all of which have their proponents.
The priorities of medical research should be rethought and changed. “The focus on magic bullets (‘genes, proteins, and molecular pathways’) has on the whole,” Stegenga writes, “been disappointing.” Yet this continues to be the main thrust of medical research, driven more by economic than human thinking and by the needs of pharmaceutical companies. Stegenga joins many others in urging more research into non-drug interventions, community action, and the human aspects of suffering and disease, “the art of medicine.”
Regulation also needs rethinking. At the moment there is pressure to make it easier for new drugs to enter the market—again perhaps more for the benefit of pharmaceutical companies than patients. But regulators face “inductive risk” in that they have to infer the benefit-risk profile from limited evidence, and the earlier they have to make a decision the greater the risk. But as few new drugs have major new benefits (and even then might still have undiscovered serious adverse effects), medical nihilism argues for tightening rather than loosening regulation.
Most practising doctors are, I believe, instinctive medical nihilists even if they would never use that term: they know the limitations of magic bullets, are highly sceptical of claims for new drugs, and recognise the importance of the human as opposed to the technical. Some patients and politicians are also medical nihilists, but most are not. They are the group who would benefit the most from this important book but also, sadly, are perhaps the least likely to read it.
Richard Smith was the editor of The BMJ until 2004.

Thursday, June 7, 2018

Artificial intelligence will improve medical treatments

From A&E to AI

It will not imminently put medical experts out of work

FOUR years ago a woman in her early 30s was hit by a car in London. She needed emergency surgery to reduce the pressure on her brain. Her surgeon, Chris Mansi, remembers the operation going well. But she died, and Mr Mansi wanted to know why. He discovered that the problem had been a four-hour delay in getting her from the accident and emergency unit of the hospital where she was first brought, to the operating theatre in his own hospital. That, in turn, was the result of a delay in identifying, from medical scans of her head, that she had a large blood clot in her brain and was in need of immediate treatment. It is to try to avoid repetitions of this sort of delay that Mr Mansi has helped set up a firm called Viz.ai. The firm’s purpose is to use machine learning, a form of artificial intelligence (AI), to tell those patients who need urgent attention from those who may safely wait, by analysing scans of their brains made on admission.
That idea is one among myriad projects now under way with the aim of using machine learning to transform how doctors deal with patients. Though diverse in detail, these projects have a common aim. This is to get the right patient to the right doctor at the right time.

In Viz.ai’s case that is now happening. In February the firm received approval from regulators in the United States to sell its software for the detection, from brain scans, of strokes caused by a blockage in a large blood vessel. The technology is being introduced into hospitals in America’s “stroke belt”—the south-eastern part, in which strokes are unusually common. Erlanger Health System, in Tennessee, will turn on its Viz.ai system next week.
The potential benefits are great. As Tom Devlin, a stroke neurologist at Erlanger, observes, “We know we lose 2m brain cells every minute the clot is there.” Yet the two therapies that can transform outcomes—clot-busting drugs and an operation called a thrombectomy—are rarely used because, by the time a stroke is diagnosed and a surgical team assembled, too much of a patient’s brain has died. Viz.ai’s technology should improve outcomes by identifying urgent cases, alerting on-call specialists and sending them the scans directly.
The AIs have it
Another area ripe for AI’s assistance is oncology. In February 2017 Andre Esteva of Stanford University and his colleagues used a set of almost 130,000 images to train some artificial-intelligence software to classify skin lesions. So trained, and tested against the opinions of 21 qualified dermatologists, the software could identify both the most common type of skin cancer (keratinocyte carcinoma), and the deadliest type (malignant melanoma), as successfully as the professionals. That was impressive. But now, as described last month in a paper in the Annals of Oncology, there is an AI skin-cancer-detection system that can do better than most dermatologists. Holger Haenssle of the University of Heidelberg, in Germany, pitted an AI system against 58 dermatologists. The humans were able to identify 86.6% of skin cancers. The computer found 95%. It also misdiagnosed fewer benign moles as malignancies.
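The figures in the Heidelberg study are, in effect, sensitivity (the share of true cancers flagged) and specificity (the share of benign moles correctly cleared). A minimal sketch of the two measures, using invented confusion-matrix counts for illustration:

```python
# Sensitivity and specificity from a confusion matrix -- the two numbers
# behind claims like "found 95% of cancers" and "misdiagnosed fewer
# benign moles as malignancies". Counts below are invented for illustration.

def sensitivity(true_pos, false_neg):
    """Share of actual cancers the classifier flags as malignant."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg, false_pos):
    """Share of benign lesions the classifier correctly clears."""
    return true_neg / (true_neg + false_pos)

# Hypothetical test set: 100 melanomas, 100 benign moles.
print(sensitivity(true_pos=95, false_neg=5))   # 0.95, cf. the dermatologists' 0.866
print(specificity(true_neg=80, false_pos=20))  # 0.8
```

The trade-off matters clinically: a system can boost its cancer-detection rate simply by calling more lesions malignant, which is why the Heidelberg result (higher sensitivity *and* fewer benign moles misdiagnosed) is notable.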
There has been progress in the detection of breast cancer, too. Last month Kheiron Medical Technologies, a firm in London, received news that a study it had commissioned had concluded that its software exceeded the officially required performance standard for radiologists screening for the disease. The firm says it will submit this study for publication when it has received European approval to use the AI—which it expects to happen soon.
This development looks important. Breast screening has saved many lives, but it leaves much to be desired. Overdiagnosis and overtreatment are common. Conversely, tumours are sometimes missed. In many countries such problems have led to scans being checked routinely by a second radiologist, which improves accuracy but adds to workloads. At a minimum Kheiron’s system looks useful for a second opinion. As it improves, it may be able to grade women according to their risks of breast cancer and decide the best time for their next mammogram.
Efforts to use AI to improve diagnosis are under way in other parts of medicine, too. In eye disease, DeepMind, a London-based subsidiary of Alphabet, Google’s parent company, has an AI that screens retinal scans for conditions such as glaucoma, diabetic retinopathy and age-related macular degeneration. The firm is also working on mammography.
Heart disease is yet another field of interest. Researchers at Oxford University have been developing AIs intended to interpret echocardiograms, which are ultrasonic scans of the heart. Cardiologists looking at these scans are searching for signs of heart disease, but can miss them 20% of the time. That means some patients are sent home and may then go on to have a heart attack. The AI, however, can detect changes invisible to the eye and improve the accuracy of diagnosis. Ultromics, a firm in Oxford, is trying to commercialise the technology, which could be rolled out in Britain later this year.
There are also efforts to detect cardiac arrhythmias, particularly atrial fibrillation, which increase the risk of heart failure and strokes. Researchers at Stanford University, led by Andrew Ng, have shown that AI software can identify arrhythmias from an electrocardiogram (ECG) better than an expert. The group has joined forces with a firm that makes portable ECG devices and is helping Apple with a study looking at whether arrhythmias can be detected in the heart-rate data picked up by its smart watches. Meanwhile, in Paris, a firm called Cardiologs is also trying to design an AI intended to read ECGs.
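What an ECG classifier is looking for can be illustrated with a toy heuristic. This is not the Stanford group's deep-learning model: it is a deliberately simple stand-in showing one signal such systems exploit, namely that atrial fibrillation produces irregular gaps (RR intervals) between heartbeats.

```python
# Toy heuristic for rhythm irregularity -- not the Stanford deep-learning
# model, just an illustration of the RR-interval signal behind it.

def rr_variability(beat_times):
    """Coefficient of variation of the intervals between heartbeats."""
    intervals = [b - a for a, b in zip(beat_times, beat_times[1:])]
    mean = sum(intervals) / len(intervals)
    var = sum((x - mean) ** 2 for x in intervals) / len(intervals)
    return (var ** 0.5) / mean

def looks_irregular(beat_times, threshold=0.15):
    """Flag a rhythm whose beat spacing varies more than the threshold."""
    return rr_variability(beat_times) > threshold

regular = [0.0, 0.8, 1.6, 2.4, 3.2, 4.0]    # steady beats, 75 bpm
irregular = [0.0, 0.6, 1.7, 2.1, 3.4, 3.9]  # erratic spacing
print(looks_irregular(regular), looks_irregular(irregular))  # False True
```

A learned model goes far beyond beat spacing (waveform shape, context across long recordings), which is how it distinguishes many arrhythmia types rather than merely flagging irregularity.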
Seeing ahead
Eric Topol, a cardiologist and digital-medicine researcher at the Scripps Research Institute, in San Diego, says that doctors and algorithms are comparable in accuracy in some areas, but computers have the advantage of speed. This combination of traits, he reckons, will lead to higher accuracy and productivity in health care.
Artificial intelligence might also make medicine more specific, by being able to draw distinctions that elude human observers. It may be able to grade cancers or instances of cardiac disease according to their risks—thus, for example, distinguishing those prostate cancers that will kill quickly, and therefore need treatment, from those that will not, and can probably be left untreated.
What medical AI will not do—at least not for a long time—is make human experts redundant in the fields it invades. Machine-learning systems work on a narrow range of tasks and will need close supervision for years to come. They are “black boxes”, in that doctors do not know exactly how they reach their decisions. And they are inclined to become biased if insufficient care is paid to what they are learning from. They will, though, take much of the drudgery and error out of diagnosis. And they will also help make sure that patients, whether being screened for cancer or taken from the scene of a car accident, are treated in time to be saved.

Wednesday, June 6, 2018

Europe’s top science funder shows high-risk research pays off

A popular and unusual self-review carried out by Europe’s most prestigious science funder is back. The annual assessment, now in its third year, found that nearly one in five projects supported by the European Research Council (ERC) led to a scientific breakthrough.
The independent review, undertaken in 2017, assessed 223 completed ERC projects that had ended by mid-2015. It deemed that 79% of them achieved a major scientific advance, and 19% of all assessed projects were considered fundamental breakthroughs. That proportion rose to 27% for ERC Advanced Grants, which are awarded to experienced researchers. Only 1% of the total were judged to have made no appreciable scientific contribution. The review was published on 31 May.
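Reading the breakthrough figure as a share of all assessed projects (the reading the "one in five" lead and the year-on-year comparison suggest), the percentages translate into rough project counts as follows:

```python
# Rough project counts implied by the review's percentages.
# 223 completed projects were assessed; rounding is approximate.
total = 223
major_advance = round(0.79 * total)    # projects with a major scientific advance
breakthroughs = round(0.19 * total)    # projects judged fundamental breakthroughs
no_contribution = round(0.01 * total)  # projects with no appreciable contribution
print(major_advance, breakthroughs, no_contribution)  # 176 42 2
```

So even on this funder's own generous criteria, only a few dozen of the 223 projects were breakthroughs, which is the baseline against which the risk analysis below should be read.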
Established in 2007 to improve the quality of Europe’s science, the ERC is the European Union’s premier funder of blue-skies research and is part of Horizon 2020, the EU’s main science-funding programme. It awards generous, multiyear grants in any discipline and applications are judged solely on their quality. The council has undertaken annual reviews of the projects it funds since it ran a popular pilot assessment in 2015. The strategy is pioneering among European funders, most of which evaluate success on a project-by-project basis, and it was praised for taking a qualitative approach rather than relying, for instance, on bibliometrics.
[Chart: ‘Europe’s top research grants’. Source: ERC]

Risky business
The latest assessment was carried out by senior scientists convened by the ERC’s Scientific Council. Each panel member was asked a series of questions about a randomly selected set of projects. This year, evaluators were also asked to focus on a project’s risk to a greater extent than in previous years. (A spokesperson for the ERC said that the council is still refining the assessment’s methodology.)
The 19% figure of scientific breakthroughs in the latest assessment is lower than in previous years; 21% and 25% of ERC projects assessed in the 2015 and 2016 exercises, respectively, were classed as such (see ‘Europe’s top research grants’).
The reviewers deemed that most projects that made breakthroughs were high risk and high reward, and only 10% of projects were considered low risk. “The ERC has really pushed the expectation of raising the boundaries of science and taking more risks,” says Jan Palmowski, secretary-general of the Guild of European Research-Intensive Universities, a lobby group in Brussels.
The assessment shows that risk-friendly funding is crucial for retaining talent in Europe, where research funders are generally risk-averse, says Martin Vechev, a computer scientist at the Swiss Federal Institute of Technology in Zurich who received an ERC grant aimed at early-career researchers in 2015, after spending time at computing firm IBM in the United States. The grant encouraged him to stay in Europe, and he says that the funding helped his team to develop a new sub-field of artificial intelligence that focuses on machines that automatically write computer code.
The reviewers also deemed that more than 50% of projects had already made an economic and societal impact. In a speech earlier this year, ERC president Jean-Pierre Bourguignon said that council-funded research generated 29% of the patents arising from EU funding in 2007–13, despite receiving less than 17% of the money.
Funding incentive
The review comes at a crucial time for EU research funding, say observers. This week, the European Commission is expected to release a detailed budget plan for the next instalment of its main funding programme, which will include the ERC’s next funding pot. The programme, called Horizon Europe, will run from 2021 to 2027 and has a proposed budget of nearly €100 billion (US$117 billion).
The latest review provides ammunition in the fight to raise the ERC’s budget, says Palmowski. His organization advocates for a doubling of the annual budget, which in 2017 was €1.8 billion (it started with €300 million in 2007).
The findings should encourage policymakers around Europe to focus their national research funding on excellence, even if economic growth is their priority, says the League of European Research Universities (LERU). “The ERC clearly shows that focusing on excellence alone at application stage also leads to demonstrable impacts,” says Laura Keustermans, senior policy officer at the LERU in Leuven, Belgium. Since its creation, ERC grantees have won six Nobel prizes and four Fields Medals, considered the most prestigious prize in mathematics.
Nature 558, 16-17 (2018)
doi: 10.1038/d41586-018-05325-4