Thursday, 9 July 2015

Why We Need Evidence in Medicine


I’m a self-confessed geek. I like learning. I like working things out. I like science. I did a science degree before going to medical school, and I spend my spare time on such fun activities as postgraduate certificates and systematic reviews. This doesn’t mean I think of medicine as a science – there is definitely an art to a lot of what I do – but I do think it’s important for doctors to have a good understanding of scientific methodology, so that they can interpret the vast amount of evidence they are presented with when making clinical decisions.
This is the first in a series of posts about evidence-based medicine. It’s deliberately simple so that those outside the medical field can follow it, but I hope it doesn’t come across as patronising. The aim of this post is to explain why we need evidence in medicine. In later posts, I’ll go on to talk about what constitutes evidence, the problems we face in evidence-based medicine and the ways in which we can use it to benefit patients.
I’ve heard numerous colleagues tell me that they “aren’t convinced by this evidence-based medicine lark”. Their arguments usually centre on the fact that they have seen interventions work before and are therefore happy to use them again. On the surface, this is a reasonable argument. Someone is poorly. You give them some tablets. They get better. Next time you see someone who is poorly in the same way, you give them the same tablets. They get better too. The problem is: how do you know they wouldn’t have got better anyway?
Let’s take a common example – a cold. If you have a cold and take a course of antibiotics, you’ll probably feel better within two to three days. However, if you don’t take the antibiotics, you’ll also feel better in the same amount of time – a cold is a viral infection, so the antibiotics were never doing anything. That’s just how long it takes a cold to get better, but if you gave every patient with a cold some antibiotics, you’d be forgiven for thinking you’d found an excellent treatment. Observations like this are very useful in medicine – they’re the basis of a lot of important discoveries – but they aren’t the be-all and end-all.
So, how do you work out whether your treatment actually makes a difference? This is where trials come in. In very simple terms, if you want to know whether or not your treatment works, you randomly allocate the people you see with a particular condition so that half receive the treatment and half don’t. If the half who get the treatment recover more quickly than those who don’t, that’s a good indicator that the treatment probably does make a difference. Of course it’s more complicated than that, but that’s the basic premise.
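To make that concrete, here is a minimal sketch of the idea in Python. The numbers are entirely made up: I’m assuming a cold resolves on its own in roughly seven to ten days, and I’m deliberately simulating a treatment that has no real effect, to show that a randomised comparison (correctly) finds no difference between the two arms.

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

def recovery_days(treated: bool) -> int:
    """Days until a simulated patient recovers.

    Hypothetical numbers: every cold resolves in 7-10 days regardless,
    so the 'treatment' changes nothing about the outcome.
    """
    return random.randint(7, 10)

n_patients = 1000

# Randomly allocate patients: half get the treatment, half don't.
treated = [recovery_days(True) for _ in range(n_patients // 2)]
control = [recovery_days(False) for _ in range(n_patients // 2)]

avg_treated = sum(treated) / len(treated)
avg_control = sum(control) / len(control)

print(f"Average recovery, treated arm: {avg_treated:.1f} days")
print(f"Average recovery, control arm: {avg_control:.1f} days")
```

Both averages come out almost identical, which is exactly the point: without the untreated comparison group, every treated patient recovers and the useless tablets look like a miracle cure.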
You might ask why any of this matters at this stage. After all, in the examples I’ve been giving, the patients all get better. There are two issues to address here. The first is side effects. No medication is risk-free, so before choosing to give one to your patient, you want to be confident that the benefits of using it outweigh the risks. If you’d stopped at the first, observational stage, you wouldn’t know that your treatment offered no benefit over not treating, so you’d be subjecting all your patients to the risks of a treatment that might not actually make them any better. Remember: “first, do no harm”.
The second issue is multiple treatment options. What happens when two people each think their treatment works? We owe it to our patients to give them the best possible treatment, so we need trials and evidence to work out which option is “best”. That judgement needs to take multiple factors into account, such as which treatment makes more people better, which gets people better more quickly and which has the fewest side effects.
This has been a very brief run-down of why we need evidence. Without proper trials, we are left guessing whether or not things work and which things work best – and who wants medicine based on guesswork?
