
Monday, 13 July 2015

Potential Pitfalls in Evidence Based Medicine

In my last post, I talked a bit about why we need to use evidence in medicine. However, much as I support using evidence wherever possible, I can also see that there are a number of things that can go wrong when using an EBM approach. Most of these are not due to EBM per se, but to issues with its implementation. Nonetheless, I think they're worth discussing.

One of the biggest issues with evidence is knowing how to apply it. A study may say something that sounds potentially interesting, but it's important to work out whether the result actually applies to the patient sat in front of you before changing your practice. Was the study you're reading carried out exclusively in 50-year-old men with high blood pressure but no other co-morbidity? That doesn't mean that the 65-year-old diabetic woman in front of you won't benefit from the intervention studied, but it does mean that the evidence is less applicable to her and you should think carefully about applying it to her case. Lots of studies look at extremely specific groups. This is to reduce the likelihood of "confounding variables" - things other than the intervention which may result in a difference in outcome between study groups. However, the flip side is that the study result may not apply to those who differ from the specific group looked at in the study. It's therefore well worth having a good look at the inclusion criteria for participants in trials and bearing in mind that the results might not automatically apply to all of the patients you see.

A related issue arises when we look at guidelines. Clinical guidelines are available for many, many conditions now, and provide advice on interventions, investigations, referrals, etc. In the UK, most of these are issued by the National Institute for Health and Care Excellence (NICE) and, in Scotland, the Scottish Intercollegiate Guidelines Network (SIGN). Guidelines are usually devised by a group of professionals appraising the available evidence - basically, they've done the hard work for you and read through all the evidence to determine what the best thing to do is in a number of situations. They will usually reference the evidence they used, should you wish to read it for yourself, and also tell you how strong the evidence is behind each recommendation. However, they are not hard and fast rules: they don't replace clinical decision making and they certainly don't cover every eventuality. Use them, just don't do so without thinking.

An issue it's also worth talking about is the difference between clinically significant and statistically significant. Statistical significance in most medical (and other) science is usually taken to mean p < 0.05. This means that, if there were really no difference between the groups, a result at least as large as the one observed would turn up less than 1 time in 20 by chance alone; in other words, it's likely that any difference in outcome between groups was down to differences in intervention rather than just being coincidental. Statistical significance is important because it's how we know that our interventions have actually done something. However, it says nothing about whether the difference in outcome will make any kind of difference to a patient's health, well-being or long-term risks. This is another important thing to bear in mind before advising or prescribing an intervention based on evidence: will the outcome matter to my patient? An example where this becomes important is when thinking about statins, a group of drugs which lower cholesterol. There is good evidence that (a particular group of) patients who take statins are significantly less likely to suffer a stroke or heart attack within 20 years than those who don't. This sounds great, but if you have an octogenarian sat in front of you, does this really matter? They are unlikely to live another 20 years, so is it worth adding to their drug burden, with all the risks of side effects and drug interactions this brings? I'm not saying don't, just that you should be realistic about what the benefits of this will be to your patient. Maybe discuss the risks and benefits with them and see what they think.
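To see how a statistically significant result can be clinically meaningless, here's a toy simulation (all the numbers are made up for illustration): a blood pressure "treatment" that lowers systolic pressure by a mere 0.5 mmHg - a difference no patient would ever notice - still produces a tiny p-value once the trial is big enough.

```python
import math
import random
import statistics

random.seed(42)

# Hypothetical illustration: systolic blood pressure (mmHg) in two very
# large trial arms. The true difference is only 0.5 mmHg - clinically
# meaningless - yet with enough participants it becomes "statistically
# significant".
n = 20_000
control = [random.gauss(140.0, 10.0) for _ in range(n)]
treated = [random.gauss(139.5, 10.0) for _ in range(n)]

diff = statistics.mean(control) - statistics.mean(treated)

# Large-sample two-sided z-test for a difference in means.
se = math.sqrt(statistics.variance(control) / n
               + statistics.variance(treated) / n)
z = diff / se
p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value

print(f"difference: {diff:.2f} mmHg, p = {p_value:.6f}")
```

The p-value comes out far below 0.05, so the result is "significant" - but a 0.5 mmHg drop would matter to nobody. Statistical significance tells you the effect is probably real, not that it's worth having.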

There are other important factors to think about too when looking at a paper/trial/study. Rather than go through all of them, it makes more sense to hand over to the experts at this point. There are really useful study appraisal checklists available on the CASP (Critical Appraisal Skills Programme) website, which guide you through the things you should ask yourself when you're considering the value of a piece of research.

I've hopefully discussed the main issues that occur when trying to implement evidence. In my next post, I plan to talk more about what we actually mean by "evidence" and how we can decide whether one piece of evidence is more or less worth using than another.

Thursday, 9 July 2015

Why We Need Evidence in Medicine


I’m a self-confessed geek. I like learning. I like working things out. I like science. I did a science degree before going to medical school and I spend my spare time on such fun activities as postgraduate certificates and systematic reviews. This doesn’t mean that I think of medicine as a science – I think there is definitely an art to lots of what I do – but I do think that it’s important for doctors to have a good understanding of scientific methodology so that they can interpret the vast amount of evidence they are presented with when making clinical decisions.
This is the first in a series of posts about evidence-based medicine. It's deliberately simplistic so that those outside of the medical field can understand it, but I hope it doesn't come across as patronising. The premise of this post is to explain why we need evidence in medicine. In later posts, I'll go on to talk about what constitutes evidence, problems we face in evidence-based medicine and ways in which we can use it to benefit patients.
I’ve heard numerous colleagues tell me that they “aren’t convinced by this evidence based medicine lark”. Their arguments usually centre around the fact that they have seen interventions work before and therefore are happy to use them again. On the surface, this is a reasonable argument. Someone is poorly. You give them some tablets. They get better. Next time you see someone who is poorly in the same way, you give them the same tablets. They get better too. The problem is, how do you know that they wouldn’t have got better anyway?
Let’s take a common example – a cold. If you have a cold and you take a course of antibiotics, you’ll probably feel better within 2-3 days. However, if you don’t take the antibiotics, you’ll also feel better in the same amount of time. That’s just how long it takes for a cold to get better, but if you gave every patient with a cold some antibiotics, you’d be forgiven for thinking you had found an excellent treatment. Observations like this are very useful in medicine. They’re the basis of a lot of important discoveries. They just aren’t the be-all and end-all.
So, how do you work out whether your treatment actually makes a difference? This is where trials come in. In very simple terms, if you want to know whether or not your treatment works, you need to randomly give it to half the people you see with a particular condition and not give it to the other half. If the half who get the treatment get better more quickly than those who don’t, that’s a good indicator that your treatment probably does make a difference. Of course it’s more complicated than that, but that’s the basic premise.
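The logic above can be sketched as a toy simulation (the illness, numbers and recovery times are all hypothetical): patients are randomly allocated to a "treatment" that, by construction, does nothing for a self-limiting illness. Everyone in both arms recovers, which is exactly why treated patients alone can't tell you anything - only the comparison between arms can.

```python
import random
import statistics

random.seed(1)

# Hypothetical sketch: a self-limiting illness that resolves in about a
# week. The "treatment" deliberately has no effect on recovery time.
def recovery_days(got_treatment: bool) -> float:
    # Recovery time is the same whether or not treatment was given.
    return max(1.0, random.gauss(7.0, 2.0))

# Randomly allocate 200 patients to the treatment or control arm.
allocation = [random.random() < 0.5 for _ in range(200)]
treated = [recovery_days(True) for got in allocation if got]
control = [recovery_days(False) for got in allocation if not got]

print(f"treated: all recovered, mean {statistics.mean(treated):.1f} days")
print(f"control: all recovered, mean {statistics.mean(control):.1f} days")
# Both arms recover in roughly the same time, so the treatment adds
# nothing - something you could never see from the treated group alone.
```

If you only ever looked at the treated patients, you'd see a 100% recovery rate and might credit the tablets. The control arm is what exposes the truth.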
You might ask why I really care about evidence at this stage. After all, in the examples I’ve been giving, the patients all get better. There are two issues that we need to address here. The first one is side effects. No medication is risk free, so before choosing to give it to your patient, you want to be comfortable that the benefits of using it outweigh the risks. If you’d stopped at the first, observational stage, you wouldn’t know that your treatment didn’t offer any benefit over not treating, so you’d be subjecting all your patients to the risk of a treatment that might not actually make them any better. Remember, “first, do no harm”.
The next issue is about multiple treatment options. What happens when two people think their treatment works? We owe it to our patients to give them the best possible treatment, so we need to use trials and evidence to work out which option is “best”. This needs to take into account multiple factors, such as which treatments make more people better, which ones get people better more quickly and which ones have the fewest side effects.
This has been a very brief run-down of why we need evidence. Without proper trials, we are left guessing whether or not things work and which things work best, and who wants medicine based on guesswork?