Friday, May 14, 2010

The Myth of the Placebo "Effect"

Thursday, May 12, NPR carried a story on their Talk of the Nation program about the placebo effect, and more specifically, about using placebos as medical treatments. While they mentioned the ethical concerns in passing, I was really disappointed that they presented a largely unskeptical view of the practice. The story also showed a complete lack of understanding of what the placebo effect really is, further perpetuating a lie that has been used to support some quite dangerous practices.

A little quick background: placebo treatments are treatments that have no active ingredients. In drug trials this is often the classic sugar pill: a pill made to look exactly like the actual drug, but with the active ingredients replaced with sugar or other inactive fillers. In a trial, participants are split into two (or more) groups, with one given the active drug and the other the sugar pill. After a period of time, participants' symptoms are measured and they are given a brief survey asking them to detail how they feel about the symptoms being studied and whether they thought they were on the placebo or the real thing. Then, in a perfect world, the results of the study group are compared to the results of the placebo group, and if the drug performs no better than (or worse than) the placebo, it doesn't go to market.
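The split-and-blind procedure described above can be sketched in a few lines of code. This is a toy illustration, not any real trial software; the function name, the kit-code scheme, and the group sizes are all invented for the example:

```python
import random

def assign_blinded(participants, seed=0):
    """Randomly split participants into 'drug' and 'placebo' arms.

    Each participant receives only an opaque kit code; the code-to-arm
    key is held separately, so neither patients nor clinicians know who
    got what until the trial is unblinded for analysis.
    """
    rng = random.Random(seed)
    arms = ["drug", "placebo"] * (len(participants) // 2 + 1)
    rng.shuffle(arms)
    key = {}        # secret: kit code -> arm (sealed until analysis)
    dispensed = {}  # what the clinic sees: participant -> kit code
    for i, person in enumerate(participants):
        code = f"KIT-{i:03d}"
        key[code] = arms[i]
        dispensed[person] = code
    return dispensed, key

dispensed, key = assign_blinded([f"p{i}" for i in range(100)])
# The clinic only ever handles kit codes; the arm assignments
# stay sealed in `key` until the data are in.
```

The point of the separation is exactly the point of the paragraph above: nobody interacting with the patient can behave differently based on which pill is in the kit.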

The name "placebo effect", and the reason for the placebo-based control at all, stems from the fact that the mere act of treating someone can appear to cause an improvement in the treated condition, whether or not the drug itself is having any effect. The important thing to realize is that appearance is largely all it is: the drug or other studied treatment is not having the effect; the effect, if there is one, is happening due to some other cause. By introducing a placebo (and proper blinding) to the study, you are able to remove the act of treating as a possible cause of the effect. The "placebo effect" is just the name applied to these other, untracked causes.

This brings me to my problem with the Talk of the Nation story itself. They treated the placebo, this empty sugar pill, as a cause in its own right, as opposed to a control stand-in for the untracked (or untrackable) variables in these clinical trials. This is an easy mistake to make, and one that human brains evolved to encourage. Back in the day, it was safer to think a pattern of shadows was a tiger and be wrong than to see a tiger and think it was a pattern of shadows. These days, it's fun to be able to see fish and dragons in clouds or the face of Jesus on a potato chip, but it causes real problems when we see patterns in data that aren't there.

Specifically, the doctors interviewed by Jennifer Ludden made two critical mistakes in describing what is happening in placebo controlled studies. The first is confusing correlation and causation: assuming that because effect x happened after doing action y, y must have made x happen. A simplistic example of this (stolen from Fraggle Rock) is to imagine a person who saw a group open umbrellas, and then felt it start raining. That person could believe that opening those umbrellas caused the rain. The same thing happens with medicine: people see a doctor, receive a treatment, take a pill, or go to their local faith healer; their illness goes away; thus they conclude that whatever they did fixed their illness, ignoring anything else that may have happened in the meantime.

Well-controlled trials limit or eliminate many of these variables so that only one thing, the drug or treatment, is being studied. The placebo is just one of those controlling mechanisms, removing such variables as doctor attention, the comfort and stress relief of being treated, and many other environmental factors. It also helps control for the body's natural ability to heal, even (rarely) from the most serious illnesses. In theory, the same proportion of people in the treatment and placebo groups will experience these externally caused effects. Thus, if you give a fake treatment to 100 people and a real treatment to 100 people, and 25 people in the fake-treatment group get better but 50 in the real-treatment group do, you have good reason to believe that the extra 25 people got better because of the treatment itself, not from the shared causes.
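The arithmetic in that 25-versus-50 example can be checked with a standard two-proportion z-test. A minimal, stdlib-only sketch (the helper name is mine, and the test choice is illustrative; real trials specify their statistics in advance):

```python
from math import sqrt, erfc

def two_proportion_z(improved_a, n_a, improved_b, n_b):
    """Two-sided z-test for a difference between two improvement rates."""
    p_a, p_b = improved_a / n_a, improved_b / n_b
    pooled = (improved_a + improved_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = erfc(abs(z) / sqrt(2))  # two-sided normal tail probability
    return z, p_value

# 25 of 100 improve on placebo, 50 of 100 on the real treatment
z, p = two_proportion_z(25, 100, 50, 100)
# z comes out around 3.7 and p well under 0.001: a gap this large is
# very unlikely to come from the shared (placebo-controlled) causes alone.
```

With a smaller gap, say 25 versus 30, the same calculation would not clear a conventional significance threshold, which is exactly why both arms are needed before crediting the drug.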

The other common, related mistake, which Dr. Arthur Barsky touched on very briefly in the interview, is confirmation bias. People expect certain things to happen, so when those things do happen, they notice them more. An example of this is the "full moon myth" in hospitals: the myth goes that on nights with full moons, more injuries are admitted. This gets reinforced for hospital workers because on some nights the number of admissions either is, or feels, higher, and when they look outside they see the full moon. Nights when the volume is high but there is no full moon, they don't think about; but they remember the nights that confirm the myth, because that's what they've been told to expect. If you remove this expectation and look at the statistics, no such pattern appears.
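A toy simulation makes the full-moon illusion concrete. The admissions below are pure noise with no lunar connection at all, yet if you filter for "memorable" nights the way the paragraph describes, full-moon nights end up over-represented in memory. Every rate and threshold here is invented purely for illustration:

```python
import random

random.seed(42)

NIGHTS = 2000
full_moon = [n % 29 == 0 for n in range(NIGHTS)]  # roughly 1 night in 29
# Admissions are random noise and do not depend on the moon at all.
admissions = [random.gauss(20, 5) for _ in range(NIGHTS)]

def mean(xs):
    return sum(xs) / len(xs)

# Reality: average admissions are essentially the same either way.
moon_avg = mean([a for a, m in zip(admissions, full_moon) if m])
other_avg = mean([a for a, m in zip(admissions, full_moon) if not m])

# Memory: a busy full-moon night is memorable; a busy ordinary night
# is mostly forgotten (here, remembered only 10% of the time).
busy = 25
remembered_moon = sum(1 for a, m in zip(admissions, full_moon)
                      if m and a > busy)
remembered_other = sum(1 for a, m in zip(admissions, full_moon)
                       if not m and a > busy and random.random() < 0.1)

moon_share_of_memory = remembered_moon / (remembered_moon + remembered_other)
# Full moons are ~3.4% of nights, but make up a far larger share of the
# remembered busy nights, so the myth "confirms" itself.
```

The statistics (`moon_avg` versus `other_avg`) show nothing; only the selective memory does.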

The same thing happens with medical treatments. People expect a treatment to have an effect. Thus, when someone is on a drug and feels the illness subside, they attribute it to the medicine. If their symptoms get worse, they just say "oh, it's a bad day" or the like; conversely, if the symptoms get better while off the medication, they just feel lucky or attribute it to lingering effects of the drug. The change in symptom severity could be completely random, but it's attributed to the treatment because the treatment is expected to help with the symptoms.

In the same way, you notice such variation more because you're asked to pay more attention to it. Take chronic pain as an example: day to day, its severity varies. While a sufferer may notice strong swings, mild changes in pain go largely unnoticed. When they go on a treatment, however, they are often asked to rate their pain level on a daily, if not more frequent, basis. This forces them to pay attention to even small changes they would not have noticed before. This aspect is highlighted by Dr. Barsky's observation that 25% of study subjects on placebo report side effects. Taking a sugar pill doesn't make you tired or nauseous, but when you feel tired or nauseous while taking a sugar pill, you notice it more because you're expecting a side effect. This is another variable that properly blinded placebos are supposed to control for, because the effect happens with or without active treatment.
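That 25% figure is plausible from base rates alone. Here is a toy model, assuming (purely for illustration, the number is not from the interview) a roughly 1% chance per day of feeling tired or nauseous for ordinary reasons; over a 30-day symptom diary, about a quarter of placebo subjects will have something to write down:

```python
import random

random.seed(1)

DAYS = 30
BASELINE_DAILY_RATE = 0.01  # assumed everyday chance of tiredness/nausea

def run_trial(n_subjects):
    """Subjects take only a sugar pill but keep a daily symptom diary.

    Any everyday symptom that happens to occur during the trial gets
    logged, and hence attributed to the 'drug' as a side effect.
    """
    reporters = 0
    for _ in range(n_subjects):
        if any(random.random() < BASELINE_DAILY_RATE for _ in range(DAYS)):
            reporters += 1
    return reporters / n_subjects

placebo_side_effect_rate = run_trial(5000)
# Analytically: 1 - 0.99**30 is about 0.26, so roughly a quarter of
# subjects report a "side effect" from an inert pill.
```

The diary doesn't create the tiredness; it creates the attribution, which is exactly why the placebo arm has to absorb this effect too.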

Another pair of issues that clouds medical studies, especially of subjective ailments, and that placebos help control for, is the natural desire to help others and the desire to get something for your effort. The first is a problem because people may over-report the benefits of a treatment when they believe the doctor worked hard to help them. Your pain might honestly sit somewhere between a 5 and a 7; if you think the doctor tried hard to help you, you'd rate it a 5, but if you got no aid, or your doctor acted distant or uninterested, you'd rate it a 7. This isn't a conscious choice (in most cases), but it will affect overall scoring. Similarly, if you are heavily invested in something (for example, a long-term diet and exercise regimen) or you pay a lot for it, you feel you deserve more out of it and report accordingly, whereas if you did not have to invest much time, effort, or money (for example, sitting and watching a video), that pressure is absent.

None of the above is to say that the variables a placebo controls for have no actual effect. I do not know any scientist or physician who would say spending more time with a patient is a bad thing (insurance companies, on the other hand...). However, it does mean that the mere act of prescribing a placebo is not a substitute for real treatment. The placebo itself (sugar pill, acupuncture, faith healing, etc.) has no effect in and of itself. It is only a stand-in for the many variables that are not being tested. It is one, or many, of those unnamed, untracked variables that is having the effect, not the placebo. Don't replace that active cause with the inactive placebo; you short-change your patient and yourself.

NPR, I strongly recommend you get an alternative view on this. A group of doctors I know would be willing to speak with you: the ones who run Science Based Medicine, specifically Dr. Steven Novella. For further reading, see Science Based Medicine's coverage of the placebo effect, along with a great listing of common logical fallacies to watch for.