
Psychologists Explain Why Their Misconstrued Study Isn’t ‘Anti-Therapy’

The health news cycle can make it seem like Science Says a lot of new, extremely important, often contradictory things about the same topic: Today, an experimental drug is the best treatment for a certain disease. Tomorrow, research shows dietary changes are just as effective. Next week, it’s surgery or bust.

But there’s often a considerable gap between what studies find and what headlines claim. Case in point: According to a press release, a recent study “casts doubt on evidence for ‘gold standard’ psychological treatments,” seemingly challenging the value of going to therapy. As it turns out, the study isn’t anti-therapy at all.

For the study (a review of previous research), psychologists reevaluated the evidence typically used to show that different forms of psychotherapy effectively treat specific conditions. In the end, they didn’t conclude that all therapy is a waste of time. Instead, they determined that, when it comes to certain mental health diagnoses, there’s stronger data to support the use of some therapeutic methods over others.

To learn more, we spoke with the two experts behind the study: Alexander Williams, director of the Psychological Clinic at the University of Kansas’s Edwards Campus, and John Sakaluk, assistant professor of social psychology at the University of Victoria in Canada. 

They reassured us that, yes, there is solid evidence therapy works — but you need to make sure the treatment fits the problem.


Your study looked at empirically supported treatments, or ESTs. What are these? Can you give some examples?

Alexander Williams (AW): There are different types of psychotherapy. Empirically supported treatments are a combination of a specific diagnosis and a type of therapy that has clinical trials showing it helps people with the diagnosis. Cognitive behavioral therapy for depression is an example of an EST. Interpersonal therapy for depression is an example of an EST. Cognitive behavioral therapy for generalized anxiety disorder is an example of an EST. But interpersonal therapy for generalized anxiety disorder isn’t an EST. 

John Sakaluk (JS): Empirically supported therapies are therapies in which clinicians and mental health consumers can place a high degree of trust that they work.

Why did you decide to review the literature on empirically supported treatments?

JS: The backdrop of the work is the replicability crisis in mental health research. After some surprising questionable findings were published in flagship psychological research journals, and with a growing awareness that psychologists were analyzing data in ways that perhaps were not the best ways, people developed a renewed interest in [understanding] to what extent we could replicate findings published in journals. And any way you slice it, the replication rate was not what I think many people would have hoped for. So we looked at this review as an opportunity to peek under the hood of the literature that really comes close to touching on people’s everyday lives. 

Do your findings mean therapy doesn’t work?

AW: No. Around one in five of the treatments we looked at had strong evidence that they were effective. The other thing is, there is a large body of clinical literature showing that, the vast majority of the time, therapy works a lot better than not getting any help.

JS: We’re not saying that therapy doesn’t work. And we’re not saying that all therapies are the same. Our review shows that, based on the metrics we looked at, some therapies have exceptionally good evidence on their side. A good case of an EST with exceptionally strong research on its side is exposure therapy for specific phobias. Therapists will actually describe seeing observable changes in patients between individual exposure therapy sessions.

Do you worry that the news coverage of your study might make people think science is wishy-washy — that it tells us one thing is true today and the opposite is true tomorrow, and so how can we trust it?

JS: One of the important pieces of this conversation is that the scientific method requires us to have a little tolerance of uncertainty. I would not necessarily call it an unforgivable failure of the system that our review revealed the pattern of findings it has. The idea is that we gradually accumulate evidence, we scrutinize it, we look at it again and again and again, and we iteratively improve over time. If we’re going to live in a world where we build social systems that promote well-being premised on scientific findings, we can’t have a panic attack every time science requires us to do a bit of a reset. That’s not a bug of the system, that’s a feature of the system. 

In the lay public, there is sometimes this idea that if Article One says something and Article Two says something else, then by default, the scientific system has failed. And that is not exactly how it works. It may take years — it may take decades — to have this really slow, gradual buildup of accumulation of credible findings to bring us to a new place of understanding. 

What do your findings mean for people who are currently in therapy or might consider it in the future?

AW: Talk with the therapist about how they will measure how therapy is going and evaluate if you’re making progress or not. One way we can do that is to subjectively say, “Do I think I’m doing better than I was in the past?” But our memories and our therapists’ memories can be fickle things. So while that’s one metric to use, it’s probably not the only one therapists should be using. There are other quick metrics that are far from perfect but can help people track their symptoms for most major diagnoses, and that take about one minute to fill out, at most. You can fill that [assessment] out on a weekly or monthly basis to track: Are my symptoms getting better? Am I getting closer to my goals?

How else can a patient be sure they’re getting the best treatment for them?

AW: Ask the therapist up front, “How are we going to decide upon what treatment we’re using?” Look for a therapist who a) wants to do that collaboratively rather than imposing things upon you, and b) cites research supporting what they’re doing versus faux wisdom or experience.

Ask as well about the training the therapist has had. That’s not a perfect measure of how skilled someone is at administering that therapy, but in general we’d think that if someone had 100 supervised hours of training, that’s better than if someone attended a weekend seminar.

The final thing is, the treatments we evaluated were from the American Psychological Association. Their website, PsychologicalTreatments.org, lists all the various therapy-by-diagnosis combos that they classify as empirically supported treatments. In general, if you have a conversation with a therapist about what treatment they’ll use in therapy and they suggest something that isn’t on that list, you should second-guess it.

JS: There should also be some forethought and conversation between the therapist and the client about, “What is our backup plan if this initial plan doesn’t pan out?” Let’s not have it so that if you determine the therapy isn’t working, you’re just in a lurch and have no contingency plan. You want a therapist to say, “Our first plan of attack is XYZ, here’s how we’re going to monitor it, here’s how we’re going to check in, and failing that, this is what we’re going to try next.”

AW: And ‘trying next’ can [go] a lot of different ways. It can be a different form of therapy [or] a referral to a therapist who has different training and expertise. Or it might be a referral to a medication provider.

What do you hope further research on this topic will find, and how will that help patients?

JS: What excites me most about our review is how we are broadening definitions of what counts as evidence. The standard statistical criteria used in studies to determine if an EST is a helpful therapy won’t be all that useful if those statistical criteria are misreported, come from incredibly small or imprecise studies, or are implausibly frequent in their “significance,” given the sample size. From our perspective, all of these pieces are necessary ingredients in order for patients to have maximum confidence that they are getting the strongest evidence-based care possible.


This interview has been condensed and lightly edited.


About us

The Paper Gown, powered by Zocdoc, covers health and healthcare with a focus on patient experiences — inside and outside the exam room, before check-ups and after surgery, across all states of health. We strive to tell stories that help patients feel informed, empowered and understood. Learn more.