by Michael D. Anestis, M.S.

I try to keep a calm stance towards the topics we cover on PBB. Obviously, Joye and I have our own beliefs about things, but we try to keep our opinions out of our writing to the degree that such a move is possible and to base all of our comments on empirical evidence. In doing this, we try to keep emotion out of the picture and thereby make it easier to facilitate civil conversations amongst readers.

That being said, occasionally a story takes hold in the media that simply makes me incredibly angry. When the story is prompted by ignorance (e.g., a journalist with no training in data analysis writing on a topic with which he or she is unfamiliar), I find it relatively easy to turn that anger into frustration and understanding and to channel that into a calm discussion of the facts, thereby debunking myths and errors. When the story is prompted by a willful misrepresentation of the evidence and an open effort to distort reality through a highly restricted discussion of data based purely upon highly flawed studies, on the other hand, my response is a bit harsher.

All of that being said, today I would like to reflect on an article posted on a number of websites, including that of the American Psychological Association, an organization more than capable of looking at all of the data and accurately describing the facts (thanks to PBB guest author Dr. James Coyne for alerting me to this and making my topic choice for the day that much easier!). This article is essentially an advertisement for the meta-analysis on psychodynamic therapy written by Jonathan Shedler and published in the American Psychologist that we discussed recently on PBB (click here for our coverage of the piece as well as free access to the journal article itself).
Now, keep in mind as you read my article today that I have nothing against Dr. Shedler as a person or psychodynamic therapy as a general concept; however, I do have strong negative opinions about sloppy interpretations of data and misrepresentations of facts, and the article covered today is weighed down heavily by both. Before reading my discussion of the article, you might find it useful to read it for yourself so that you can form your own impressions first. Click here to read it on the APA website and here to read the same text on Health Canal (you can leave comments on the article at Health Canal). My approach today will be to quote the text of the article and then to reply below each quote.

***
"Psychodynamic psychotherapy is effective for a wide range of mental health symptoms, including depression, anxiety, panic and stress-related physical ailments, and the benefits of the therapy grow after treatment has ended, according to new research published by the American Psychological Association."

In just about any news article, the first sentence (and the headline) will be an attention grabber, and that's fine. When the APA is publishing material on its site, however, this is a dangerous approach, because the claims put forth in this sentence are, in large part, based upon horrifically flawed data. As we discussed in detail in our original coverage of the Shedler piece, this is so far from the truth that it would be difficult to overstate. Unfortunately, the vast majority of readers of that article are unaware of this fact and will never encounter those data. As such, their lasting impression of this issue is an unfounded claim.

***

"To reach these conclusions, Shedler reviewed eight meta-analyses comprising 160 studies of psychodynamic therapy, plus nine meta-analyses of other psychological treatments and antidepressant medications, representing the best available scientific evidence on psychodynamic therapy. The eight meta-analyses all showed substantial treatment benefits, according to Shedler. Shedler focused on effect size, which measures the amount of change produced by each treatment. An effect size of 0.80 is considered a large effect in psychological and medical research. One major meta-analysis of psychodynamic therapy included 1,431 patients with a range of mental health problems and found an effect size of 0.97 for overall symptom improvement (the therapy was typically once per week and lasted less than a year). The effect size increased by 50 percent, to 1.51, when patients were re-evaluated nine or more months after therapy ended. Effect sizes were impressive even for personality disorders: deeply ingrained maladaptive traits that are notoriously difficult to treat. The effect size for the most widely used antidepressant medications is a more modest 0.31. The findings are published in the February issue of American Psychologist, the flagship journal of the American Psychological Association."

First off, the writer of this article should be applauded for including actual data here and attempting to explain the meaning of effect size. Most mental health articles choose to skip such information, which is a disservice to the reader. The numbers cited by the author sound highly compelling, and the reader is left to believe that they come from high quality data that produce irrefutable results, except that this evidence is of poor quality. The results reflect the poor data and low quality studies included in Shedler's meta-analysis, and if anyone takes the time to look at the actual results of the studies included in Shedler's work, they'll see a completely different picture.

***

"'The American public has been told that only newer, symptom-focused treatments like cognitive behavior therapy or medication have scientific support,' said study author Jonathan Shedler, PhD, of the University of Colorado Denver School of Medicine. 'The actual scientific evidence shows that psychodynamic therapy is highly effective. The benefits are at least as large as those of other psychotherapies, and they last.'"

I understand that including quotes by the authors of the article is a useful tool, but here again, a conclusion is stated and only a passing reference is made to data. When discussing science, such a non-scientific approach is highly unimpressive. Shedler referred to these studies as the "best available scientific evidence on psychodynamic therapy." That's fine, but if this is the best that is available (an arguable assertion), then what is available is not good enough.

***

"'The consistent trend toward larger effect sizes at follow-up suggests that psychodynamic psychotherapy sets in motion psychological processes that lead to ongoing change, even after therapy has ended,' Shedler said. 'In contrast, the benefits of other "empirically supported" therapies tend to diminish over time for the most common conditions, like depression and generalized anxiety.'"

As with the sentences discussed above, the claims of follow-up results are also not supported by the evidence.
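To make the effect size statistic concrete: the figures quoted in the press release are standardized mean differences (Cohen's d), the difference between two group means divided by their pooled standard deviation. Here is a minimal sketch in Python; the symptom scores are invented for illustration and are not taken from any of the studies under discussion:

```python
import math

def cohens_d(group1, group2):
    """Cohen's d: mean difference divided by the pooled standard deviation."""
    n1, n2 = len(group1), len(group2)
    m1 = sum(group1) / n1
    m2 = sum(group2) / n2
    # sample variances (denominator n - 1)
    v1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# Hypothetical post-treatment symptom scores (lower = better).
# Positive d means the treated group ended up with fewer symptoms.
control = [13, 15, 11, 16, 10, 14, 17, 12]
treated = [10, 14, 9, 13, 8, 12, 15, 11]
d = cohens_d(control, treated)
print(round(d, 2))  # 0.82, just above the conventional "large" threshold of 0.80
```

By the usual convention, d = 0.2 is read as small, 0.5 as medium, and 0.8 as large, which is why the 0.97 and 1.51 figures sound so striking, and why it matters so much whether the underlying studies were sound.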
***

"'Pharmaceutical companies and health insurance companies have a financial incentive to promote the view that mental suffering can be reduced to lists of symptoms, and that treatment means managing those symptoms and little else,' he added. 'But more often, emotional suffering is woven into the fabric of the person's life and rooted in relationship patterns, inner contradictions and emotional blind spots. This is what psychodynamic therapy is designed to address.'"

This argument is one that consistently makes me irate, and it does so for a number of reasons. First, he implies that empirically supported treatments such as cognitive behavioral therapy ignore everything except the symptoms listed in the DSM. There is no evidence that CBT does this or that psychodynamic therapy does it any less. It is simply a talking point, repeated by those who oppose the EST movement often enough that people have come to believe that it is true. Second, it belittles the importance of symptoms. Here's the thing: symptoms like panic attacks, hopelessness, suicidal ideation, the inability to experience pleasure (anhedonia), substance withdrawal, binge eating, non-suicidal self-injury, and antisocial behavior are actually quite important. Helping a client to no longer experience those things is no small feat, and nobody benefits when we act as though this is not the case.

That being said, I encourage you to read our earlier article on this topic to see precisely what I mean here (I describe the actual studies themselves in detail), and click here to read a guest article written by John Ludgate, Ph.D., on relapse in cognitive behavioral therapy (CBT). That text will provide you with a more thorough understanding of what we know about this topic. Moving beyond this point, here is where the use of meta-analysis caused some real problems. I realize that my opinion on meta-analysis is not universally accepted, but it would be difficult to argue against the point that many of the studies included in Shedler's study were of low quality and that the results directly comparing empirically supported treatments to psychodynamic therapy actually directly contradicted his conclusions.
Additionally, the phrasing used by Shedler lumps proponents of ESTs in with the pharmaceutical and insurance industries, which are highly unpopular with most people. As such, scientists are suddenly the "bad guys" pushing an agenda upon the people, whereas psychodynamic therapists are the anti-establishment offering freedom from oppressive interventions. Empirically supported treatments are not motivated by profits (by the way, they take less time and cost less money than most psychodynamic approaches), nor are they associated with unpopular industries. They represent the belief system of scientists, who rigorously test their theories through systematic investigations. As it turns out, it's hard to be more closely associated with the establishment than psychodynamic therapy, which is such a popular conceptualization of mental illness and psychotherapy that it is hard to find any media representation that takes any other approach. This group had so much control over this field for so long that the initial two versions of the DSM used their jargon and directly asserted that mental illnesses were best thought of in those terms.

Additionally, the newer, evidence-based approaches specifically target the issues that prompted the individual to come in for treatment and address them in an effective, time-limited manner, rather than the old school psychoanalytic preference of charging hundreds of dollars per session for multiple sessions per week over a period of several years. If a particular set of symptoms is treated and the client still has unresolved problems that he or she wants to address, practitioners who utilize ESTs are more than happy to address them. Nobody is kicked out of therapy or forced to ignore their own problems; instead, ESTs simply prioritize things like reducing suicide risk as quickly as possible, and I'm not certain I understand the counter argument to that priority. For some specific psychiatric conditions, whether or not "more often" emotional suffering is woven into the fabric of the person's life and rooted in relationship patterns, the treatment that makes those symptoms disappear and thereby improves the individual's quality of life is the better choice.

The bottom line is, psychology is a science and, as such, our conclusions need to be founded upon evidence. When people simply make claims like this without any sort of support and we take them at their word for it, the entire foundation of the field collapses inward, and those most in need of help (the millions of people suffering from mental illnesses) are harmed.

***

Shedler also noted that existing research does not adequately capture the benefits that psychodynamic therapy aims to achieve. "It is easy to measure change in acute symptoms, harder to measure deeper personality changes. But it can be done."

This comment mystifies me, and it's a great example of how our own biases can cause us to put forth arguments based upon contradicting points. Up until this point of the article, we have been led to believe that the evidence supports psychodynamic therapy. Still, the claims about deeper change lack support, so an answer is needed. What answer is offered? Evidence does not capture psychodynamic therapy. First they say look at all of this incredible supporting evidence, and then they attempt to disarm their opponents by pointing out that evidence in general is not useful. The thing is, there is a substantial research base left untouched (or misrepresented) in the Shedler analysis. EST researchers examine the impact of treatment on a vast array of outcomes unrelated to DSM symptoms, and a quick search through our articles on these treatments will provide you with numerous examples of this.
***

"The research also suggests that when other psychotherapies are effective, it may be because they include unacknowledged psychodynamic elements. Four studies of therapy for depression used actual recordings of therapy sessions to study what therapists said and did that was effective or ineffective. The more the therapists acted like psychodynamic therapists, the better the outcome, Shedler said. 'This was true regardless of the kind of therapy the therapists believed they were providing,' Shedler said. 'When you look past therapy "brand names" and look at what the effective therapists are actually doing, it turns out they are doing what psychodynamic therapists have always done: facilitating self-exploration, examining emotional blind spots, understanding relationship patterns.'"

The studies upon which Shedler's conclusion here is based are so flawed it is actually mind-boggling. I discussed the flaws of this analysis in great depth in our original piece on this topic, so rather than rehash them here, I'll again encourage you to read the original article. He himself said in the original paper that "qualitative analyses of the verbatim sessions transcripts suggest that the poorer outcomes associated with cognitive interventions were due to implementation of the cognitive treatment model in dogmatic, rigidly insensitive ways by certain of the therapists." You can not simply shine a light on the results that (appear to) support you and disregard the results that contradict your conclusions. In other words: let me shine a light on bad data that appears to support my case but simultaneously disavow data in general, so that when people point out all of the evidence against my conclusions, I can say that it only contradicts me because it is not capable of seeing the truth. This is the antithesis of science. Either science is good or it isn't (it is, by the way). Pick one.

***

Before concluding today, I want to provide you with another link, brought to my attention through an email sent through the listserv of the Society for a Science of Clinical Psychology. In this article, published in the LA Times (click here to read it), the results of the Baker et al (2009) report on the use of science in clinical psychology are debated (click here for our coverage of the Baker et al report). This is a distinct but related issue, and I wanted to call attention to three quotes from the LA Times article to drive home my point for today.

Quotes number 1 and 2 are from Drew Westen, a vocal critic of the EST movement. He referred to those who favor the use of ESTs as "largely people who not only don't practice themselves, and therefore have no idea what would be relevant to practice, but have a tremendous disdain for people who do practice." He also said that "[Cognitive-behavior therapy] is deliberately designed to ignore any relevant features of the personality of the individual."

Westen's first points, that EST supporters do not practice, that practicing is required in order to understand therapy, and that EST researchers have disdain for practitioners, are, quite frankly, falsehoods. Many researchers are active clinicians, and I would bet heavily that the proportion of researchers who practice is substantially higher than the proportion of clinicians who conduct and read research. Additionally, not all clinicians are trained in research, whereas all researchers are trained in therapy as part of graduate school and internship.
If experience is required to understand one domain, then aren't non-scientific clinicians the only ones incapable of understanding the entire picture? Finally, upon what evidence does he base the claim that researchers have disdain for clinicians and, along those lines, what evidence does he have that clinicians have less disdain for researchers?

Westen's second point, that CBT was designed to ignore personality, is absurd. CBT is designed to address aspects of mental illnesses that have been shown to be common across individuals in order to maximize symptom relief. Nobody believes everyone will respond in the same manner to the same treatment and, as such, there is plenty of flexibility to work with the client as an individual. Regardless, given that CBT and variants of CBT (e.g., dialectical behavior therapy) have been shown to be effective in the treatment of personality disorders, it seems a bit off base to say that personality is not addressed in CBT.

Quote number 3 is from Michael Lambert, who said "I don't care what psychotherapy the person is getting. I care whether they're responding." He said this in an effort to point out that proponents of ESTs care more about providing a particular treatment than they do about clients responding to treatment. Ironically, however, in his attempt to criticize the EST movement, Lambert actually explained one of the primary reasons why ESTs are so important, and he also shed a light on why the research base upon which Shedler built his case is so flawed. The entire premise of the EST movement is that certain treatments have been shown, on average, to produce better results for particular diagnoses, and these treatments are thus considered the best choice. Along those lines, assessment is required in order to ensure that improvement is happening. Because EST proponents believe in this so wholeheartedly, an enormous research base has developed enabling us to better understand the impact of treatment. Opponents of ESTs, on the other hand, eschew assessments and, as such, have no idea the degree to which their treatment choice is effective.

***

So, what's my overall point today? Quite simply, the claims made by people like Shedler, Westen, and Lambert are not a malicious attempt to mislead the populace, but they represent a sloppy, non-scientific approach to understanding mental health and psychotherapy. In a meta-analysis like the one conducted by Shedler, it is easy for the authors to paint a picture that supports their point and to make very compelling statements that sound intuitively profound. The point is that when we hear somebody say something is a certain way, we need to always ask how they arrived at that conclusion and, when possible, we need to examine the evidence ourselves. If we simply listen to their words or the descriptions of journalists who write about them, we will not see things accurately. If we do not look at the evidence underlying their conclusions, we are vulnerable to falling into traps. When organizations like APA post this type of thing on their website, they make the problem even worse, and the media stories covering such claims result in a form of deception that makes me remarkably upset.

Here on PBB, I likely say things readers disagree with on a daily basis. I will always provide you with the citations upon which my points are based and will openly discuss the data with you. If you know of data I overlooked that calls my point into question, I encourage you to mention it in the comment section so that we can all have a civil discussion about these things.
The next time I change my mind on an issue won't be the first, but that only happens when I am made aware of evidence that makes my previous position a worse reflection of reality than an alternative position. In the meantime, please, when you read about topics like this, make sure that what the article says is actually supported by valid evidence.

Wading through a sea of bad science: A closer look at a meta-analysis comparing long-term and short-term psychotherapy

by Michael D. Anestis, M.S.

As readers of PBB have likely come to realize over the past year, Joye and I believe it is extremely important to fight against misinformation. On a number of occasions, we have covered this issue, writing articles discussing the important flaws in certain research, particularly when that research has become a popular talking point (click here, here, and here for examples of this). Unfortunately, a lot of bad research gets published, sometimes in really strong journals, and that research is often then publicized as accurate and factual.

In 2008, Leichsenring and Rabung published a meta-analysis in the highly influential Journal of the American Medical Association (JAMA) in which they claimed to demonstrate that long-term psychotherapy (defined as at least one year or 50 sessions of psychotherapy) is more effective than short-term psychotherapy. This study became a popular piece of evidence for individuals who already believed this to be the case, and it was cited in a number of journal articles, including Shedler's (2010) piece, which was the subject of one of the articles linked to above.

As you might guess from the opening of this article, it turns out that the Leichsenring and Rabung (2008) article was full of substantial flaws that completely negate the conclusions they drew. In a paper just published in Psychotherapy and Psychosomatics by Sunil Bhar, Brett Thombs, Monica Pignotti, Marielle Bassel, Lisa Jewett, PBB guest contributor Jim Coyne, and Aaron Beck, these flaws were discussed in great detail. I have had to sit on these results for several months now as the paper awaited publication, so I am excited to finally have the opportunity to write about them!
I'll briefly summarize each of the areas addressed by Bhar et al (2010).

Calculation errors

In their meta-analysis, Leichsenring and Rabung (2008) analyzed 8 studies comparing long-term psychodynamic psychotherapy (LTPP) to a variety of other interventions for a number of diagnoses and concluded that LTPP was "significantly superior to shorter-term methods of psychotherapy with regard to overall outcome, target problems, and personality functioning" (p 1563). If the data supported this claim, it would be a stunning reversal of clinical research conducted over the past several decades. The data did not, however, do anything of the sort.

The biggest issue with the Leichsenring and Rabung (2008) meta-analysis is that they ran the wrong analysis. Bhar and colleagues (2010) explained that, in calculating effect sizes (remember, effect sizes are a measure of how powerful a finding is), the authors used the wrong conversion formula. The formula they used is intended for conversions of between-group point biserial correlations to standardized difference effect sizes, but the authors fed it within-group effect sizes, which created faulty results. Now, obviously this is a fairly obscure statistical reference, but let me explain the consequences of this: even though no single study in the analysis demonstrated an overall standardized mean difference greater than 1.45, the combined effect size was calculated as 1.8. Additionally, because of this, they generated a between-group effect size of 6.9, which means that 93% of the variance was explained. That is essentially impossible. What does this mean? It means that they concluded that LTPP drastically outperformed control conditions when, in reality, their estimates severely overstated the case. These miscalculations would be equivalent to earning a C on every exam you take during a semester and then concluding that your average was a B+.

Issues with the comparisons

There are actually several issues here, one of which we've discussed a number of times before. First of all, the authors compared studies in which participants were being treated for a wide variety of conditions and combined those results into one outcome. Some of the clients were being treated for anorexia nervosa, others for borderline personality disorder, some for "neurosis," and others for a now defunct diagnosis: self-defeating personality disorder. Asking whether one treatment is better than another for everything is a broad and essentially useless question.
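The arithmetic behind that "93% of the variance" claim can be checked directly. For two equal-sized groups, a point-biserial correlation r converts to a standardized difference via d = 2r / sqrt(1 - r^2), and a d converts back to variance explained via r^2 = d^2 / (d^2 + 4). The sketch below is my own illustration of these textbook conversions, not code from either paper; it also demonstrates a related sanity check: a pooled meta-analytic effect is a weighted average, so it can never exceed the largest single-study effect, which is exactly why a combined 1.8 from studies topping out at 1.45 signals a miscalculation.

```python
import math

def r_to_d(r):
    """Convert a between-group point-biserial correlation to Cohen's d
    (equal-n form). Only valid when r truly reflects a between-group
    contrast; feeding it within-group (pre-post) statistics inflates d."""
    return 2 * r / math.sqrt(1 - r ** 2)

def d_to_variance_explained(d):
    """Proportion of outcome variance attributable to group membership."""
    return d ** 2 / (d ** 2 + 4)

def pooled_effect(effects, variances):
    """Fixed-effect meta-analytic pooling: an inverse-variance weighted mean."""
    weights = [1 / v for v in variances]
    return sum(w * e for w, e in zip(weights, effects)) / sum(weights)

# The between-group d of 6.9 flagged by Bhar et al would mean that roughly
# 92-93% of all outcome variance is explained by treatment assignment:
print(round(d_to_variance_explained(6.9), 2))  # 0.92

# For comparison, a conventionally "large" effect of d = 0.8 explains ~14%:
print(round(d_to_variance_explained(0.8), 2))  # 0.14

# A pooled effect must lie within the range of the individual study effects
# (per-study effect sizes and variances below are hypothetical):
effects = [0.6, 1.1, 0.3, 1.45, 0.9, 0.7, 1.2, 0.5]
variances = [0.12, 0.20, 0.15, 0.25, 0.18, 0.10, 0.22, 0.14]
pooled = pooled_effect(effects, variances)
assert min(effects) <= pooled <= max(effects)
```

No clinical treatment comes close to explaining 92% of outcome variance, which is why a d of 6.9 should have jumped out as a red flag before publication.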
When you combine the results of treatments for a number of diagnoses together, you are glossing over those results and essentially combining apples and oranges; there is an abundance of research indicating that particular treatments are better than others on average for particular conditions.

In addition to comparing studies measuring the treatment of different diagnoses, Leichsenring and Rabung (2008) also combined completely different treatments into single groups. The general comparison in their study was LTPP versus short-term psychotherapy. Included in the short-term psychotherapy group were:

- Waitlist control condition (e.g., no treatment at all!!!)
- Nutritional counseling
- Standard psychiatric care
- Low contact routine treatment
- Treatment as usual in the community
- Referral to alcohol rehabilitation
- Provision of a therapist phone number

Looking over that list, do you think that represents a strong example of what typically occurs in empirically supported short term psychotherapy? There were only two examples in which LTPP was compared to an empirically supported treatment. In one, LTPP was compared to dialectical behavior therapy (DBT) for borderline personality disorder (BPD) and, in the other, LTPP was compared to family-based treatment for anorexia nervosa. Perhaps not shockingly, LTPP did not outperform either treatment. In other words, adding a huge number of sessions and a large amount of time did not result in any benefits (although it almost certainly cost substantially more money to the client). The only time LTPP outperformed another form of therapy in any of the trials, it was being compared to no therapy at all or an unvalidated treatment. Those are hardly compelling results.

The final issue with the comparisons in the Leichsenring and Rabung (2008) study was that they were severely underpowered. The issue of power and publication bias is a tricky but important one. Analyses have indicated that, in order for a treatment study to be able to actually answer the questions it asks, a minimum of 50 participants need to be in each treatment group. In the Leichsenring and Rabung (2008) study, there were anywhere from 15 to 30 participants in each group. Because a small sample size requires a HUGE effect in order to be statistically significant, published results become artificially inflated. Think of it this way: journals don't tend to publish results that are not statistically significant, so small studies coming up with non-significant results tend to go unpublished, and only the extreme examples end up in print. Leichsenring and Rabung (2008) believed that publication bias was not an issue because of non-significant correlations between effect size and sample size. Because only 8 studies were used, however, a significant correlation was nearly impossible to find and, as such, the absence of one is essentially meaningless.
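The link between small samples and inflated published effects can be sketched with a back-of-the-envelope calculation: in an equal-n two-sample t-test, the smallest effect that can reach p < .05 is roughly the critical t value times sqrt(2/n). The critical t values below are hard-coded approximations for the relevant degrees of freedom rather than pulled from a statistics library:

```python
import math

# Approximate two-tailed alpha = .05 critical t values, keyed by degrees
# of freedom (df = n1 + n2 - 2 for a two-sample t-test)
T_CRIT = {28: 2.048, 98: 1.984}

def minimum_detectable_d(n_per_group):
    """Smallest Cohen's d an equal-n two-sample t-test can find significant."""
    df = 2 * n_per_group - 2
    return T_CRIT[df] * math.sqrt(2 / n_per_group)

print(round(minimum_detectable_d(15), 2))  # 0.75: only very large effects reach p < .05
print(round(minimum_detectable_d(50), 2))  # 0.4: a moderate effect suffices
```

With 15 participants per group, only effects near d = 0.75 or larger can come out statistically significant, so the significant results that do get published from such small studies are, almost by construction, overestimates.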
Potential bias in studies

Bhar et al (2010) then shifted their focus to the lack of reasonable assessments of bias in the studies included in the Leichsenring and Rabung (2008) analysis. None of the studies included in the Leichsenring and Rabung (2008) meta-analysis properly assessed treatment integrity, meaning we have no idea to what degree therapists actually administered the treatments as they are designed. Additionally, there was great variety in the number and frequency of treatment sessions and in the presence of medication augmentation, making it very difficult to make valid comparisons. The authors concluded that few of the studies took appropriate safeguards to ensure that participants were properly randomized, that randomization sequencing was concealed from people making assessments, that assessors were blind to condition post-treatment, that missing data were analyzed appropriately, and that the authors of the original studies actually included all of the relevant outcomes.

Summary

Ultimately, what Bhar and colleagues (2010) found was that Leichsenring and Rabung (2008) used too few studies, that those studies were methodologically weak, that the analyses they ran were incorrect, that diagnoses and treatments were combined into groups that made no sense, that some short-term psychotherapies actually did not involve any therapy, and, perhaps worst of all, that their miscalculations led to impossible results completely unrepresentative of reality. The bottom line is, research simply does not support the claim that LTPP is more effective than short-term psychotherapy (and those distinctions aren't very useful anyway).

A number of thoughts jump out at me when I think about this issue. First of all, how did the Leichsenring and Rabung (2008) study get published in JAMA? Secondly, what can be done to keep people from simply accepting the results of meta-analyses as though they are representations of fact rather than studies full of at least as much bias and as many flaws as any single study? Meta-analyses like this that make broad claims based upon weak studies and miscalculations are a real problem unless readers are willing to go to the original studies and double check the claims being made by authors. That, unfortunately, is not a realistic expectation.

http://www.psychotherapybrownbag.com/psychotherapy_brown_bag_a/2010/01/abandoning-science-and-logic-in-the-pursuit-of-an-agenda.html