
Measures Assessing the Quality of Case Conceptualization:

A Systematic Review
Sandra Bucci, Lorna French, and Katherine Berry
The University of Manchester
Context: The quality of case conceptualization differs across clinicians. It is unclear which case
conceptualization quality assessment measure researchers, clinicians, and trainers might use for their
specific purpose. Objective: We evaluated measures that purport to assess the quality of case
conceptualizations. Method: We searched EMBASE, PubMed, Medline, and PsycINFO databases
with the terms case formulation* OR case conceptuali*ation*. Further specific terms were then used to
narrow the search. Results: Of all the articles reviewed, 8 measures of case conceptualization met
inclusion criteria. There is no single measure that has been validated across a range of different settings.
However, the Case Conceptualisation Coding Rating Scale, Case Formulation Content Coding Method,
and Case Formulation Quality Checklist have been most robustly tested. Conclusion: Further
research is required to test the psychometric properties of measures so that robust quality measures
can be used across different settings/client groups. © 2016 Wiley Periodicals, Inc. J. Clin. Psychol.
72:517–533, 2016.

Keywords: case formulation; case conceptualization; reliability; validity; quality

Case conceptualization, also known as case formulation, is a way of drawing on psychological
theory to describe and explain individual clinical presentations in a way that is coherent and
personally meaningful to clients (Dudley, Park, James, & Dodgson, 2010). According to the
British Psychological Society’s Good Practice Guidelines on the use of psychological formu-
lation (Johnstone, Whomsley, Cole, & Oliver, 2011), all case formulations aim to explain the
development and maintenance of clients’ difficulties and inform a plan of intervention based
on the psychological processes identified. Case conceptualization is a central process in most
schools of psychotherapy. It is a core competency in clinical psychology in the United Kingdom
(2013) and is consistent with evidence-based practice in psychology as adopted by the American
Psychological Association (Anderson, 2006).
According to Johnstone and Dallos (2013), formulations across therapeutic schools, and
across countries, differ in terms of the factors viewed as most relevant (e.g., thoughts, feelings,
behaviors, and so on), the explanatory concepts they draw on (e.g., schemas, the unconscious,
forms of feeling), the emphasis placed on reflexivity, the degree to which an expert versus
collaborative stance is adopted, their position in relation to psychiatric diagnosis, and the
way in which the conceptualization is developed, shared, and used within therapy. However,
irrespective of the school of thought or country of practice, a case conceptualization approach to
psychotherapy initially involves the clinician deciding on a theoretical perspective and developing
a hypothesis about the factors that cause and maintain the problems to then guide a personally
meaningful intervention, test the hypothesis, and revise the intervention accordingly (Johnstone
& Dallos, 2013; Persons, Beckner, & Tompkins, 2013).

We thank all the authors who responded to our e-mail requests for further information in relation to the
scales reviewed in the current paper and the blind reviewers of this article for their insightful comments.

Please address correspondence to: Dr. Sandra Bucci, School of Psychological Sciences, The University
of Manchester, 2nd Floor, Zochonis Building, Brunswick Street, Manchester M13 9PL, UK. E-mail:
Sandra.bucci@manchester.ac.uk

JOURNAL OF CLINICAL PSYCHOLOGY, Vol. 72(6), 517–533 (2016)    © 2016 Wiley Periodicals, Inc.
Published online in Wiley Online Library (wileyonlinelibrary.com/journal/jclp). DOI: 10.1002/jclp.22280



Conceptualization1 is a core skill for clinical psychologists at all levels and in all specialties, is
fundamental to individualized treatment planning, and has been recognized as a clinical skill in
its own right (Beck, 1985; Dudley et al., 2010). Case conceptualization is particularly important
when working with complex presentations or when standard, problem-specific conceptualiza-
tions and interventions are ineffective (Davidson, 2008). Case conceptualization is defined by its
flexible, person-specific approach and is the clinical psychologist’s alternative to more medically
oriented diagnostic approaches to mental health problems. This is particularly the case for the
so-called functional psychiatric diagnoses such as schizophrenia, bipolar disorder, and person-
ality disorder (Bentall, 2006). Rather than describe mental health problems within the context
of an illness model, case conceptualization can offer benefits over and above that of diagnostic
approaches because its focus is creating personal meaning, agency and hope (Johnstone et al.,
2011). This approach has been shown to counter some of the negative consequences of receiving
a psychiatric diagnosis, such as increased feelings of powerlessness and worthlessness
(Honos-Webb & Leitner, 2001), to increase psychiatric staff's understanding of clients' problems,
resulting in more optimism about treatment (Berry, Barrowclough, & Wearden, 2009), and to
decrease clients' challenging behavior (Ingham, 2011).
Researchers have investigated the clinical effectiveness of a case conceptualization approach
to psychotherapy using various therapeutic modalities (e.g., cognitive behavioral therapy, psy-
chodynamic psychotherapy, interpersonal therapy) across different clinical groups. Benefits in
outcomes as a result of meaningful and explanatory case conceptualizations have been noted
among clients with anxiety and depression (Persons, 2006), psychosis (Chadwick, Williams, &
Mackenzie, 2003), and bulimia nervosa (Hendricks & Thompson, 2005), to name a few. How-
ever, despite these benefits, review studies have shown mixed evidence regarding the efficacy of
case conceptualizations, and the reliability of cognitive case conceptualizations is modest at best
(for a detailed review, see Aston, 2009; Ghaderi, 2011; Mumma, 2011; Persons, 2006).
Whether or not clinicians agree with regard to case conceptualization for individual clients
has been a notable problem for some time, with past research primarily evaluating the quality
of case conceptualizations embedded in psychoanalytic and psychodynamic traditions. Seitz’s
(1966) study illustrated that independent conceptualizations of the psychoanalytic process by five
analysts resulted in zero consensus. Building on this early work, Luborsky (1977) developed the
core conflictual relationship theme (CCRT) method to more reliably capture the psychodynamic
narrative and overcome the problems inherent in reliably and validly measuring the quality of
case conceptualizations. Since that time, other researchers (e.g., Caston, 1993; Horowitz, Rosen-
berg, Ureño, Kalehzan, & O’Halloran, 1989) have carried out empirical studies investigating the
complexities of measuring the quality of case conceptualizations, with researchers building on
this earlier work to find, in some cases, excellent levels of agreement regarding the key elements
of psychodynamic conceptualization (mean of 0.92; Horowitz & Rosenberg, 1994). However,
such positive findings are not consistent across the literature.
Research shows that while there is good reliability for the descriptive elements of cognitive
models of conceptualization, the explanatory or mechanistic elements of conceptualization have
received less support (Bieling & Kuyken, 2003; Fothergill & Kuyken, 2002; Persons, Mooney, &
Padesky, 1995). For example, Fothergill and Kuyken (2002) found increased reliability on some
but not all inferential aspects of cognitive case formulation, and more experienced therapists
were not consistently more reliable than less experienced raters using a systematically described
cognitive case formulation method. Using an atheoretical model of case conceptualization, Eells,
Kendjelic, and Lucas (1998) found similar results: There was good support for the descriptive
but not the inferential components of formulation. Furthermore, some studies have reported
negative experiences of case conceptualization. For example, Chadwick et al. (2003) examined the
effect of case formulation in cognitive behavioral therapy (CBT) for psychosis on the perception
of the therapeutic alliance and on the strength of delusional and self-evaluative beliefs. The authors found
a degree of ambivalence about formulation: Six clients reported a negative emotional response
(e.g., saddening, worrying, upsetting) to formulation.

1 We acknowledge that the terms case conceptualization (used in the United States) and case formulation
(used in the United Kingdom) both refer to the same underlying principles, and we therefore use both terms
in this article.
From a clinical perspective, the quality of case conceptualization is largely dependent on the
quality of the clinician’s assessment and the information derived from it (Johnstone et al., 2011).
A consequence of this is that the quality of case conceptualization can differ across clinicians
and individual cases. Recognition of variations in the quality of conceptualization has led to the
development of more robust methods for assessing the quality of case conceptualizations, specifi-
cally in terms of reliability and validity. Reliability refers to the extent to which conceptualization
can be replicated by other clinicians, such as consistency in developing a case conceptualiza-
tion across clinicians with varying levels of expertise. Although in other contexts reliability is
often measured through test-retest reliability, a measure of consistency over time, this is not a
relevant measure for case conceptualizations because conceptualizations are expected to evolve
over time as new information comes to light or the client makes progress. Validity encompasses
content, construct, criterion, and predictive validity.
In the context of case conceptualization, content validity refers to the extent to which the
relevant target and causal variables and the relationships among the variables are included
and represented in the conceptualization (Mumma, 2011). Construct validity refers to the ex-
tent to which a conceptualization relates to theoretically important concepts such as symptom
severity, or the extent to which a conceptualization provides a coherent account of individuals’
presenting problems; construct validity also encompasses discriminant validity. Criterion validity is the extent to which
a conceptualization matches an expert conceptualization, and predictive validity refers to the
extent to which conceptualization predicts outcomes and/or mechanisms of change in therapy.
Whereas reliability studies of the quality of a generated case conceptualization focus on how
reliably a conceptualization can be replicated by another clinician, and validity studies largely
focus on how well a case formulation predicts psychotherapy outcomes, published measures
examining the quality of case conceptualizations have also used various other methods to ex-
amine “quality.” For example, some quality measures are designed to assess clinician skill at
generating a case conceptualization, and other measures examine clinicians’ ability to generate
a conceptualization based on vignette methods. Some case conceptualization scales have also
been designed for training purposes, whereas others are designed for supervision purposes. These
different, although related, aspects of case conceptualization will be the subject of the current
review.
Despite general agreement that it is essential to have a measure to evaluate the quality of case
conceptualizations (Eells, 2010), a recent Delphi survey of professionals reached no consensus
on how best to assess the quality of case conceptualizations (Völlm, 2014). Indeed, approaches
for assessing the quality of case conceptualizations have developed relatively independently
across a range of different settings and client and staff groups, and they differ in how they define
and assess quality (in terms of reliability and validity). Furthermore, case conceptualization
scales have been developed and used for a range of different purposes; some quality measures
have been developed for training purposes as benchmarked either against expert-developed
clinical vignettes or via clinician-generated formulations, and others developed for research
purposes. As a result, researchers, clinicians, and trainers are unclear as to which case concep-
tualization quality measure to use for their specific purpose. Therefore, the aims of this study
are to (a) systematically identify and review published measures to assess the quality of case
conceptualizations, (b) evaluate their relative strengths and weaknesses, (c) identify the context in
which measures identified can be used, and (d) identify areas for future research in the assessment
of case conceptualizations.

Method
We conducted this review in accordance with the Preferred Reporting Items for Systematic
Reviews and Meta-Analyses guidelines, which are consensus-generated best practice guidelines
for review papers (Liberati et al., 2009).

Search Procedure
A literature search was conducted using the electronic databases EMBASE, PubMed, Medline
and PsycINFO. Major texts and recent book chapters on case formulation/conceptualization
were also reviewed. The initial terms case formulation* OR case conceptuali*ation* were used to
retrieve studies. The search was then narrowed by combining case formulation* OR case
conceptuali*ation* with AND and the terms: validity; reliability; quality; assess; evaluat*; standard*; competen*;
measure*; tool; checklist; criteria; scor*; rat*; psychometric propert*; internal consistency. Terms
were searched under ‘keyword’ in OVID (EMBASE, Medline, and PsycINFO) and “all fields”
in PubMed. The limit of “English language” was then applied to the searches and references that
were sourced from Dissertation Abstracts International were removed. The reference manager
software EndNote was used to remove duplicates. The titles and abstracts of the articles were
examined to exclude studies from full-text examination. The remaining articles obtained from
the database search were retrieved in full and assessed further to determine whether the inclusion
criteria were met. For completeness, all authors of included measures were contacted to request
further information regarding updates or additional validation investigations to the measures.
A response was received from all authors and additional material was included in the review,
where relevant.
Figure 1 shows a diagram detailing the flow of studies through the different phases of the
systematic search. In total, the database searches produced 2,617 articles. Duplicates (n = 1,026)
were removed using EndNote reference manager, resulting in 1,590 articles retained. Of the 1,590
articles, 1,561 articles were excluded following title and abstract screening for not meeting the
inclusion criteria. The remaining 28 studies were screened in full text. A total of 21 articles
were excluded because they did not evaluate the quality of case conceptualizations or were
not developed to assess psychological conceptualizations used by psychological therapists, or
both. Three scales were excluded because the original articles were written in Spanish (Caycedo
Espine, Ballesteros de Valderrama, & Novoa Gómez, 2008; Munoz-Martinez & Novoa-Gomez,
2011) and German (Holtforth & Grawe, 2000). The reference lists of all papers retrieved were
scanned and one additional reference that met the inclusion criteria was found. Throughout the
searches, no similar systematic review was identified.

Inclusion and Exclusion Criteria


Studies were reviewed up to and including March 2014. Studies were included if they conformed
to the following criteria: (a) included a rating scale or measure to evaluate the quality of a case
conceptualization; (b) related to psychological, as opposed to psychiatric, case conceptualiza-
tion; (c) used English language; and (d) involved mental health or forensic populations. SB and
KB made decisions about whether articles met inclusion criteria. Articles were included only if
both authors were in agreement.

Criteria for Evaluating Measures of Case Conceptualization


Drawing on our combined clinical experience and knowledge of the case conceptualization
literature, measures were evaluated based on the following: ease of administration, the extent
to which they can be generalized to different populations and settings, the extent to which they
measure the reliability and validity of conceptualizations, and the extent to which the measures
themselves are reliable and valid. Each of these criteria is described below.

Ease of administration. Given time and resource pressures, the length of time and level
of training required to administer a measure is of central importance to health professionals in
clinical practice. Thus, tools used in routine clinical practice are most practical when they are
simple to administer, with time-limited training required.

Generalizability. This domain includes an evaluation of whether the measure can be used
across different mental health or forensic groups, in addition to whether it can be used for
training, research, and clinical practice purposes.

Figure 1. Flow diagram of systematic search: records identified through database searching (n = 2,617); records after duplicates removed (n = 1,590); records screened (n = 1,590), of which 1,561 were excluded for not meeting inclusion criteria; full-text articles assessed for eligibility (n = 28), of which 21 were excluded; articles obtained from reference lists (n = 1); studies included in review (n = 8).

Reliability and validity of conceptualizations. Scales can measure the reliability of a
conceptualization in terms of whether it can be replicated by different clinicians, in addition to
multiple types of validity, including content, construct, criterion, and predictive validity.

Psychometric robustness of case conceptualization scales. This domain includes an
evaluation of whether the measure itself (not the conceptualization) demonstrates sound psy-
chometric properties, including internal consistency, inter-rater reliability, test-retest reliability,
and validity.
Results
Selection and Overview of Measures
Of all the articles reviewed, eight measures met eligibility criteria. Table 1 provides a description
of each case conceptualization measure and provides summary details regarding the purpose, val-
idation process, robustness, strengths, and limitations of each measure. Studies are grouped
in Table 1 as follows: (a) measures that rely on vignettes from which the ‘to-be-rated’ concep-
tualization is developed and hence provide utility in evaluating case conceptualization skills of
trainees and (b) measures that can be applied to clinical practice or for research purposes.
Table 1
Measures Assessing the Quality of Case Conceptualization

Vignette-based ratings

Case Formulation Scoring Criteria (Page et al., 2008)
  Description: Six domains of evaluation (problem list, predisposing factors, precipitating factors, perpetuating factors, provisional conceptualization, and problems potentially hindering treatment and strengths and assets); 0–5 scale for each domain, with a higher score representing a better quality conceptualization.
  Purpose: Assess and benchmark skills in clinical training programs to provide psychology trainees with formative feedback.
  Formulated samples in validation studies: Training vignette.
  Psychometric robustness: Reliability assessed but questionable.
  Measure of quality: Content validity; construct validity; reliability.
  Strengths: Easy to administer; little training required.
  Limitations: Weak psychometric properties.

Scoring of a Case Formulation Method (Dudley et al., 2010)
  Description: Eight component levels of the conceptualization (e.g., stressors/triggers), with items scored at 0 (inaccurate), 1 (theme identified), or 2 (accurate); grounded in a CBT framework.
  Purpose: Assess case conceptualization skills by way of level of clinician agreement with three expert ratings.
  Formulated samples in validation studies: Delusional beliefs; video vignette of a clinical session.
  Psychometric robustness: Only inter-rater reliability established; good inter-rater reliability.
  Measure of quality: Content validity; criterion validity.
  Strengths: Relatively easy to use; clear scoring criteria.
  Limitations: Based on a video vignette and might not generalize to clinical practice.

The Cognitive Behavioral Therapy Case Conceptualization Rating Scale (Haarhoff et al., 2011)
  Description: Four CBT-based categories (problem list, diagnostic, working hypothesis, and treatment planning) rated on a 10-point scale (0 = absent, 10 = excellent); a local expert provides four benchmark case conceptualizations from the same case vignettes; percentage of agreement is calculated.
  Purpose: Used in conjunction with two other rating scales (Eells et al., 1998; Fothergill & Kuyken, 2002) to evaluate the content and quality of case conceptualizations produced by novice CBT clinicians.
  Formulated samples in validation studies: Four clinical case vignettes, two describing major depressive disorder and two describing generalized anxiety disorder.
  Psychometric robustness: Psychometric properties of the scale not assessed.
  Measure of quality: Content validity; criterion validity.
  Strengths: Adequate scoring criteria and differentiation between numbers on the scale.
  Limitations: Based on vignettes and might not generalize to clinical practice; requires psychometric evaluation.

Case Formulation Quality Checklist (McMurran et al., 2012)
  Description: Ten items rated on a 4-point scale; total scores and a single rating of quality are provided; quality ratings vary from "very poor" to "excellent"; uses the five Ps formulation template.
  Purpose: Designed to evaluate the quality of forensic case conceptualizations; designed for training purposes.
  Formulated samples in validation studies: Forensic population; vignette based.
  Psychometric robustness: Acceptable inter-rater reliability, test-retest reliability, and internal consistency.
  Measure of quality: Content validity; construct validity.
  Strengths: Established as reliable; relatively easy to use.
  Limitations: Only applied to forensic case vignettes and might not generalize to other settings.

Measures used for clients seen in ongoing practice or for research purposes

Collaborative Case Conceptualization Rating Scale (Padesky et al., 2011)
  Description: Examines three principles or domains of CBT conceptualization (levels of conceptualization, collaborative empiricism, strengths/resilience focus) and the extent to which competency in these areas is demonstrated in therapy sessions.
  Purpose: Reliably rate the conceptualization process and skill of CBT therapists; supervisors provide formative feedback to clinical trainees or researchers.
  Formulated samples in validation studies: Validated with resistant depression clients; based on a clinician-generated conceptualization.
  Psychometric robustness: Established reliability, inter-rater reliability, and internal consistency.
  Measure of quality: Content validity; convergent validity; face validity; construct validity.
  Strengths: Comprehensive scoring criteria; good to excellent psychometric properties.
  Limitations: Complex to rate; requires training; requires validation with clinical groups other than resistant depression.

Case Formulation Content Coding Method (Eells et al., 1998, 2005)
  Description: To assess content, each clinically relevant category is given one of three codes (absent, somewhat present, clearly present); to assess quality, ratings are made for the conceptualization as a whole, for each major subcategory, and for the complexity of the conceptualization, the degree of inference used, and the precision of language; designed to be theoretically neutral.
  Purpose: Provides a tool for reliably and comprehensively categorizing the information that a clinician uses in conceptualizing a client.
  Formulated samples in validation studies: Fifty-six intake evaluations randomly selected from an outpatient clinic; based on a clinician-generated conceptualization.
  Psychometric robustness: Established inter-rater reliability across content and quality categories; reliability established across raters but not other aspects of validity.
  Measure of quality: Content validity; construct validity.
  Strengths: Comprehensive; applicable to the variety of clients that a clinician might work with.
  Limitations: Time consuming; resource intensive; considerable training involved.

Quality of Cognitive Case Formulation Rating Scale (Fothergill & Kuyken, 2002)
  Description: Overall quality is considered, with a single quality score assigned for each item (1 = very poor, 2 = poor, 3 = good enough, 4 = good).
  Purpose: Measure inferential aspects of a CBT conceptualization and the quality of case conceptualizations.
  Formulated samples in validation studies: Training vignette based on the case of "Anna," a woman with depression and personality disorder.
  Psychometric robustness: Inter-rater reliability assessed, with a good level of agreement obtained.
  Measure of quality: Content validity; construct validity.
  Strengths: Simple and easy to use.
  Limitations: Limited scoring criteria.

Rating the Quality of Case Formulation for Obsessive-Compulsive Disorder (Zivor et al., 2013)
  Description: Six dimensions for which inter-rater reliability could be measured (relevance of information, accuracy, categorization of data within the conceptualization, threat appraisal, conceptualization maps the client's experience according to a CBT understanding, and integration), plus a seventh dimension of the "overall quality of the conceptualization."
  Purpose: Used to examine the effect of CBT training.
  Formulated samples in validation studies: Clients with obsessive-compulsive disorder; clinicians attending a workshop watched a video of an assessment and developed a formulation.
  Psychometric robustness: Inter-rater reliability established.
  Measure of quality: Content validity; construct validity.
  Strengths: Relatively easy to complete.
  Limitations: Further psychometric evaluation needed; limited to use within obsessive-compulsive disorder.

Note. CBT = cognitive behavioral therapy.

The studies were conducted across a number of countries: United States (N = 1), United
Kingdom (N = 6), Australia (N = 1), and New Zealand (N = 1). Sample size ranged from 2 to
115 participants. The majority of studies (N = 8) used a mental health sample and McMurran,
Logan, and Hart (2012) used a forensic mental health sample. Some studies examined psycho-
metric properties of the scale against clinical vignettes (N = 6), and others used a clinical sample
or single case study (N = 2). The majority of studies (N = 6) evaluated cognitive therapy models
of case conceptualization. Results are discussed in terms of each case conceptualization scale
and evaluated against the criteria outlined above.

Vignette-Based Ratings
The Case Formulation Scoring Criteria (CFSC; Page, Stritzke, & Mclean,
2008). The CFSC was developed to assess and benchmark clinical psychology trainee case
conceptualization skills. Content and construct validity are measured through scores of 0–5 for
six domains of evaluation (problem list, predisposing factors, precipitating factors, perpetuating
factors, provisional conceptualization, problems potentially hindering treatment, and strengths
and assets). The CFSC permits feedback to trainees and is simple and easy to complete with
little training required. The rating pertains to one standard case vignette, with a future aim that
there will be normative data for several standardized case vignettes. However, the measure is
not designed to rate conceptualizations developed with clients in routine clinical practice, and
by literature search and contacting the corresponding author, we were unable to identify any
further published standardized case vignettes. The simplicity of the scale and the anchor points,
which do not refer to specific problems or mechanisms, also render the measure rather vague
and thus potentially problematic in terms of inter-rater reliability.
In validating the measure, conceptualizations of first-year trainees were assessed by two
raters: a senior clinical doctoral student and an academic with clinical psychology and teaching
experience. Not surprisingly, the total scores were only moderately correlated (r = .46; p < .05),
suggesting the scale is unreliable in evaluating case formulations. There was also poor inter-
item correlation within the measure, which can be problematic if the overall measure of quality
is included in the assessment because one total score may not be representative of subscale
scores. Bearing in mind the weak psychometric properties, the measure can be used to assess
and benchmark skills in clinical training programs to provide psychologists in training with
formative feedback. The measure could also be used for psychological practitioners in training
and supervision purposes in research studies, which require an assessment of case formulation
skills.
The measure is presented as a generic assessment of case formulation skill and as such could
be applied to assess formulations developed within different therapeutic approaches. However,
CBT criteria are applied in relation to the perpetuating factors domain, and in fact the authors
(Page et al., 2011) acknowledge that the trainee clinical psychologists whose skills the measure was
designed to assess were trained in cognitive behavioral methods.

Scoring of a Case Formulation Method (SCFM; Dudley et al., 2010). Dudley
et al. (2010) used the SCFM to assess agreement between clinicians and experts when formulating
clients’ psychotic beliefs from a video vignette of a clinical session. The scoring manual of the
SCFM contains eight component levels of the conceptualization, each grounded in a CBT
framework (e.g., dysfunctional assumptions or physical symptoms), with each item scored at 0
(inaccurate), 1 (theme identified), or 2 (accurate). Scores are summed to produce a score for each
component level, which are then summed to create further subscales such as the “inferential”
or “stress-vulnerability” subscales. Content and criterion validity of the conceptualization are
measured through the extent to which relevant variables and relationships are present, and how
well conceptualizations match an “expert” conceptualization derived by three experts.
The scale is relatively easy to use because of the clear scoring criteria provided. However,
because the scale is based on a video vignette of one client with delusional beliefs, it cannot
realistically be applied to rate the quality of conceptualizations developed with a range of clinical
presentations found in routine clinical practice. The use of a video vignette increases the clinical

validity of the measure but does necessitate the use of a video-playing device, which may limit
the ease with which the measure can be administered. Inter-rater reliability of the SCFM was
established with three randomly selected conceptualizations being scored independently by three
researchers (kappa > .85). This measure is a good tool for training and research purposes due
to the standardized scoring of the vignette-based material.
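For readers unfamiliar with the agreement statistic reported here, Cohen's kappa corrects raw inter-rater agreement for the agreement expected by chance; this background definition is added for convenience and is not part of Dudley et al.'s (2010) account:

\kappa = \frac{p_o - p_e}{1 - p_e}

where p_o is the observed proportion of agreement between raters and p_e is the proportion of agreement expected by chance. Values above .80 are conventionally interpreted as excellent agreement.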

The Cognitive Behavioral Therapy Case Conceptualization Rating Scale (CBT CC
rating scale; Haarhoff, Flett, & Gibson, 2011). The CBT CC rating scale has been used
in conjunction with two other rating scales, the CFCCM (Eells et al., 1998; Eells, Lombart,
Kendjelic, Turner, & Lucas, 2005) and the QCCFRS (Fothergill & Kuyken, 2002), to evaluate
the content and quality of case conceptualizations produced by novice CBT clinicians in relation
to four case vignettes. The CBT CC rating scale comprises four categories (the “problem list,”
“diagnostic,” “working hypothesis,” and “treatment planning”) that are rated on a 10-point
scale, with anchor points based on the Cognitive Therapy Scale (CTS; Young & Beck, 1980) and
ranging from 0 (absent) to 10 (excellent).
An expert clinician developed the four benchmark conceptualizations based on the vignettes,
and the quality of the participants’ case conceptualizations was assessed by calculating the
percentage of agreement for information matching the categories selected by the expert. The
scale measures quality through content and criterion validity and each of the categories is
relatively well described. Although there are four vignettes, the scale has not been developed
to rate conceptualizations developed in routine clinical practice, thus limiting the scope of
its use. The psychometric properties of the measure were not reported, including concurrent
validity with the CFCCM and the QCCFRS. Although a psychometric evaluation of the scale is
required, the measure could be particularly helpful for clinicians early on in training because of
the comparisons of case conceptualizations available with expert-developed conceptualizations.

Case Formulation Quality Checklist (CFQC; McMurran et al., 2012). The CFQC
was developed to evaluate the quality of forensic case conceptualizations and, more specifically,
to measure the effect of training and consultation on the formulation of typical offending cases on a probation
officer's caseload. Using the five "Ps" formulation template (presenting problem, predispos-
ing factors, precipitating factors, perpetuating factors, and protective/positive factors, also
known as the 5Ps approach), Minoudis et al. (2013), during validation of the measure, created
two fictitious vignettes and asked probation officers to develop a case conceptualization for these
vignettes before and after training. Four 2-hour training sessions were provided for probation
officers over a 6-month period. The checklist comprises 10 items, each rated on a
4-point scale ranging from 1 (does not meet this criterion) to 4 (meets this criterion exceptionally
well), and measures quality through content and construct validity.
The checklist is relatively easy to use because the scoring is guided by a template, the scale
does not require a specific theoretical approach, and the scale comprises a clear checklist and
categorical rating system. The reliability of the scale is well established through inter-rater relia-
bility, test-retest reliability, and internal consistency assessments. Inter-rater reliability between
two consultant clinical psychologists for both vignette one (intraclass correlation [ICC] = .63,
p = .05) and vignette two (ICC = .75, p = .001) was good. Test-retest reliability (re-scored after
1 week) was excellent for rater one (ICC = .85, p = .002) and rater two (ICC = .99, p = .001).
Finally, Cronbach’s alpha (α = .92) demonstrates excellent internal consistency of the checklist.
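As an illustration of the internal consistency statistic reported above, the following short Python sketch computes Cronbach's alpha for a small set of hypothetical CFQC ratings; the numbers are invented for demonstration only and are not data from McMurran et al. (2012) or Minoudis et al. (2013):

    import numpy as np

    # Hypothetical ratings: rows are rated conceptualizations, columns are the 10 CFQC items,
    # each scored from 1 (does not meet this criterion) to 4 (meets this criterion exceptionally well).
    ratings = np.array([
        [3, 4, 3, 2, 3, 4, 3, 3, 2, 3],
        [2, 2, 1, 2, 2, 3, 2, 1, 2, 2],
        [4, 4, 4, 3, 4, 4, 3, 4, 4, 4],
        [1, 2, 1, 1, 2, 1, 2, 1, 1, 2],
    ])

    def cronbach_alpha(item_scores):
        # alpha = k / (k - 1) * (1 - sum of item variances / variance of total scores)
        k = item_scores.shape[1]
        item_variances = item_scores.var(axis=0, ddof=1)
        total_variance = item_scores.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

    print(round(cronbach_alpha(ratings), 2))

An analogous check of inter-rater reliability would compute an intraclass correlation across two raters' total scores rather than across items.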
The five Ps approach has been criticized as a formulation model for two reasons: (a) it does
not require psychological factors to be integrated in a coherent narrative and (b) the template
does not necessarily include the personal meaning of the factors and life events described by a
client (Johnstone et al., 2011). In light of this critique, the CFQC may not be considered a “true”
formulation-driven measure. As yet, the measure has also been applied only against forensic
case vignettes, and as such we cannot be certain about the validity of the measure in mental
health settings. Further evaluation in relation to actual clinical cases and outside the forensic
setting is needed. This scale is particularly relevant for, and specific to, forensic populations and

probation and other professionals working with offenders. However, with further adaptation
and evaluation the measure has the potential to be extended to mental health settings.

Measures Used for Clients Seen in Ongoing Practice or for Research Purposes
Collaborative Case Conceptualization Rating Scale (CCC-RS; Padesky, Kuyken,
& Dudley, 2011). The CCC-RS was developed to enhance CBT training in case conceptual-
ization and for use in research trials assessing the relationship between competence in concep-
tualization and therapy outcome. The measure assesses the content and construct validity of
CBT conceptualizations, but it could also be used to assess the reliability of different clinicians’
conceptualizations, compare the extent to which clinicians formulate a case in line with an ex-
pert, or assess the extent to which conceptualizations predict outcomes. Raters examine three
principles or domains of CBT case conceptualization (levels of conceptualization, collaboration
and empiricism, and strengths/resilience focus). The measure consists of 14 items scored on a
4-point Likert-style scale (0 to 3) to assess the presence and degree of a number of specific case
conceptualization-related skills in relation to established criteria for competency.
In comprehensively examining key components of the CBT model, the CCC-RS is relatively
complex to rate because of the level of detail raters need to consider in making a rating.
Furthermore, appropriate training to obtain high levels of inter-rater reliability is required.
Data used to evaluate the psychometric properties of the measure were derived from a large
trial with adults experiencing resistant depression. The CCC-RS has shown excellent internal
consistency (α = .94), reliability (split-half = .82), and inter-rater reliability (ICC = .84; Kuyken
et al., 2016). Evaluation of the scale further demonstrated adequate face, content, and convergent
validity. Furthermore, Gower (2011) found the measure to have moderate convergent validity
with a general measure of CBT competence (r = .44, p = .002). The scale has been validated in the
context of CBT for resistant depression, but it could also be potentially used to assess the quality
of CBT conceptualizations across a range of clinical groups. Although time consuming, which
might be prohibitive in routine clinical settings, the scale is particularly helpful for supervising
trainees, and because of its excellent psychometric properties, it is a useful tool for research
purposes. Further research is needed to extend the findings to other clinical populations.

Case Formulation Content Coding Method (CFCCM; Eells et al., 1998, 2005).
Initially developed in 1998, the CFCCM was designed to be theoretically neutral so that infor-
mation from any theoretical perspective on formulation could be organized. The CFCCM was
further developed (Eells et al., 2005) to reliably and comprehensively categorize the information
that a clinician uses when developing a psychotherapy conceptualization. The CFCCM contains
four major content categories: symptoms and problems, precipitating stressors or events, pre-
disposing life events/stressors, and inferred mechanism (the clinician’s hypothesis of the cause
of the person’s current difficulties).
The CFCCM also produces quality ratings for the conceptualization as a whole, including each
major subcategory, the complexity of the conceptualization, the degree of inference used, and the
precision of language. The scale assesses the content and construct validity of a conceptualization
and could be used to assess the reliability of different clinicians’ conceptualizations, compare the
extent to which clinicians formulate a case in line with an expert, or assess the extent to which
conceptualizations predict outcomes. The scale is therefore comprehensive, but the measure
focuses on the content of the conceptualization and does not include a measure of the process
of developing the conceptualization.
The measure is also limited in terms of ease of administration as it has extensive scoring
criteria that require time and training to complete and as such might be prohibitive in routine
clinical settings. However, because conceptualizations developed for a range of clients were
assessed using the CFCCM, it is applicable to the variety of clients a clinician might expect to
see on their caseload. Because the scale is not based on any one theoretical model, it can be
used across a multitheoretical service with a range of clinical presentations. However, it may not
be particularly relevant for clinicians who use more problem-specific, model-driven therapeutic

approaches. The coding manual showed good inter-rater reliability (mean kappa = .86) across
content and quality categories.
Quality of Cognitive Case Formulation Rating Scale (QCCFRS; Fothergill &
Kuyken, 2002). The QCCFRS was developed to assess the quality of CBT case concep-
tualizations. The authors defined criteria for what constitutes a “good enough” conceptualiza-
tion on the basis of clinical experience, leading to the operationalization of the properties of
“good,” “good enough,” “poor,” and “very poor” conceptualizations. Quality is defined as a
parsimonious, coherent, and meaningful account of a client’s presenting problems, focusing on
construct validity as a measure of quality. It could be used to assess the reliability of different
clinicians’ conceptualizations, compare the extent to which clinicians formulate a case in line
with an expert, or assess whether conceptualizations predict outcomes.
The QCCFRS is simple and easy to use as it has four clearly defined anchor points with a
strong emphasis on how well key elements of the formulation are integrated, but this results in
limited scoring criteria despite the extensive descriptions within each category. As the scale was
developed to assess CBT conceptualizations, it is applicable to clinicians training in, or practicing,
a problem-specific CBT model of therapy and might therefore not be relevant for clinicians who
use a more integrative theoretical approach with clients with complex presentations. After
development of the scale, two raters assessed inter-rater reliability, and an acceptable level of
agreement was obtained (kappa = .85), but as yet other psychometric properties have not been
reported.
Rating the Quality of Case Formulation for Obsessive-Compulsive Disorder
(RQCFO; Zivor, Salkovskis, Oldfield, & Kushnir, 2013). The RQCFO was developed
from the QCCFRS to examine the effect of training in CBT conceptualizations for obsessive-
compulsive disorder (OCD). The scale consists of six dimensions based on the content of the
conceptualization and a seventh dimension of the “overall quality of the conceptualization,”
which serves as a measure of construct validity.
Given that this scale was based on the QCCFRS, it is relatively easy to complete in that it has
clear anchor points and comprehensively defined dimensions. However, the RQCFO is limited to
use within OCD and is therefore not easily applicable in routine clinical services or to clients
who present with a range of mental health problems. Zivor et al. (2013) found the correlation
between two blind raters on the “overall quality” dimension to be significant (r = .70, p < .005).
The scale has not undergone rigorous psychometric evaluation, limiting conclusions that can be
drawn about the validity of the measure.

Discussion
The purpose of this study was to systematically review and evaluate available measures of
case conceptualization. Because case conceptualization is identified as a core skill for clinical
psychologists (Johnstone et al., 2011), it is important to have reliable and valid measures that can
assess the quality of case conceptualizations in a variety of settings and across clinical groups.
Case conceptualization is viewed as a “first principle” of CBT (Beck, 1985; Johnstone & Dallos,
2013). As such, most published measures designed to assess the quality of case conceptualizations
are grounded in the CBT tradition and this ended up being the focus of the current review; the
majority of quality scales reviewed were grounded in a CBT framework (Dudley et al., 2010;
Fothergill & Kuyken, 2002; Haarhoff et al., 2011; Padesky et al., 2011; Page et al., 2008; Zivor
et al., 2013), with one scale theoretically neutral (Eells et al., 2005) and another embedded in
the five Ps formulation template (McMurran et al., 2012).
Drawing on our combined clinical experience and knowledge of the case conceptualization
literature, the measures were evaluated in terms of their ease of administration, generalizability,
ability to evaluate the reliability and validity of case conceptualizations, and the psychometric
robustness of the scales themselves. Given the limited data available, recommendations for each
of the measures reviewed are cautiously provided. Indeed, on the basis of this review, there is no
single measure that has been validated across a range of different settings. Nevertheless, certain
measures can be recommended for use in particular contexts.

The CCC-RS (Padesky et al., 2011) appears to be the most reliable and valid measure for
evaluating CBT conceptualizations when the rater has access to live therapeutic material. It
can be used for both clinical and research purposes, and although it has been validated in
resistant depression, it could potentially be used to measure the quality of conceptualizations
in other contexts. The CFCCM (Eells et al., 2005) can also be used to rate the quality of
conceptualizations in relation to different clients the therapist may be working with and has
reasonable inter-rater reliability. The QCCFRS (Fothergill & Kuyken, 2002) is a further measure
that can be used to assess conceptualizations with a range of different clients. However, it
is limited in terms of its scoring criteria. Further work is needed to assess its psychometric
properties.
Of particular emphasis in this review is the ability of measures to address quality in terms of the
extent to which the conceptualization is both reliable and valid. According to the
DCP (Division of Clinical Psychology) guidelines on case formulation (Johnstone et al., 2011),
conceptualization in clinical psychology should be grounded in psychological theory, should be
evidence-based, and, among other important principles, should draw from a range of psycho-
logical models and causal factors. Adherence to these principles is tested through the capacity
of the scale to measure content, construct, criterion, and predictive validity.
In addition, reliability of the conceptualization is assessed through its ability to be replicated
by other clinicians. The majority of scales focus on the content, construct, and in some cases
criterion validity, and although several of the measures could be used to assess predictive
validity, none of the measures addresses the issue of predictive validity in the quality ratings.
The ability of a conceptualization to predict not only outcomes but also suboutcome measures
is important. That is, in the context of a CBT model, it is important to test and validate the extent
to which cognitive behavioral conceptualizations predict not only the effect of therapy (i.e., outcome)
but, more specifically, the effect of therapy on the mechanisms driving client change processes
(Mumma, 2011). Therefore, greater attention to
predictive validity in the case conceptualization literature, in addition to other aspects of validity
and reliability, is needed.
Measures varied in the extent to which their psychometric properties were evaluated and
reported. The CCC-RS (Padesky et al., 2011) and the CFQC (McMurran et al., 2012) were
the most robustly tested measures. Both measures were assessed for internal consistency and
inter-rater reliability. The CCC-RS was assessed for content, convergent, face, and construct
validity, and the CFQC was assessed for test-retest reliability. In establishing a psychometrically
sound measure, it is important that researchers report on the reliability and validity of the scales,
including internal consistency, inter-rater reliability, test-retest reliability, and all aspects of
validity.
A key issue arising from this review is the lack of a quality team conceptualization measure.
Team conceptualization is in keeping with the clinical psychology profession’s wider remit to
work at a team, service, and organizational level (Johnstone et al., 2011). As such, there is
a growing practice of using conceptualization within multidisciplinary teamwork, in both in-
patient and community settings. Our systematic search identified no measures that assess
the quality of conceptualizations developed collaboratively with teams.
Given clinical psychologists' role and the rising use of team conceptualizations
in clinical practice (Johnstone et al., 2011), it would be beneficial to develop a reliable and valid team
conceptualization measure that can cover a wide range of possible settings and
client groups. Also, “transference and counter-transference” are often downplayed in concep-
tualizations, yet such factors are especially relevant in team conceptualizations because sharing
the conceptualization may enable team members to reflect on their feelings about the client
and identify potential counter-transference issues (Meaden & Van Marle, 2008). Indeed, these
factors more broadly are currently not taken into account in existing case conceptualization
scales.
Limitations
Although some recommendations have been made regarding measures that are suitable for use in
different contexts, this review is limited by the small number of studies that have robustly
validated the quality of case conceptualizations in a clinical setting. Also, studies vary widely in
validation methods and sampling and are therefore not easily comparable. Reliability and validity
are recognized as important components of a psychometrically sound measure, but we acknowledge
that these factors do not provide a complete picture of case conceptualization and it is important
that measures are validated against a clinical conceptualization session or sessions, particularly
when they are being used in a new population (Meades & Ayers, 2011). In addition, studies that
have evaluated the psychometric properties of conceptualization measures have largely focused on
elements such as inter-rater reliability and some aspects of validity, but on the whole papers do
not report discriminant validity, an important element of construct validity.
Furthermore, the current review may be limited by the inclusion of studies in English language
only; quality case conceptualization measures may have been missed due to the limitation of
the search strategy. Although we attempted to provide an objective evaluation of the quality
of each case conceptualization tool, it is possible that an element of subjectivity influenced the
findings of the review and the conclusions drawn. That is, the authors drew on their combined
clinical experience and knowledge of the case conceptualization literature to evaluate the quality
of measures reviewed as, to the authors’ knowledge, no objective assessment tool was available
on which to evaluate the included measures.
It is also worth noting that some scales are not published or were difficult to acquire, and
some do not have scoring manuals to enable consistent scoring of items. This demonstrates that the
rigor and systematicity of the methods researchers use when developing, evaluating, and publishing
case conceptualization scales are significantly limited.

Recommendations for Future Research


This review has demonstrated that there are some measures that assess the quality of case
conceptualizations, but overall there is a paucity of psychometrically evaluated tools to
assess case conceptualization. Researchers and clinicians can use this assessment to determine
which of the measures is most helpful for their specific purpose. Relatively little empirical work
has focused on developing and validating conceptualization scales. Therefore, further work is
needed in three key areas.
First, researchers need to assess the reliability and validity of existing scales and modify
the scales or improve the scoring manuals for those that fall short on reliability and validity.
Second, in some instances it may also be necessary to develop and validate new scales. In
particular, there is a paucity of measures that assess case conceptualization in schools of therapy
outside of CBT. For example, case conceptualization is integral to cognitive analytical therapy
and psychodynamic therapy, but there are no validated conceptualization scales specific to these
therapeutic approaches. The validation process should be rigorous and assess all aspects of
reliability and validity, including inter-rater reliability between two or more raters, test-retest
reliability, internal consistency, and validity. To improve the scientific integrity of the validation
process, scales should be evaluated both by the teams of researchers who develop the measures
and by independent researchers who do not have a vested interest in the tool.
Third, there is a need to develop a measure that reliably and validly evaluates team conceptu-
alizations, an approach that is increasingly advocated in both national policy documents (Johnstone
et al., 2011) and clinical settings. Team formulations involve a psychological therapist working
with a team of mental health workers involved in a client's psychiatric care to develop an under-
standing of an individual client's psychological needs, which can be used to inform
treatment plans. There is preliminary empirical evidence to suggest that team formulation can
improve staff and patient relationships and reduce staff burnout (Berry et al., 2015).
Depending on the scale, the measures described in the current paper can be used for clin-
ical, research, and training purposes. From a clinical perspective, quality measures should be
applicable to a wide range of clinical cases, although vignette-based measures may have utility
in the context of therapist training. From a research perspective, it is particularly important
that measures have robust psychometric properties, with particular attention given to inter-rater
reliability. With regard to training, case conceptualization can be used to train clinicians either indi-
vidually or as a group. Through taping clinical sessions, individual clinicians can generate their

own case formulation based on live clinical material, and in supervision the supervisor can inde-
pendently generate a case formulation, which will allow for comparison between formulations
and discussion around the development of the formulation, further clinical assessment required
and strategies for intervention. Training can also occur in groups either by using live clinical
material or through rating vignettes and comparing and contrasting formulations either between
clinicians or as rated against expert or “benchmarked” formulations.
Mumma (2011) suggested that perhaps a more robust way to measure the validity of case
conceptualizations is to measure whether the mechanism of change hypothesized in the con-
ceptualization was tested, or is testable. As psychology is an empirical science, it seems that
measures of the quality of case conceptualizations should assess processes and mechanisms
embedded in the conceptualization. Given the widespread use of cognitive behav-
ioral conceptualization, we would particularly recommend research empirically evaluating the
hypothesized relationship between cognitions and distress specified in the case conceptualization
(Bieling & Kuyken, 2003). However, as case conceptualization is not limited to CBT-informed
practice and is a central process across therapeutic schools, measures that specifically assess the
quality of case conceptualization across theoretical perspectives are needed.

Conclusion
Case conceptualization is a core skill in most schools of psychotherapy. Clinically speaking, the
clinician's assessment and the information derived from this assessment largely drive the quality
of case conceptualization. As we have outlined above, measuring the quality of case concep-
tualizations produced by clinicians is difficult for a range of reasons. We hope this paper will
help clinicians, researchers, and trainers determine which case conceptualization quality measure
will be suitable for their specific purpose. As the published literature currently stands, there is
no single measure that has been validated across a range of settings. However, whilst further
research is required to more closely examine the psychometric properties of measures to ensure
robust quality measures are available across settings and client groups, the Case Conceptualisa-
tion Coding Rating Scale, Case Formulation Content Coding Method, and Case Formulation
Quality Checklist, have to date been most robustly tested in the published literature.

References
Anderson, N. B. (2006). Evidence-based practice in psychology. American Psychologist, 61(4), 271–285.
Aston, R. (2009). A literature review exploring the efficacy of case formulations in clinical practice. What
are the themes and pertinent issues? The Cognitive Behavior Therapist, 2(2), 63–74.
Beck, A. T. (1985). Anxiety disorders and phobias: A cognitive perspective. New York: Basic Books.
Bentall, R. (2006). Madness explained: Why we must reject the Kraepelinian paradigm and replace it with a 'complaint-orientated' approach to understanding mental illness. Medical Hypotheses, 66(2), 220–233.
Berry, K., Barrowclough, C., & Wearden, A. (2009). A pilot study investigating the use of psychological
formulations to modify psychiatric staff perceptions of service users with psychosis. Behavioral and
Cognitive Psychotherapy, 37(1), 39–48.
Berry, K., Haddock, G., Kellett, S., Roberts, C., Drake, R., & Barrowclough, C. (2015). Feasibility of a ward-
based psychological intervention to improve staff and patient relationships in psychiatric rehabilitation
settings. British Journal of Clinical Psychology.
Bieling, P. J., & Kuyken, W. (2003). Is cognitive case formulation science or science fiction? Clinical Psychol-
ogy: Science and Practice, 10(1), 52–69.
Caston, J. (1993). Can analysts agree? The problems of consensus and the psychoanalytic mannequin: I. A
proposed solution. Journal of the American Psychoanalytic Association, 41(2), 493–511.
Caycedo Espine, C. C., Ballesteros de Valderrama, B. P., & Novoa Gómez, M. M. (2008). Analysis of a
clinical case formulation protocol from psychological well-being categories. Universitas Psychologica,
7(1), 231–250.
Chadwick, P., Williams, C., & Mackenzie, J. (2003). Impact of case formulation in cognitive behavior therapy for psychosis. Behavior Research and Therapy, 41(6), 671–680.
Davidson, K. M. (2008). Cognitive–behavioral therapy for personality disorders. Psychiatry, 7(3), 117–120.
Dudley, R., Park, I., James, I., & Dodgson, G. (2010). Rate of agreement between clinicians on the content
of a cognitive formulation of delusional beliefs: The effect of qualifications and experience. Behavioral
and Cognitive Psychotherapy, 38(2), 185–200.
Eells, T. D. (2010). The unfolding case formulation: The interplay of description and inference. Pragmatic
Case Studies in Psychotherapy, 6(4).
Eells, T. D., Kendjelic, E. M., & Lucas, C. P. (1998). What's in a case formulation? Development and use of a content coding manual. The Journal of Psychotherapy Practice and Research, 7(2), 144.
Eells, T. D., Lombart, K. G., Kendjelic, E. M., Turner, L. C., & Lucas, C. P. (2005). The quality of
psychotherapy case formulations: A comparison of expert, experienced, and novice cognitive-behavioral
and psychodynamic therapists. Journal of Consulting and Clinical Psychology, 73(4), 579.
Fothergill, C. D., & Kuyken, W. (2002). The quality of Cognitive Case Formulation Rating Scale (Unpub-
lished manuscript). University of Exeter, UK.
Ghaderi, A. (2011). Does case formulation make a difference to treatment outcome? Forensic Case Formu-
lation, 61–79.
Gower, P. (2011). Therapist competence, case conceptualisation and therapy outcome in cognitive behavioral
therapy (Unpublished manuscript). University of Exeter, UK.
Haarhoff, B. A., Flett, R. A., & Gibson, K. L. (2011). Evaluating the content and quality of cognitive-
behavioral therapy case conceptualisations. New Zealand Journal of Psychology, 40(3), 104–114.
Hendricks, P. S., & Thompson, J. K. (2005). An integration of cognitive-behavioral therapy and interpersonal
psychotherapy for bulimia nervosa: A case study using the case formulation method. International
Journal of Eating Disorders, 37(2), 171–174.
Holtforth, M., & Grawe, K. (2000). Questionnaire for the analysis of motivational schemas. Zeitschrift für Klinische Psychologie: Forschung und Praxis, 29(3), 170–179.
Honos-Webb, L., & Leitner, L. M. (2001). How using the DSM causes damage: A client’s report. Journal
of Humanistic Psychology, 41(4), 36–56.
Horowitz, L., & Rosenberg, S. (1994). The consensual response psychodynamic formulation: Part 1. Method
and research results. Psychotherapy Research, 4(3-4), 222–233.
Horowitz, L. M., Rosenberg, S. E., Ureño, G., Kalehzan, B. M., & O’Halloran, P. (1989). Psychodynamic for-
mulation, consensual response method, and interpersonal problems. Journal of Consulting and Clinical
Psychology, 57(5), 599.
Ingham, B. (2011). Collaborative psychosocial case formulation development workshops: A case study with
direct care staff. Advances in Mental Health and Intellectual Disabilities, 5(2), 9–15.
Johnstone, L., & Dallos, R. (2013). Formulation in psychology and psychotherapy: Making sense of people’s
problems. New York: Routledge.
Johnstone, L., Whomsley, S., Cole, S., & Oliver, N. (2011). Good practice guidelines on the use of psycho-
logical formulation. Leicester: British Psychological Society.
Kuyken, W., Beshai, S., Dudley, R., Abel, A., Görg, N., Gower, P., . . . Padesky, C. A. (2016). Assessing
competence in collaborative case conceptualization: Development and preliminary psychometric prop-
erties of the Collaborative Case Conceptualization Rating Scale (CCC-RS). Behavioral and Cognitive
Psychotherapy, 44(2), 179–192.
Liberati, A., Altman, D. G., Tetzlaff, J., Mulrow, C., Gøtzsche, P. C., Ioannidis, J. P., . . . Moher, D. (2009). The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: Explanation and elaboration. BMJ, 339, b2700. doi: 10.1136/bmj.b2700
Luborsky, L. (1977). Measuring a pervasive psychic structure in psychotherapy: The core conflictual rela-
tionship theme. In N. Freedman & S. Grand (Eds.), Communicative structures and psychic structures
(pp. 367–395). New York: Springer.
McMurran, M., Logan, C., & Hart, S. (2012). Case formulation quality checklist. Nottingham: Institute of Mental Health.
Meaden, A., & Van Marle, S. (2008). When the going gets tougher: The importance of long-term supportive
psychotherapy in psychosis. Advances in Psychiatric Treatment, 14(1), 42–49.
Meades, R., & Ayers, S. (2011). Anxiety measures validated in perinatal populations: A systematic review. Journal of Affective Disorders, 133(1), 1–15.
Minoudis, P., Craissati, J., Shaw, J., McMurran, M., Freestone, M., Chuan, S. J., & Leonard, A. (2013). An evaluation of case formulation training and consultation with probation officers. Criminal Behavior and Mental Health, 23(4), 252–262.
Mumma, G. (2011). Validity issues in cognitive-behavioral therapy case formulation. European Journal of
Psychological Assessment, 27(1), 29–49.
Munoz-Martinez, A. M., & Novoa-Gomez, M. (2011). Reliability and validation of a behavioral model of
clinical behavioral formulation. Universitas Psychologica, 10(2), 501–519.
Padesky, C. A., Kuyken, W., & Dudley, R. (2011). The Collaborative Case Conceptualization Rating Scale
(CCC-RS) Coding manual, version 5. Retrieved from
Page, A., Stritzke, W. G. K., & McLean, N. J. (2008). Toward science-informed supervision of clinical case formulation: A training model and supervision method. Australian Psychologist, 43(2), 88–95.
Persons, J. B. (2006). Case formulation–driven psychotherapy. Clinical Psychology: Science and Practice,
13(2), 167–170.
Persons, J. B., Beckner, V. L., & Tompkins, M. A. (2013). Testing case formulation hypotheses in psychother-
apy: Two case examples. Cognitive and Behavioral Practice, 20(4), 399–409.
Persons, J. B., Mooney, K. A., & Padesky, C. A. (1995). Interrater reliability of cognitive-behavioral case formulations. Cognitive Therapy and Research, 19(1), 21–34.
Seitz, P. F. (1966). The consensus problem in psychoanalytic research. In L. A. Gottschalk, & A. H. Auerbach
(Eds.), Methods of research in psychotherapy (pp. 209–225). New York: Springer.
Völlm, B. (2014). Case formulation in personality disordered offenders: A Delphi survey of professionals. Criminal Behavior and Mental Health, 24(1), 60–80.
Young, J., & Beck, A. T. (1980). Cognitive Therapy Scale: Rating manual. Unpublished manuscript.
www.beckinstitute.org.
Zivor, M., Salkovskis, P. M., Oldfield, V. B., & Kushnir, J. (2013). Formulation in cognitive behavior therapy
for obsessive–compulsive disorder: Aligning therapists, perceptions and practice. Clinical Psychology:
Science and Practice, 20(2), 143–151.
