

A Graph Paints 1,000 Words, But Are They Valid? A Systematic Review of SCRD Graphs

1Cian L. Brown, Corey Peltier, David Y. Lee, Fanee R. Webster, Amal A. Shabibi
1Department of Educational Psychology, University of Oklahoma

Author Note

Correspondence concerning this article should be addressed to Cian L. Brown,

https://orcid.org/0000-0003-0837-0295, Department of Educational Psychology, University of

Oklahoma, 820 Van Vleet Oval, Norman, OK 73019. Email: clbrown@ou.edu.



Abstract

i. Objective: Single-case research designs are a useful methodology to investigate the effects of

treatments within counseling. A core component of the design is the presentation of data

collected repeatedly on a time-series graph. Research in the fields of special education and behavior analysis suggests that most graphs do not adhere to reporting guidelines and that specific graphical characteristics can impact decisions made by visual analysts.

ii. Method: In this pre-registered (https://doi.org/10.17605/OSF.IO/FU7GV) project, we conducted a systematic review of 42 journals in the field of counseling. A total of 50 single-case studies, comprising 272 graphs, were included in our analysis.

iii. Results: Most of the graphs did not meet recommended guidelines for graph construction.

iv. Conclusions: We provide a discussion on how graph construction may impact visual analysis

and suggestions for graph construction to enhance interpretability.

Keywords: counseling, single-case research design, graph construction, time-series

graphs, systematic review



A Graph Paints 1,000 Words, But Are They Valid?

The use of single-case research designs (SCRDs) is increasing in counseling research and

practice to establish evidence for treatment and intervention effectiveness. Although SCRD has

been widely used in behavioral disciplines, it is used sparingly in counseling research.

Traditionally, counseling research has relied upon research designs using parametric tests to determine statistical significance. These designs require large sample sizes and usually a randomized control or comparison group to contrast between-group differences and determine treatment effectiveness. However, criticisms of the feasibility, generalizability, and practical applicability of these research approaches have led researchers to explore alternative modalities that use smaller samples and consider clinical significance (Lenz, 2015).

Johnson and Cook (2019) highlight SCRDs' use for both confirmatory and exploratory analyses; researchers can therefore examine existing practices and/or explore new interventions' impact on behavioral and mental health outcomes using a single subject (i.e., individuals, groups, families). Interventions in SCRD thus focus primarily on what works for individual clients, or clinical significance, as opposed to the ‘average’ group, which can obscure non-respondents to initially assessed outcomes or treatment (Mirza et al., 2017; Scuffham et al., 2010). Thus, SCRDs provide clinicians and researchers with a viable means to identify and explore agents of change while measuring experimental outcomes, without confronting the daunting demands of achieving statistical power.

A unique feature of SCRDs is the researcher’s ability to measure client outcomes multiple

times prior to an intervention to establish a baseline as a measure of control. Participants serve

as their own control, eliminating the need for control groups usually required for between-

subject designs. Once baseline has been established, the intervention or treatment condition is

introduced, the researcher continues to measure the dependent variable over several timepoints

to assess change in the client’s outcome. This design characteristic of SCRD highlights the

therapeutic or behavioral change a single subject experiences due to the intervention by



comparing their treatment data to collected baseline information. However, if a subject is

unresponsive to the intervention, the researcher has flexibility to modify the treatment.

Typically, treatment effectiveness is determined by evaluating the level, trend, and variability

within each phase of data collection and by considering the level change, immediacy of effect, and amount of data overlap between phases (Ledford et al., 2018).
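To make these visual analysis features concrete, the sketch below is a minimal illustration, assuming hypothetical data and helper functions that are not drawn from any reviewed study: it computes the within-phase level and trend and the between-phase overlap for a simple two-phase (baseline/treatment) data set.

```python
# Illustrative sketch (hypothetical values, not from the reviewed studies):
# simple within-phase and between-phase summaries a visual analyst might consider.
import statistics

baseline = [7, 8, 6, 9, 8]       # repeated measures before the intervention
treatment = [5, 4, 4, 3, 2, 2]   # repeated measures during the intervention

def level(phase):
    """Level: the central tendency of a phase."""
    return statistics.mean(phase)

def trend(phase):
    """Trend: slope of a least-squares line fit through the phase data."""
    n = len(phase)
    x_mean = (n - 1) / 2
    y_mean = statistics.mean(phase)
    num = sum((x - x_mean) * (y - y_mean) for x, y in enumerate(phase))
    den = sum((x - x_mean) ** 2 for x in range(n))
    return num / den

def overlap(phase_a, phase_b):
    """Proportion of phase B points falling within the range of phase A."""
    lo, hi = min(phase_a), max(phase_a)
    return sum(lo <= y <= hi for y in phase_b) / len(phase_b)

print(level(baseline), level(treatment))   # level change between phases
print(trend(baseline), trend(treatment))   # within-phase trends
print(overlap(baseline, treatment))        # amount of overlapping data
```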

Prior to 2015, counseling research using SCRD was sparse. Woo et al. (2016) explored

the use of SCRDs in counseling research across the 20 American Counseling Association (ACA)

journals from 2003 to 2014, which yielded 7 studies across 5 journals. Notably, in 2015, the

Journal of Counseling & Development released a special issue highlighting SCRD and its

applicability within the counseling field. Since then, counseling researchers have applied SCRDs in their work, reinforcing SCRD as a practical approach across clinical and research settings for examining treatment effectiveness without having to employ non-treatment groups or conduct large-sample parametric tests. This methodology substantially diverges from other design options (i.e., group design approaches) through the collection of time-series data and visual analysis of the graphed data to illustrate change and treatment effectiveness, hallmark characteristics of SCRD. Recent work has found large

variability in the characteristics of time-series graphs reported in single-case designs (Kubina et

al., 2017; Kubina et al., 2021; Ledford et al., 2019; Peltier et al., 2021; Peltier et al., 2022a;

Peltier et al., 2022b).

Framework for Graph Construction Variables

There are a host of decisions that must be made when constructing a time-series graph.

Dart and Radley (2018) provided a framework for researchers and clinicians that classifies graphical elements as either aesthetic altering or analysis altering. Aesthetic altering elements change the way a graph looks but, to date, have no evidence suggesting that manipulating them changes the decisions made by a person evaluating the graph.

Much like APA style for writing and reporting results in manuscripts, standardizing aesthetic

altering elements enhances graph reading. Analysis altering elements are those with evidence

that, when manipulated, alter the decisions made by a person evaluating the graph. Thus,

particular attention must be paid to analysis altering elements because these impact conclusions

one draws from the time-series data.

There are two potential analysis altering elements for SCRD time-series graphs: ordinate scaling and the data points per x- to y-axis ratio (DPPXYR). For ordinate scaling, Dart and Radley (2017) found that when graphs displayed a truncated ordinate (i.e., setting the y-max to 80%, 60%, or 40%), Type I errors increased with the severity of the truncation. The DPPXYR is a metric that captures the ratio of the x:y axis lengths while also considering the density of the data points plotted along the x-axis; it is computed as (x-axis length / y-axis length) / number of data points that could be plotted along the x-axis. Radley and colleagues (2018) suggested an ideal DPPXYR between 0.14 and 0.16; graphs with a DPPXYR less than 0.14 led to inflated Type I errors. We highlight these as potentially analysis altering because more replications are needed; in a conceptual replication, Peltier et al. (2022a) found no evidence of ordinate scaling or DPPXYR impacting visual analysts' decisions. We refer readers to Figure 1 for examples of graphs displaying manipulated ordinate scaling and DPPXYR.
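As a worked illustration of the formula above, the following minimal sketch assumes hypothetical axis lengths and data counts (these values are not drawn from any reviewed graph) and checks the standardized x:y ratio and the DPPXYR against the recommended ranges.

```python
# Illustrative sketch (hypothetical values): computing the standardized x:y ratio
# and the DPPXYR described above, then checking them against the recommended
# ranges (x:y between 1.33 and 1.6; DPPXYR between 0.14 and 0.16).

def standardized_xy_ratio(x_length, y_length):
    """Ratio of x-axis length to y-axis length, measured in the same units (e.g., cm)."""
    return x_length / y_length

def dppxyr(x_length, y_length, n_points):
    """(x-axis length / y-axis length) / number of plottable x-axis data points."""
    return standardized_xy_ratio(x_length, y_length) / n_points

ratio = standardized_xy_ratio(x_length=8.0, y_length=5.0)     # 8:5 = 1.6
density = dppxyr(x_length=8.0, y_length=5.0, n_points=10)     # (8/5)/10 = 0.16

print(1.33 <= ratio <= 1.6)       # True: within the recommended x:y range
print(0.14 <= density <= 0.16)    # True: within the recommended DPPXYR range
```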



Figure 1

Example of Graph Characteristics Impacting Perceived Data Patterns



Characteristics of SCRD Graphs

To date, we are unaware of a comprehensive review evaluating time-series graphs

published in the counseling literature – hence the purpose for the current project. However,

there have been several reviews of graph construction in the fields of behavior analysis and

special education.

Kubina and colleagues (2017) systematically sampled time-series graphs (n = 4,313)

reported in SCRDs across 11 behavior analytic journals from each journal's inception through

2011. The authors identified several characteristics that were inconsistent with current recommendations: (a) failing to label the x- and y-axes, (b) misusing tick marks on the x- and y-axes, (c) inconsistently scaling the axes, and (d) an x-axis length to y-axis height ratio that did not adhere to the recommended range (i.e., between 8:5 and 4:3). Kubina and colleagues (2021)

extended this work by evaluating time-series graphs (n = 1,228) published in 28 special

education journals. Results paralleled their review of behavior analytic journals, and the authors

highlighted the need to train and disseminate guidelines for time-series graph construction.

A separate research team conducted a similar project. Ledford and colleagues (2019)

evaluated time-series graphs (n = 1,123) reported in SCRDs across 11 prominent special

education journals. Like Kubina and colleagues (2017; 2021), the authors found large variation

in x:y axis ratio – both across types of SCRDs (e.g., multiple-baseline design vs. ABAB) and

within each design type. Ledford and colleagues (2019) aimed to investigate what may be

influencing varied graph construction and the lack of adherence to recommended guidelines.

They surveyed 50 editorial board members across the journals to determine if they preferred

graphs with an x:y ratio that (a) fit recommended guidelines, (b) was most prevalent in

published graphs, or (c) neither. Most respondents reported preferences for x:y axis ratios that diverged from both the recommended guidelines and what was most prevalent in the published literature.

Three reviews (Peltier et al., 2021; Peltier et al., 2022a; Peltier et al., 2022b) extended

prior reviews by focusing on the two potentially analysis altering elements: (a) ordinate

truncation (see above) and (b) DPPXYR (see above). Each review focused on evaluating a

decade’s worth of time-series graphs reporting SCRDs in specific subfields of special education:

Peltier and colleagues (2021) evaluated 315 graphs in the learning disabilities journals; Peltier et

al. (2022a) evaluated 258 graphs in the emotional or behavioral disorders journals; and Peltier

et al. (2022b) evaluated 2,675 graphs in autism journals. Findings across reviews reached

similar conclusions. The authors found few graphs displayed a truncated ordinate, which is promising because ordinate truncation has been shown to increase Type I errors. However, the x:y axis ratio and DPPXYR varied dramatically across graphs, with few meeting current recommendations.

Purpose

Graph construction impacts readability of graphs and may distort the visual analysis

process for SCRDs. Prior reviews have provided evidence and actionable suggestions to improve

graph construction in behavior analysis and special education. To date, there has not been a

review of time-series graphs in counseling, so little is known about the variability and level of adherence to recommendations for time-series graph construction. The purpose of this review

is to provide descriptive information on characteristics of time-series graphs in SCRDs

published in counseling journals. Our research question follows:

What percent of SCRD time-series graphs in counseling articles follow suggested

guidelines for graph construction?

We hypothesize that less than 10% of time-series graphs will have a DPPXYR value within the recommended range (i.e., 0.14 to 0.16) and that less than 10% of time-series graphs will have an x:y axis ratio within the recommended range (i.e., 8:5 to 4:3). We will also explore additional graph

characteristics as noted by prior literature.



Method

The current project was pre-registered through Open Science Framework

(https://doi.org/10.17605/OSF.IO/FU7GV). We conducted a systematic hand search of 42

journals to provide descriptive data related to the graphical characteristics of SCRDs conducted

in counseling.

Search Procedure

Sample Selection

Like Woo et al. (2016), we selected ACA-affiliated journals (n = 20) to include in the review. However, we expanded our inclusion criteria to additional counseling-related journals considered relevant to the profession, as determined through a prior survey of counselor educators and counseling practitioners and by considering journals associated with regional associations and professional organizations common among licensed counselors. This resulted in 42 journals being included in the review. Journal issues were screened from the first issue published in 2015 through the last issue published in 2021. This period was selected based upon the release of the special issue of the Journal of Counseling & Development (see above) and extends the SCRD content analysis conducted by Woo et al. (2016), who searched from 2003 through 2014. The preregistration (https://doi.org/10.17605/OSF.IO/FU7GV) provides the full list of included journals.

Inclusion and Exclusion Criteria

To be included, studies needed to meet the following criteria: (1) published or assigned to

an issue in years 2015 to 2021; (2) published in one of the specified journals above; (3) used a

SCRD; and (4) included a time-series graph. Articles were excluded if they (a) were published

before 2015 or after 2021; (b) were published in a journal outside the ones specified above; (c)

used a group design, case study design, or any other type of methodology not fitting a single-

case research design; or (d) did not provide a time-series graph.



Screening Procedures

Two graduate students completed screening activities. The lead authors provided a

training consisting of the following activities: (a) providing a clear definition for each inclusion

and exclusion criterion, (b) providing examples and non-examples for each inclusion criterion,

(c) providing opportunities for questions and clarifications, and (d) providing practice trials for a sample of articles. Following the training, the graduate students screened 6,950 documents by evaluating titles and abstracts. A total of 6,832 articles were excluded.

The next stage consisted of full-text screening. The same graduate students screened 118 full texts against the inclusion criteria. We identified 50 studies that met inclusion criteria (see Figure 2

for flow diagram depicting search process).



Figure 2

Flow Diagram Depicting Article Identification


[Flow diagram] Identification: records identified (n = 6,950). Screening: records screened (n = 6,950); records excluded (n = 6,832); full texts assessed for eligibility (n = 118). Included: studies included in review (n = 50): JCD (n = 13), IJPT (n = 12), JCAC (n = 4), CORE (n = 3), JSC (n = 3), JSGW (n = 3), JCMH (n = 2), PSC (n = 2), TPC (n = 2), IJAC (n = 1), JAOC (n = 1), JCLA (n = 1), JCRP (n = 1), JHC (n = 1), JMHC (n = 1).
Note. JCD = Journal of Counseling and Development; IJPT = International Journal of Play
Therapy; JCAC = Journal of Child & Adolescent Counseling; CORE = Counseling Outcome Research
& Evaluation; JSC = Journal of School Counseling; JSGW = Journal for Specialists in Group Work;
JCMH = Journal of Creativity in Mental Health; PSC = Professional School Counseling; TPC = The
Professional Counselor; IJAC = International Journal for the Advancement of Counselling; JAOC =
Journal of Addictions & Offender Counseling; JCLA = Journal of Counselor Leadership &
Advocacy; JCRP = Journal of Counseling Research & Practice; JHC = Journal of Humanistic
Counseling; JMHC = Journal of Mental Health Counseling

Inter-Rater Agreement for Screening Procedures. To evaluate agreement on

screening decisions, 12 of the 42 journals (i.e., 28.6% of journals), comprising 2,359 documents

(34.5% of total documents) were independently screened by both graduate students. We used

the formula agreements / (agreements + disagreements) to compute inter-rater

agreement (IRA). An agreement consisted of either (a) both students excluding the study or (b)

both students retaining the study for full text screening. Mean IRA across journals was 99.6%

(SD = 0.5%, Range = 98% to 100%). The 12 disagreements were resolved by the lead authors

assessing the documents to determine fit to inclusion criteria.
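For illustration, the short sketch below applies the agreements / (agreements + disagreements) formula to a set of hypothetical screening decisions; the data and function name are assumptions for the example, not the project's actual screening records.

```python
# Illustrative sketch (hypothetical screening decisions): point-by-point
# inter-rater agreement, IRA = agreements / (agreements + disagreements).

def inter_rater_agreement(decisions_a, decisions_b):
    """Proportion of documents on which two screeners made the same decision."""
    agreements = sum(a == b for a, b in zip(decisions_a, decisions_b))
    disagreements = len(decisions_a) - agreements
    return agreements / (agreements + disagreements)

# True = retain for full-text screening, False = exclude (hypothetical data)
screener_1 = [True, False, False, True, False, False]
screener_2 = [True, False, False, False, False, False]

print(inter_rater_agreement(screener_1, screener_2))  # 5/6 ≈ 0.83
```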

Data Extraction

We consulted previous research (Kubina et al., 2017; Ledford et al., 2019; Peltier et al.,

2021; Peltier et al., 2022a; Peltier et al., 2022b) to develop a coding guide. The coding guide

focused on capturing variables related to source information, experimental design

characteristics, and graph characteristics. The pre-registered coding guide included four

variables related to source information, nine variables related to experimental design, and 21

variables related to graph characteristics. Through coding activities, we identified five additional

variables that we added to the coding guide. These variables were not included in the pre-

registration and thus were not decided a priori. Please refer to the following link for operational

definitions for each variable (https://doi.org/10.17605/OSF.IO/3YAJ8).

Inter-Rater Agreement for Data Extraction

The research team consisted of two faculty members who held doctoral degrees and had experience conducting SCRDs, and three graduate students with coursework related to SCRDs.

For the training, a coding guide was developed with each coded variable identified and a

definition to support accurate coding. The team discussed each variable and identified examples

and non-examples from published graphs. Each coder independently coded two graphs, and agreements and disagreements were discussed. Coders were above 90% accurate on the training

graphs and were released to independent coding.



IRA for coding was estimated by computing exact agreement for each coded variable using the formula agreements / (agreements + disagreements). One team of coders

independently double coded 17 of 50 (34%) studies including 81 of 272 graphs (29.8%). The

second team of coders independently double coded 49 of 272 graphs (18%) that were randomly

selected from 29 of 50 studies (58%). In total, we double coded 130 of 272 graphs (47.8%)

printed in 46 of 50 studies (92%). For the first team of coders, mean IRA at the variable level

was 89.3% (SD = 18.7%, Range = 30% to 100%) and mean IRA at the graph level was 89.3% (SD

= 4.4%; Range = 78.9% to 100%). For the second team of coders, mean IRA at the variable level

was 94% (SD = 11.2%, Range = 53% to 100%) and mean IRA at the graph level was 94% (SD =

4.9%; Range = 78.9% to 100%). For the first team of coders the two variables with low IRA

included ordinate truncation and whether the graph included a break in the data path across the

phase change line. For the second team of coders the variable with the lowest IRA was whether

the graph included labels for each phase. All disagreements were resolved by the two lead

authors.

Data Analysis

Once articles meeting criteria were screened and selected for inclusion, we examined all graphs with a vertical and a horizontal axis displaying longitudinal data. Each graph had to contain a maximum of one data point per data series at each horizontal-axis interval corresponding to the subject (excluding scatterplot graphs). A unit of time or sessions had to be present on the horizontal axis along with a quantitative value on the vertical axis. To answer our research question, we examined each included graph individually, whether it appeared alone or in the context of other graphs. Each graph was scored for (a) the essential structure of the y- and x-axes, which concerns the representation of quantity and time (e.g., axis labels, minimum and maximum value range, ratio, unit of time, sessions, truncation); (b) the presence or absence of time-series graph quality features (e.g., axis label markers, tick marks present, tick mark style, scaling distance, data points, data paths, condition change lines, condition break changes, condition labels, data aligned with tick marks, figure captions); and (c) common graph characteristics for SCRD (e.g., y-axis floated, data on y-axis, horizontal lines present, equal session intervals, number of DVs present, number of participants present) (see Kubina et al., 2017; Table 1). Additionally, we examined the standardized x:y ratio (x-axis length / y-axis length) and the DPPXYR, computed as (x-axis length / y-axis length) / number of data points that could be plotted along the x-axis, to obtain the plot density.
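A minimal sketch of how one graph might be scored against a few of these checks is given below; the field names, example values, and threshold checks are illustrative assumptions, not the authors' actual coding guide.

```python
# Minimal sketch (hypothetical field names and values, not the authors' coding
# guide): scoring one graph against a few structural and quality checks.
from dataclasses import dataclass

@dataclass
class GraphRecord:
    ordinate_labeled: bool
    abscissa_labeled: bool
    tick_marks_outside: bool
    ordinate_truncated: bool
    x_length_cm: float
    y_length_cm: float
    plottable_x_points: int

def score(graph: GraphRecord) -> dict:
    """Return True for each check the graph passes."""
    xy_ratio = graph.x_length_cm / graph.y_length_cm
    dppxyr = xy_ratio / graph.plottable_x_points
    return {
        "ordinate labeled": graph.ordinate_labeled,
        "abscissa labeled": graph.abscissa_labeled,
        "tick marks outside axes": graph.tick_marks_outside,
        "ordinate not truncated": not graph.ordinate_truncated,
        "x:y ratio in 1.33-1.6": 1.33 <= xy_ratio <= 1.6,
        "DPPXYR in 0.14-0.16": 0.14 <= dppxyr <= 0.16,
    }

example = GraphRecord(ordinate_labeled=True, abscissa_labeled=True,
                      tick_marks_outside=False, ordinate_truncated=True,
                      x_length_cm=12.0, y_length_cm=4.0, plottable_x_points=20)
print(score(example))
```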

Results

Our systematic search yielded a total of 272 graphs reported in 50 articles printed in 15 counseling journals. Per article, the median number of graphs included was 4 and the mean number of graphs included was 4.66 (SD = 3.50; Range = 1-15).

Essential Structure

One essential structure of a time-series graph involves an ordinate axis (i.e., vertical) and an abscissa axis (i.e., horizontal). All graphs evaluated included these axes. Two other essential features involve meaningful labels for the ordinate and abscissa axes so viewers know the nature of the dependent variable and the time scale displayed along the abscissa. We identified 43 graphs (15.8%) that did not provide a label for the ordinate axis and 6 graphs (2.2%) that did not provide a label for the abscissa. See Table 1 for results.

Table 1

Essential Graph Characteristics

Variable                      Frequency                                                    Percent of Error

Label Ordinate Axis           No: 43; Yes: 229                                             15.80%
Label Abscissa Axis           None: 6; Observation: 2; Other: 130; Session: 98; Week: 36   2.20%

Quality Features

We coded seven features related to the ordinate axis. Two variables were specifically

related to the use of tick marks, and these had high rates of divergence from recommendations:

(a) no use of tick marks (45.2% error) and (b) style of tick mark used was not outside graph

space (57.35% error). We coded two variables related to the scaling of the ordinate axis: all graphs used equal spacing when scaling the axis (0% error), but few graphs floated the ordinate axis to prevent minimum values from sitting on the abscissa axis (98.2% error). Last, we coded whether data points fell on the ordinate axis (0% error) and whether the graph included horizontal lines at major ordinate axis units (65.8% error).

We coded five variables related to the abscissa axis. Three variables were specifically related to the use of tick marks, and these had high error rates: (a) no use of tick marks (52.6% error), (b) style of tick mark used was not outside the graph space (45.6% error), and (c) data points did not align vertically with the tick marks on the abscissa (72.05% error). Two variables related to scaling of the abscissa: (a) the scaling distance was not equal interval along the abscissa (24.6% error) and (b) session labels were not consistent with the time unit of the measurement system (17.6% error).

We coded five variables related to the data points and paths for each phase of the

experiment. We identified few errors for three variables: (a) data points were not present on the graph (5.9% error), (b) data paths were not visible or provided for each condition (6.6% error), and (c) the graph did not include vertical phase change lines to signal alterations in the experimental phase (8.45% error). Two features had higher error rates: (a) not providing a break in the data path across phase change lines (40.8% error) and (b) failing to label conditions on the graph (31.25% error). See Table 2 for full results.



Table 2

Quality Graph Features

Variable                      Frequency                                       Percent of Error

Ordinate Axis
Tick Marks Present            No: 123; Yes: 149                               45.20%
Tick Mark Style               None: 123; Inside: 11; Cross: 9; Mixed: 13; Outside: 116    57.35%
Scaling Distance              Incorrect: 0; Correct: 272                      0%
Axis Float                    No: 267; Yes: 5                                 98.20%
Data on Axis                  No: 272; Yes: 0                                 0%
Horizontal Lines              No: 93; Yes: 179                                65.80%

Abscissa Axis
Tick Marks Present            No: 143; Yes: 129                               52.60%
Tick Mark Style               None: 143; Inside: 5; Cross: 0; Mixed: 0; Outside: 124      45.60%
Data Align Tick Mark          No: 196; Yes: 76                                72.05%
Scaling Distance              Incorrect: 67; Correct: 205                     24.60%
Session Intervals             Consistent: 224; Mixed: 48                      17.60%

Data Paths
Data Points Present           No: 16; Yes: 256                                5.90%
Data Paths                    No: 18; Yes: 254                                6.60%
Condition Change Line         No: 23; Yes: 249                                8.45%
Break Condition Change        No: 111; Yes: 161                               40.80%
Condition Labels              No: 85; Yes: 187                                31.25%

Other
Number of DVs Present         1: 233; 2: 10; 3: 19; 4: 8; 5: 0; 6: 2
Number of Participants Data   1: 239; 2: 12; 3: 12; 4: 5; 5: 3; 11: 1

Axes Comparison

We coded two other elements related to the number of data paths presented on the

graph. First, we coded the number of dependent variables presented on the individual graph

space. Most graphs included one dependent variable (n = 233, 85.7%) although we did identify

graphs that displayed two dependent variables (n = 10), three dependent variables (n = 19), four

dependent variables (n = 8), and six dependent variables (n = 2). Second, we coded the number

of participants' data presented on the individual graph space. Most graphs included one participant's data (n = 239, 87.9%), although we did identify graphs that displayed two participants' data (n = 12), three participants' data (n = 12), four participants' data (n = 5), five participants' data (n = 3), and 11 participants' data (n = 1).

We coded two variables with potential evidence suggesting they are analysis altering elements (i.e., ordinate truncation, DPPXYR) and a related element, the x:y ratio. We identified 178 graphs (65.4%) that displayed a scale that truncated the minimum and/or maximum ordinate values given the measurement system employed in the experiment. We identified 64 graphs with a DPPXYR value less than 0.14 and 162 graphs with a DPPXYR greater than 0.16; thus, 83.1% were outside the recommended guidelines (i.e., 0.14 to 0.16). Last, we coded the standardized x:y ratio: 11 graphs were less than 1.33 and 248 were greater than 1.6; thus, 95.2% were outside the recommended guidelines (i.e., 1.33 to 1.6). See Table 3 for full results.

In addition to reporting the percentage within recommended guidelines, we evaluated

the distribution of standardized x:y values and the DPPXYR. The mean of the standardized x:y

values was 3.025 (95% CI = 2.875, 3.175; SD = 1.26) and the median was 2.56. The distribution

of standardized x:y values was moderately positively skewed (skewness = 0.65) with the tails of

the distribution not accounting for a significant amount of the data set (kurtosis = -0.54). The

mean of DPPXYR values was 0.23 (95% CI = 0.22, 0.25; SD = 0.14) and the median was 0.18.

The distribution of DPPXYR values was moderately positively skewed (skewness = 1.00) with

the tails of the distribution accounting for a significant amount of the data set (kurtosis = 1.51).

See Figure 3 for violin plots displaying the distribution of standardized x:y and DPPXYR values.
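For readers who wish to reproduce this kind of summary, the sketch below computes the mean, a normal-approximation 95% confidence interval, median, skewness, and excess kurtosis with SciPy; the DPPXYR values shown are hypothetical and are not the study data.

```python
# Illustrative sketch (hypothetical DPPXYR values, not the study data):
# descriptive summary statistics of the kind reported above.
import numpy as np
from scipy import stats

dppxyr_values = np.array([0.05, 0.10, 0.15, 0.18, 0.22, 0.30, 0.45, 0.60])

mean = dppxyr_values.mean()
sd = dppxyr_values.std(ddof=1)          # sample standard deviation
n = dppxyr_values.size
half_width = 1.96 * sd / np.sqrt(n)     # normal-approximation 95% CI half-width

print(mean, np.median(dppxyr_values), sd)
print(mean - half_width, mean + half_width)
print(stats.skew(dppxyr_values), stats.kurtosis(dppxyr_values))  # excess kurtosis
```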

Table 3

Potential Analysis Altering Axes Comparisons

Variable                      Frequency                                                      Percent of Error

Ordinate Truncation           No: 94; Yes: 178                                               65.40%
Standardized X:Y Ratio        Less than 1.33: 11; 1.33 to 1.6: 13; More than 1.6: 248        95.20%
DPPXYR                        Less than 0.14: 64; 0.14 to 0.16: 46; More than 0.16: 162      83.10%

Figure 3

Violin Plots Displaying the Distribution of Standardized X:Y Ratio and DPPXYR

Discussion

While SCRD research is not new, the field of counseling is using the design at an

increasing rate as evidenced by the number of publications over the past six years. As

researchers continue to use SCRD in counseling, it is imperative to consider the hallmark characteristics of consistently implementing, analyzing, and reporting the measures and results of the design. Regarding our hypotheses, less than 10% of the time-series graphs had a standardized x:y axis ratio within the recommended range (i.e., 8:5 to 4:3), with 95.2% of graphs falling outside it. However, our hypothesis that less than 10% of graphs would have a DPPXYR value within the recommended range (i.e., 0.14 to 0.16) was not supported; 83.1% fell outside the recommended range, meaning 16.9% of reported time-series graphs fell within it. Additionally, 65.4% of the graphs displayed a truncated ordinate (i.e., a shortened y-axis), which can skew the visual representation of data. These results suggest that, in a majority of the included SCRD studies, graph construction may have altered the decisions and results drawn from visual analysis. Results for the standardized x:y ratio parallel prior reviews (Kubina et al., 2017; 2021; Ledford et al., 2019; Peltier et al., 2021; 2022a; 2022b), which identified that more than 90% of graphs were not within the standardized x:y recommendations. However, substantially more graphs in the counseling literature displayed a truncated ordinate compared to prior reviews. This is likely influenced by the measurement systems used, with counseling more frequently selecting standardized instruments that rely on self-report data.

In addition to the recommended ranges for axes comparison, we explored additional

graph characteristics to assess adherence to recommended procedures for time-series graph

construction, known as essential structure and quality features. Regarding the ordinate axis, nearly half of the graphs did not present tick marks, and across all graphs fewer than half (42.6%) presented tick marks outside the axis as recommended. A large percentage of the graphs were truncated, did not float the y-axis, did not align data with tick marks, and displayed horizontal lines through the graph. For the abscissa axis, similar percentages were found regarding tick marks being present and properly displayed outside the axis. Notably, almost a quarter of the graphs did not use a proper scaling distance between session/time intervals. Regarding data paths, most graphs presented data points and paths correctly with condition change lines present. However, 30% to 40% of graphs did not display condition labels or present a break in the data path at condition changes.

Implications

Because the visual analysis process and graph readability are imperative to SCRD

research, evaluating graph characteristics and assessing adherence to graph construction

standards is important for informing future research. The reported graph characteristics indicate that counseling researchers using SCRD methodologies could benefit from a supportive guide for constructing and presenting SCRD graphs. Such a guide, outlining adherence to graph construction standards, would also be a valuable resource for future readers and reviewers when visually inspecting time-series graphs. Additionally, journals may want to provide editorial guidance or additional information on graph construction standards beyond file type and resolution. Future research may consider performing a content analysis to better understand the information presented in SCRD graphs, which could inform procedural guidelines for SCRD graph construction in the counseling field. For instance, Ledford et al. (2019) argue for consistent representation of time on the x-axis since the term ‘sessions’ is not an ordinal representation of time.

Conclusion

The present study described the characteristics of 272 time-series graphs in 50 published SCRD studies across 15 counseling journals to understand the variability and level of adherence to recommendations for graph construction. Graph construction impacts the readability of graphs and may distort the visual analysis process for SCRDs. The results indicated several areas for improvement in graph construction and the visual display of data for counseling researchers and practitioners. The greatest areas for improvement include the inclusion and use of tick marks with data aligned to them, floating the y-axis, consistent y-axis scaling that avoids truncation, and scaling the axes to the recommended DPPXYR and standardized x:y ratios.

References

Dart, E. H., & Radley, K. C. (2017). The impact of ordinate scaling on the visual analysis of

single-case data. Journal of School Psychology, 63, 105-118.

https://doi.org/10.1016/j.jsp.2017.03.008

Dart, E. H., & Radley, K. C. (2018). Toward a standard assembly of linear graphs. School

Psychology Quarterly, 33(3), 350-355. https://doi.org/10.1037/spq0000269

Johnson, A. H., & Cook, B. G. (2019). Preregistration in single-case design research. Exceptional

Children, 86(1), 95–112. https://doi.org/10.1177/0014402919868529

Kubina, R. M., Kostewicz, D. E., Brennan, K. M., & King, S. A. (2017). A critical review of line

graphs in behavior analytic journals. Educational Psychology Review, 29(3), 583-598.

https://doi.org/10.1007/s10648-015-9339-x

Kubina, R. M., Jr., Kostewicz, D. E., King, S. A., Brennan, K. M., Wertalik, J., Rizzo, K., &

Markelz, A. (2021). Standards of graph construction in special education research: A

review of their use and relevance. Education & Treatment of Children, 44(4), 275–290.

https://doi.org/10.1007/s43494-021-00053-3

Ledford, J., Lane, J., & Severini, K. (2018). Systematic use of visual analysis for assessing

outcomes in single case design studies. Brain Impairment, 19(1), 4-17.

https://doi.org/10.1017/BrImp.2017.16

Ledford, J. R., Barton, E. E., Severini, K. E., Zimmerman, K. N., & Pokorski, E. A. (2019). Visual

display of graphic data in single case design studies: Systematic review and expert

preference analysis. Education and Training in Autism and Developmental Disabilities,

54(4), 315–327. https://www.jstor.org/stable/26822511

Lenz, A. S. (2015). Using single-case research designs to demonstrate evidence for counseling

practices [Editorial]. Journal of Counseling & Development, 93(4), 387–393.

https://doi.org/10.1002/jcad.12036

Mirza, R. D., Punja, S., Vohra, S., & Guyatt, G. (2017). The history and development of N-of-1

trials. Journal of the Royal Society of Medicine, 110(8), 330-340.

https://doi.org/10.1177/0141076817721131

Peltier, C., Morano, S., Shin, M., Stevenson, N., & McKenna, J. W. (2021). A decade review of

single-case graph construction in the field of learning disabilities. Learning Disabilities

Research & Practice, 36(2), 121-135. https://doi.org/10.1111/ldrp.12245

Peltier, C., McKenna, J. W., Sinclair, T. E., Garwood, J., & Vannest, K. J. (2022a). Brief report:

Ordinate scaling and axis proportions of single-case graphs in two prominent EBD

journals from 2010 to 2019. Behavioral Disorders, 47(2), 134-148.

https://doi.org/10.1177/0198742920982587

Peltier, C., Muharib, R., Haas, A., & Dowdy, A. (2022b). A decade review of two potential

analysis altering variables in graph construction. Journal of Autism and Developmental

Disorders, 52, 714–724. https://doi.org/10.1007/s10803-021-04959-0

Radley, K. C., Dart, E. H., & Wright, S. J. (2018). The effect of data points per x- to y-axis ratio on visual analysts' evaluation of single-case graphs. School Psychology Quarterly, 33(2), 314-322. https://doi.org/10.1037/spq0000243

Scuffham, P. A., Nikles, J., Mitchell, G. K., Yelland, M. J., Vine, N., Poulos, C. J., ... & Glasziou,

P. (2010). Using N-of-1 trials to improve patient management and save costs. Journal of

General Internal Medicine, 25(9), 906-913. https://doi.org/10.1007/s11606-010-1352-7

Woo, H., Lu, J., Kuo, P., & Choi, N. (2016). A content analysis of articles focusing on single-case

research design: ACA journals between 2003 and 2014. Asia Pacific Journal of

Counselling and Psychotherapy, 7(1-2), 118-132.

https://doi.org/10.1080/21507686.2016.1199439
