
Table of Contents


Chapter 1: Asking Important (and Interesting) Questions ............................... 1

Chapter 2: Creating Measures Based on Questions ........................................ 10

Chapter 3: Collecting Data to Answer Questions ............................................ 23

Chapter 4: Organizing and Displaying Data ..................................................... 34

Chapter 5: Describing the “Typical” Case ....................................................... 57

Chapter 6: Assessing Differences Among Cases ............................................... 81

Chapter 7: Using Sampling Distributions to Make Statistical Inferences ........ 107

Chapter 8: Estimating Population Values from Sample Data .......................... 135

Chapter 9: Testing Claims or Predictions ....................................................... 160

Chapter 10: Measuring the Association Between Two Qualitative Variables .. 199

Chapter 11: Analyzing the Variation Within and Between Group Means ......... 227

Chapter 12: Assessing the Relationship Between Quantitative Variables ........ 252

Chapter 13: Assessing the Relationships Among Multiple Variables ............... 284

Appendix A: Glossary of Key Terms in Analyzing Criminological Data ............... 315

Appendix B: Statistical Tables:

Normal Distribution ......................................................... 331
T-Distribution .................................................................. 332
Chi-Square Distribution ................................................... 334
F- Distribution ................................................................. 335

Appendix C: Notetaking Guides (Chapters 1-12) ................................. 336

Analyzing Criminological Data Chapter 1

CHAPTER 1:
Asking Important (and Interesting) Questions

This chapter begins our description of the basic foundation for analyzing criminological data. It
focuses on developing research questions in criminal justice and some basic concepts
associated with criminological research. After completing this chapter, you will be able to
define and describe the concepts listed below. You will also have the knowledge necessary to
achieve the following learning objectives (LO) and apply these skills.

Concepts
- Research question
- Hypothesis
- Data
- Observations
- Qualitative and quantitative

Learning Objectives (LO)
LO1: Create three types of research questions
LO2: Produce hypotheses from research questions
LO3: Explain where data come from and how they are used to answer criminal justice questions
LO4: Differentiate between qualitative and quantitative data

Introduction

Those who make decisions within the criminal justice system have tremendous power. Criminal
justice majors know that this decision-making is called discretion. The discretionary powers of
those who work in this system were outlined in the following flowchart prepared by the
President’s Commission on Law Enforcement and the Administration of Justice in 1967.

Copyright @ 2017 1


Each “event” outlined in the flowchart represents the outcome of discretionary powers. For
example, consider the first “event” listed under “entry into the system”: reported and observed
crime. Whether or not crime is reported depends on the discretionary powers of victims and
witnesses. As cases move through the system, police use their discretionary powers to
determine which cases to investigate and which suspects to arrest, prosecutors decide whether
or not to file charges against those arrested, judges decide whether or not these charges will be
dismissed, parole officers decide whether to charge parolees with a technical violation, and so
forth.

Students majoring in criminal justice devote their academic studies to understanding these
“events” and the decisions made by the people involved in them (e.g., criminals, victims,
witnesses, police, prosecutors, defense attorneys, judges, correctional officers). In order to
study and understand these events and related processes, we must ask questions.

LO1: Create three types of research questions

“Crime” is the very first “event” listed in the flowchart we examined. A crime must occur before
a case is brought into the criminal justice system. One goal of criminologists is to understand
crime events. As criminal justice researchers and students, if we hope to solve crime problems
(e.g., reduce the amount of crime that occurs in society), we must ask specific questions to
learn more about these events. We call these research questions.

Research Questions: Questions that guide investigations into events or processes and help us to solve problems.

As you investigate problems, you can ask three different types of research questions: descriptive, relational, and causal. If we are interested in understanding more about robberies in order to prevent them, we might ask:

• How many robberies occur each month?
• What types of weapons are used to commit robberies?
• How much money or property is taken in the average robbery?

These are descriptive research questions. Descriptive research questions are asked to learn
more about a problem’s characteristics. They help us to understand what is happening. We ask
these questions to learn about a specific characteristic of a concept (numbers of robberies;
weapons used in robberies; damages associated with robberies). Now, what about the
following questions about robberies:

• How do neighborhoods with low and high numbers of robberies differ?
• What gender is more likely to become a robbery victim?
• Do robberies occur more often at different times of the day?


These are relational research questions. Relational research questions are asked to learn more
about the conditions under which the problem tends to occur. They help us to understand
when or in what situations the problem is most likely to occur. We ask these questions to learn
about the relationship between two concepts (e.g., neighborhoods and robberies; gender and
robberies; time of day and robberies).

How about the following questions about robbery?

• What impact does increasing police presence have on robberies?
• Will changes in lighting affect the number of robberies in convenience stores?
• Does the length of prison sentences for convicted robbers influence robbery rates?

These are causal research questions. Causal research questions attempt to determine the
effect that something will have (or has had) on the problem we are investigating. They help us
to understand what causes the problem and what can be changed to reduce the problem. We
ask these questions to learn more about the impact of one concept on another (impact of
police on robberies; impact of lighting on robberies; impact of prison sentences on robberies).

As you can see, questions increase in complexity as we move from descriptive to relational to
causal research questions. We usually ask relational questions after answering descriptive
questions, and ask causal questions after answering relational questions. This sequence helps
us to build knowledge and, eventually, solve problems.

Descriptive: characteristics of the problem
Relational: conditions surrounding the problem
Causal: factors that impact the problem



Practice Applications for Learning Objective 1 (LO1):
Identify descriptive, relational, and causal research questions in these examples
(A. Descriptive, B. Relational, C. Causal):

1. Is DNA evidence more common in some types of homicide than others?
2. What proportion of homicides lead to an arrest?
3. Does the increased use of the death penalty decrease a state's homicide rate?
4. How many cases of wrongful convictions occur in the U.S. each year?
5. Does the risk of wrongful conviction vary by the defendant's race?
6. Does the type of evidence (e.g., physical vs. eyewitness testimony) influence the risks of wrongful convictions?

Correct Answers: Q1(B), Q2(A), Q3(C), Q4(A), Q5(B) and Q6(C)

LO2: Produce hypotheses from research questions



Criminologists spend a great deal of time and energy challenging assumptions. For a very long
time in the U.S., it was widely believed that women could not serve effectively as police
officers. To “test” this assumption, researchers have tracked statistics related to female officer
employment and performance.

Criminologists examined the role of women in policing by starting with research questions,
including descriptive, relational, and causal questions. Here are some examples:

Descriptive: What is the gender distribution (males vs. females) of police officers in the U.S.?
Relational: What types of police departments are more likely to hire female officers?
Causal: What impact will changes in the gender distribution of officers have on police use
of force against suspects?



When attempting to answer questions, researchers begin by testing hypotheses. A hypothesis is a prediction. Hypotheses represent a researcher's best guess regarding the answer to a research question.

Hypothesis: A prediction concerning the answer to a research question.

The following represent hypotheses that stem from the questions above:

HD: There are more male than female U.S. police officers (i.e., a hypothesis from a descriptive
research question).

HR: Urban police departments are more likely than rural police departments to hire female
officers (i.e., a hypothesis from a relational research question).

HC: Police departments with higher proportions of female officers will have lower rates of force
used against suspects than police departments with lower proportions of female officers
(i.e., a hypothesis from a causal research question).


As you see, there is a direct relationship between research questions and hypotheses.
Hypotheses are nothing more than educated guesses concerning the answer to a
research question. Hypotheses, however, are more specific than research questions. They clearly
describe an expected result that can be measured.

Research Question: What types of police departments are more likely to hire female officers?

Hypothesis 1: Urban police departments are more likely than rural police departments to hire female police officers.
Hypothesis 2: Police departments in cities with higher crime rates are more likely to hire female officers than others.
Hypothesis 3: Larger police departments are more likely than smaller police departments to hire female police officers.


Once proposed, research studies are then designed to “test” these hypotheses.

Practice Applications for Learning Objective 2 (LO2):
Identify a hypothesis that derives from each of the following research questions:

1. How will the implementation of new theft prevention features in cars impact auto theft rates?
   A. Auto thefts declined after lo-jack systems were available for purchase by the public.
   B. More cars were stolen in Las Vegas than any other city in 2008.
   C. Theft prevention features are being installed in almost every car sold by manufacturers this year.

2. How has the introduction of DNA evidence influenced the number of guilty pleas?
   A. DNA evidence is collected in an increasingly larger proportion of violent crimes in the U.S.
   B. Guilty pleas in the U.S. have decreased over the past two years.
   C. Guilty pleas in the U.S. have increased following the introduction of DNA evidence.

3. What is the relationship between the number of police officers per capita and a city's crime rate?
   A. As the number of police officers per capita increases, crime rates decrease slightly.
   B. The number of police officers per capita is similar across all large U.S. cities.
   C. City crime rates in the U.S. have generally decreased over the last decade.

Correct Answers: Q1(A), Q2(C), and Q3(A)

LO3: Explain where data come from and how they are used to answer criminal
justice questions

Like any investigator, we must collect evidence to test hypotheses and answer research questions. A researcher's evidence comes in the form of data. Data means "pieces of information" and is plural for datum (the term used for a single piece of information).

Data: Pieces of information that are collected to test hypotheses and answer research questions.

Data are obtained by making observations of the event or condition that we are interested in. For example, if we wanted to learn more about officer-involved shootings in order to reduce this type of violence, researchers would make observations of these events. During these observations, we would collect data by noting important information (e.g., what was the gender/race of the officer and gender/race of the suspect, did the suspect have a weapon, how many witnesses were present).

Observation: The act of gathering information.

If observations are made by watching shootings occur (e.g., personally watching officer shootings on-scene), this would be considered primary data collection. However, primary data collection for this research topic can be difficult since officer-involved shootings are relatively rare. Instead, we may decide to engage in secondary data collection and review data already prepared by police departments concerning these events. These secondary data will contain information based on observations made by those who were on-scene at the time or conducted a subsequent investigation, but we (the researchers) did not personally collect the data.

Whenever we obtain data by directly observing an event or process, or if we directly ask people questions about their experiences (e.g., through interviews or surveys), we are engaging in primary data collection. If we rely on data that stem from observations or reports collected by others, then we are engaging in secondary data collection.


Practice Applications for Learning Objective 3 (LO3):
Identify whether each of the following observations represents primary or secondary data collection (A. Primary data, B. Secondary data):

1. Surveying crime victims about their treatment within the criminal justice system.
2. Reading court transcripts to determine the factors that influence the nature and length of criminal sentences.
3. Analyzing maps of crime locations to identify the most dangerous places in a city.

Correct Answers: Q1(A), Q2(B), and Q3(B)

LO4: Differentiate between qualitative and quantitative data


Data come in many different forms, and different types of data are needed to answer different
types of questions. When preparing to test hypotheses using data, you must first determine
whether you are working with quantitative or qualitative data.

As a general rule, when data can be rank-ordered, you are working with quantitative data.
When you are dealing with numbers that represent a meaningful value (e.g., age, number of
years of employment, income, number of prior arrests, population) or categories that can be
placed in order from highest to lowest (e.g., never-sometimes-often, low-medium-high), you
are working with quantitative data. Quantitative data represent observations that measure how
often something occurs or how much of something exists.

Qualitative data represent observations of qualities rather than quantities (e.g., gender, city,
race, type of weapon). While you can assign numbers to these qualities (e.g., male = 0, female =
1), these assigned numbers only represent different categories and do not imply that these
categories can be rank-ordered from low to high values.

Quantitative Data
• Involves observations of rank-ordered categories or numerical values based on measurement
• Deals with quantities

Qualitative Data
• Involves observations of descriptive characteristics that cannot be rank-ordered
• Deals with qualities
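The distinction matters when computing statistics. The following small Python sketch (the data values are invented for illustration) shows why: averaging quantitative values is meaningful, while averaging the numeric codes assigned to qualitative categories is not.

```python
from statistics import mean

# Quantitative data: the numbers carry meaningful magnitude,
# so an average number of prior arrests is interpretable.
prior_arrests = [0, 2, 1, 5, 0, 3]
avg_arrests = mean(prior_arrests)  # about 1.83 arrests per person

# Qualitative data: the numbers are only category labels
# (male = 0, female = 1). A "mean gender" of 0.5 has no
# rank-order meaning; count category frequencies instead.
gender_codes = [0, 1, 1, 0, 1, 0]
gender_counts = {code: gender_codes.count(code) for code in set(gender_codes)}
# gender_counts -> {0: 3, 1: 3}
```

The same caution applies to rank-ordered categories (e.g., never-sometimes-often): they can be ordered and compared, but assigned codes for unordered qualities cannot.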




Practice Applications for Learning Objective 4 (LO4):
Classify the following observations as either quantitative or qualitative data (A. Quantitative data, B. Qualitative data):

1. Type of attorney: private, public, assigned counsel
2. Bail amount: < $5,000; $5,000-$10,000; $10,000-$50,000; > $50,000
3. Attitude toward gun control: strongly support, moderately support, moderately oppose, strongly oppose

Correct Answers: Q1(B), Q2(A), and Q3(A)




Review


Use these reflection questions and directed activities to help master the material presented in
this chapter.

What?

Can you define the critical concepts covered in this chapter? Describe each of these concepts in
your own words and give at least two examples of each.

- Research question
- Hypothesis
- Data
- Observations
- Qualitative and quantitative












When?

Answer the following questions.

• When should criminologists ask research questions?
• When should we translate our questions into hypotheses?
• When should we conduct observations to collect data?
• When should we collect qualitative data rather than quantitative data?




Why?

Answer the following questions.

• Why is it important to know whether statistical data reported in the mass media or a
government document is the result of primary or secondary data collection? Does
knowing the type of data collection help you make informed decisions about the claims
that are made from this data?

• Why does knowing these facts about research questions, hypotheses, and types of data
help you to recognize liars, cheaters, and other misusers of statistical information?




CHAPTER 2:
Creating Measures Based on Questions

This chapter describes how we create measures of concepts that underlie criminological
research questions. It focuses on the process of taking abstract concepts and developing
appropriately measured variables from them. Understanding the concepts and learning
objectives in this chapter will improve your analytical skills for conducting research and
evaluating applications of criminological research in academic journals, mass media reports,
and government documents.

Concepts
- GIGO (Garbage In, Garbage Out)
- Operationalization
- Variable
- Validity
- Reliability
- Attributes
- Exhaustive
- Mutually Exclusive
- Homogeneous
- Levels of measurement

Learning Objectives (LO)
LO1: Operationalize concepts into variables
LO2: Judge the strength of specific measures
LO3: Evaluate the measurement properties of a classification
LO4: Classify variables based on level of measurement

Introduction

In Chapter 1, you learned that data – both quantitative and qualitative – come from
observations. You also learned that we collect data in order to answer questions about
problems we encounter in the criminal justice system. In this chapter, you will learn how to
measure concepts in order to collect data (including proper and improper ways to do so).

Proper measurement and data collection are essential. If our measures are bad, or if we
collect data using poor methods, then any statistic we produce to test our hypotheses will be
meaningless.

For example, imagine that you are interested in knowing how much crime occurs on college
campuses. You spend all day on a ride-along with one officer on campus. You bring your
notebook with you to record your observations and collect data. At the end of the day, you
look through your notebook and find only one entry. This officer responded to only one call
concerning a student’s bicycle that was taken.


If you were to “analyze” your data, you might conclude that:



• Only property crimes occur on college campuses.
• Crime is very rare, and we do not need police on campuses.

Now, think of all the reasons why these conclusions might be wrong. Here are just a few:

Consider: You only observed one officer during a single shift.
Ask yourself: Could other officers have responded to other calls during this shift or at other times?

Consider: You only observed crime occurring on one campus.
Ask yourself: Do all campuses experience the same amount of crime?

Consider: You incorrectly classified the crime; this bicycle was taken from the student by force and should be considered a violent crime.
Ask yourself: Do observers make mistakes?

Consider: You only observed and recorded crime reported to the police.
Ask yourself: Do some victims refuse or fail to report crime?


Data that are nonsensical, incomplete, or imprecise will lead to faulty statistics and conclusions. This idea is called "Garbage In, Garbage Out" (GIGO).

Garbage In, Garbage Out (GIGO): An important principle acknowledging that inaccurate data will lead to false conclusions. Major sources of inaccurate data in criminological research include poor measures of concepts and bad sampling designs.

The rest of this chapter is designed to teach you how to create good measures. This is the first step in testing hypotheses. It will also help you avoid (or detect) the GIGO problem in research studies.

LO1: Operationalize concepts into variables

Hypotheses contain concepts. Some concepts are easily defined, but some are not. Let’s look
at a hypothesis we presented in Chapter 1:

• Urban police departments are more likely than rural police departments to hire
female officers.

In order to test this hypothesis, we would need to classify departments by location and
police officers by gender.



The gender classification should be relatively straightforward. Most people generally
understand the concept of gender. There is not widespread disagreement about what this
concept represents at its most basic level. Therefore, we would simply classify officers as
male or female based on their reported sex.

What about the distinction between rural and urban locations? This concept is slightly more
difficult to define. How will you know if a place is rural or urban? What will you observe
about each location to make this distinction? You could make this distinction based on
population (this is a common way to define this concept). However, you could do this in
many different ways. Here is one example of a common definition. It is followed by one
reason why this definition might not adequately capture the concept we are trying to
measure.

Definition of concept: An urban place has a population greater than 2,500. Rural places have a population of 2,500 or less.

Potential inaccuracy: What if the "place" covers a large geographical area? A place with a population of 2,500 might not look like an urban area if people live miles away from one another.


It is rare to find a “perfect” definition of a concept. But, can you think of a different (or
better) way to define urban and rural places?

The process of defining concepts so that we can measure them is called operationalization.

Operationalization: The process of defining concepts so that they can be observed and measured.

Researchers engage in four steps during the operationalization process. Here are the first three:

1. Identify the concepts within the stated hypothesis. (What will you need to measure
to answer your research question?)

2. Identify possible indicators of these concepts. (What are the different ways that you
could define your concept?)

3. Develop variables that represent the best measures of each concept. (Which measure
most clearly captures the best definition of your concept?)

A variable is simply a defined characteristic or value that varies. The age of students in this
class is a variable. The number of traffic tickets obtained by students in this class is a variable.
Variables are important in research since our goal is to explain the differences we observe
(e.g., why do some students break traffic laws – or are caught – more than others?). On the


other hand, we are not interested in analyzing constants. It makes no sense to try to explain differences among things that do not differ! We use variables as measures of our concepts.

Variable: A characteristic that differs across people, objects, places, or events.


Practice Applications for Learning Objective 1 (LO1):
Identify the best indicators and variables associated with the following concepts:

1. Concept = Dangerous Offender. Indicator: A. Criminal record, or B. Age of arrest?
2. Concept = Dangerous Offender. Variable: A. Number of parking tickets, or B. Number of arrests for violent felonies?
3. Concept = Police Brutality. Indicator: A. Verbal altercations with citizens during arrests, or B. Excessive force used in the arrest process?
4. Concept = Police Brutality. Variable: A. Number of citizen complaints for discourtesy, or B. Number of citizens exhibiting physical injuries after an arrest?
5. Concept = Fear of Crime. Indicator: A. Concern about loved ones from crime/criminals, or B. Concern about neighborhood stability?
6. Concept = Fear of Crime. Variable: A. Number of crime safety precautions taken, or B. Number of residents who move every 5 years?

Correct Answers: Q1(A), Q2(B), Q3(B), Q4(B), Q5(A) and Q6(A)



LO2: Judge the strength of specific measures

Not all variables are equal. Many variables used to measure concepts are based on indicators
that may not accurately capture the true meaning of the concepts we are interested in. For
example, what if we measured a person’s level of criminality by counting the number of
tattoos on their body?

Better measures of criminality might include: number of years served in prison, number of
prior arrests, number of charges brought against the person, number of crimes the person
has self-reported.


A valid measure is a variable that measures what it intends to measure. Measures with high validity are superior to measures with low validity. Most people would agree that a person's number of prior arrests is a more valid measure of criminality than a person's number of tattoos. When assessing a measure's validity, you should ask yourself, "Does this measure adequately capture the concept I am trying to measure?" If you have a high degree of validity, your answer should be yes.

Validity: The degree to which a variable, as measured, accurately captures the concept it intends to measure.

We want to ensure that our measures are valid. We also want to ensure that they are
reliable. Reliable measures produce the same results each time we make an observation of
the same thing. For example, imagine that you survey meth-addicts just after they are
arrested. Asking the question, “How many times have you used methamphetamine in the
last 24-hours?” would likely have higher reliability than asking, “How many times have you
used methamphetamine in your lifetime?” This is because it is easier to recall activities that
occurred during the past day than activities engaged in over the course of a lifetime.
Measures tend to be less reliable when they force people to “guess” or “estimate”
something, particularly when dealing with events that occur over long periods of time.

Measures also tend to be less reliable when they are based on second-hand knowledge. For
example, we often use police data to measure victimization trends. However, people may
not consistently report crimes to the police. A victim may be more likely to report crime after
having a positive interaction with a police officer, or less likely after experiencing a negative
interaction.

Reliability is also a problem if our questions are too vague and lack specificity. For example, if we ask someone, "Are you fearful?" they may give different answers if we ask them repeatedly because we did not provide enough detail to allow them to answer with confidence. Some people may answer based on how they feel right now, others may think of a recent scenario in which they felt afraid, others may think about things that scare them, and so on. A more reliable question would be, "Are you fearful that your car will be stolen in your neighborhood?"

Reliability: The extent to which a variable, as measured, will produce consistent results when repeated measures are taken.


When attempting to determine if a particular variable is a good measure of a concept, you
must assess both the validity and reliability of the measure. Without validity and reliability,
you have faulty measures. This will result in GIGO. Any statistics calculated based on these
measures, and your subsequent conclusions, would be meaningless.
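The reliability idea can be made concrete with a small simulation. This Python sketch (hypothetical; the scale, function name, and spread values are my own, not from the text) models answers to the same question asked many times: a vaguer question produces a wider scatter of answers, i.e., lower reliability.

```python
import random
from statistics import pstdev

random.seed(1)  # fixed seed so the simulation is repeatable

def simulated_answers(typical_answer, vagueness, n=1000):
    """Simulate n repeated answers to the same question on a 0-10 scale.
    A vaguer question produces a wider spread of answers."""
    return [min(10, max(0, random.gauss(typical_answer, vagueness))) for _ in range(n)]

vague = simulated_answers(5, vagueness=2.5)     # "Are you fearful?"
specific = simulated_answers(5, vagueness=0.5)  # "Fearful your car will be stolen?"

# A smaller spread across repeated measures indicates higher reliability.
spread_vague = pstdev(vague)
spread_specific = pstdev(specific)
```

Under these assumptions, the specific question's answers cluster much more tightly than the vague question's, which is exactly what "consistent results when repeated measures are taken" means.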





Practice Applications for Learning Objective 2 (LO2):
Identify the most valid and reliable measures of the following concepts:

1. Most valid measure of "social class"? A. Annual income; B. Type of car you drive; C. Number of criminal acts committed
2. Most valid measure of "recidivism" (i.e., repeated criminal activity after serving a criminal sentence)? A. Post-parole technical violations for alcohol use; B. Being rearrested for felonies after being paroled; C. Having thoughts about committing crimes
3. Most reliable measure of "exposure to illegal drugs"? A. Number of times heard the word "marijuana"; B. Number of times smoked marijuana in past 10 years; C. Number of times smoked marijuana in the past month
4. Most reliable measure of "social bonds"? A. Number of people you have seen this month; B. Number of email messages you receive this month; C. Number of close friends you've talked to this month

Correct Answers: Q1(A), Q2(B), Q3(C) and Q4(C)

LO3: Evaluate the measurement properties of a classification

We know that a variable represents a characteristic that differs across people, objects, places, or events. Now we will focus specifically on these characteristics. A variable's characteristics are divided or classified into groupings called attributes.

Attributes: Classifications of a variable's characteristics.

The attributes of a variable will determine whether we will collect qualitative or quantitative data. This, in turn, determines whether we are working with qualitative or quantitative variables. Examine the following examples. The variable name is listed first; the attributes are presented below each variable.


Qualitative Variables (attributes cannot be ranked):
- Gender: Male, Female
- Race: White, Black, Other
- Region: North, South, East, West

Quantitative Variables (attributes can be ranked):
- Drug use: Never, Rarely, Sometimes, Always
- Years served in prison: 0, 1, 2, 3, 4, 5, 6 or more
- Age: 0-120


To assess the quality of any measure, you must first determine whether it is valid and
reliable. However, you must also determine if a variable’s attributes are exhaustive,
mutually exclusive, and homogeneous. These are properties associated with good (valid and
reliable) measures.

A variable has exhaustive attributes if it includes all possible responses or outcomes. Imagine that I asked you this question and gave you only these options:

• Where were you born?
  o India
  o England
  o Japan

Exhaustive: Attributes that represent all possible responses or outcomes.

Chances are, you could not provide a response to this question (although some people could)! This is because the attributes presented are not exhaustive. They do not represent all possible responses. For this variable, an exhaustive set of attributes would cover all possible countries of the world.


A variable has mutually exclusive attributes if the classification categories do not overlap.
This means that a response could only be correctly classified within one attribute (not two or
more). For example, let’s say that you made $25,000 last year and were asked to answer this
question:

• What was your annual income during the last calendar year:
o $10,000-$25,000
o $25,000-$40,000
o $40,000-$65,000

Mutually exclusive: Attributes that offer only one category for each possible response or outcome



Based on your salary, you might choose either the first or second category. Individuals who
made $40,000 could choose either the second or third categories. This is because the
attributes presented are not mutually exclusive. You may have also noticed that they are not
exhaustive (what about people who made less than $10,000 or more than $65,000?).

Finally, a variable should contain homogeneous attributes. This means that all things
classified within a single attribute should be similar to each other. Based on this idea, can
you identify the problem with the following question?

• What is your race?
o White
o Black
o Other

Homogeneous: Attributes that classify similar things in the same categories

Hopefully, you immediately thought about problems with the “other” category. This set of
attributes assumes that all races, other than black and white, are similar to each other.
However, it might be difficult to argue that this is a meaningful classification for “other”
races. If you were really paying attention, you might have also noticed that these attributes
are not mutually exclusive (can people have backgrounds that include more than a single
race?).
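These properties can also be checked mechanically when the attributes are numeric ranges. The Python sketch below is a hypothetical helper (not part of the text) that tests a list of closed dollar brackets for the exhaustive and mutually exclusive properties:

```python
def check_brackets(brackets, low, high):
    """Check closed numeric brackets [(lo, hi), ...], sorted by lo,
    against the lowest (low) and highest (high) possible responses.
    Assumes integer-valued responses (e.g., whole dollars).
    Returns (exhaustive, mutually_exclusive)."""
    # Exhaustive: brackets must cover every value from low to high,
    # with no gap between one bracket's end and the next one's start.
    exhaustive = (brackets[0][0] <= low and brackets[-1][1] >= high and
                  all(nxt[0] <= prev[1] + 1
                      for prev, nxt in zip(brackets, brackets[1:])))
    # Mutually exclusive: no value may fall into two brackets.
    exclusive = all(nxt[0] > prev[1]
                    for prev, nxt in zip(brackets, brackets[1:]))
    return exhaustive, exclusive

# The flawed income question from the text: gaps at both ends, and
# $25,000 and $40,000 each belong to two categories.
income = [(10_000, 25_000), (25_000, 40_000), (40_000, 65_000)]
print(check_brackets(income, 0, 200_000))   # (False, False)

# A repaired version of the same question.
fixed = [(0, 24_999), (25_000, 39_999), (40_000, 200_000)]
print(check_brackets(fixed, 0, 200_000))    # (True, True)
```

The same logic applies to any ordered attribute set: every possible response must land in exactly one category.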

Practice Applications for Learning Objective 3 (LO3):


Identify exhaustive, mutually exclusive, and homogeneous classifications of attributes in the
following examples:

1. Convicted of a felony? No, Yes Exhaustive classification?


A. True
B. False

2. Number of charges filed? 0, 2, 3, 4 or more Exhaustive classification?


A. True
B. False

3. Type of drug offense? marijuana, cocaine, selling marijuana, Mutually exclusive?


possession of cocaine A. True
B. False

4. Number of crime victims? 0, 1-2, 2-5, 5 or more Mutually exclusive?


A. True
B. False

5. Number of complaints for excessive force? 0-2, 3, 4 or more Homogeneous classification?


A. True
B. False


6. Offender's age? < 16, 16-23, 24-65, over 65 years old Homogeneous classification?
A. True
B. False
Correct Answers: Q1(A), Q2(B), Q3(B), Q4(B), Q5(B) and Q6(B)


Measures that do not possess these three important properties (i.e., are not exhaustive,
mutually exclusive, and homogeneous) will result in GIGO. It is imperative that you learn to
create measures that meet all of these criteria and identify measures that do not meet these
standards.

LO4: Classifying variables based on level of measurement

Earlier, you learned that the operationalization process – the act of defining concepts so that
they can be observed and measured – consists of four steps. We have already covered the
first three:

1. Identify the concepts within the stated hypothesis. (What will you need to measure
to answer your research question?)

2. Identify possible indicators of these concepts. (What are the different ways that you
could define your concept?)

3. Develop variables that represent the best measures of each concept. (Which measure
most clearly captures the best definition of your concept?)

The fourth step in the operationalization process is:

4. Determine the level of measurement associated with the attributes of the selected
variable. (What type of scale is used to measure your variable?)

Level of measurement: The type of measurement scale associated with a variable’s attributes

Concepts → Indicators → Variables → Levels of Measurement


There are four different levels of measurement: nominal, ordinal, interval, and ratio.
Nominal and ordinal levels of measurement represent categorical variables. Interval and
ratio levels of measurement represent continuous variables. The reasons for these
distinctions will become clear as we examine and define each type of scale.

Categorical Variables: Nominal, Ordinal
Continuous Variables: Interval, Ratio




Nominal Measures

Variables that are measured at the nominal level have attributes with no order among the
categories (these are always qualitative variables). When measuring something using a
nominal scale, all you do is categorize responses. You cannot rank the attributes of nominal
level variables. Nominal scales represent the lowest level of measurement because they
allow us to talk only about qualitative differences between categories; the categories
cannot be ordered by magnitude or intensity.

Variables with only two attributes – we call these dichotomous variables – are always
considered nominal-level variables.

Examples of nominal measures include:

• Crime type: Violent, Property, Public Order, White Collar, Organized Crime
• Type of counsel: Public, Private, Other
• Car alarm: Yes, No

Ordinal Measures

Ordinal variables represent the most basic form of quantitative variables. The attribute
categories of an ordinal measure can be ordered in a meaningful way (from low to high, or
high to low). However, the “spread” of each category and the “distance” between each
category is either unequal or unknown. We can always order ordinal scales from least to
most. Still, the “jump” from one category to the next might not be equal in distance.

Examples of ordinal measures include:

• Satisfied with police services: strongly disagree, disagree, agree, strongly agree


• Frequency of drug use: never, rarely, sometimes, always


• Victim of a crime: 0, 1-2, 3-4, 5 or more times

Interval Measures

Variables that are measured at the interval level have attributes that can be ordered and the
attribute categories have equal distance between them (this makes them continuous rather
than simply categorical). Interval scales are always numerical. The most common example
used to illustrate an interval scale is the Fahrenheit scale of temperature. A temperature of
80 is exactly 4 degrees hotter than a temperature of 76 (which demonstrates equal distances
between each point on the scale). However, 80 degrees is not “twice as hot” as 40 degrees.
This is because the scale lacks a “true zero.” If the scale had a true zero, a temperature of 0
degrees would mean that there is no temperature at all.

Although interval scales are not very common in criminal justice, examples of interval
measures (other than Fahrenheit temperature) include:

• Altitude (0 does not mean that you have no altitude – you are at sea level)
• WAIS intelligence score (0 does not indicate brain death)
• Health measurement scales (0 does not usually mean that you are dead)

Ratio Measures

Ratio measures represent the highest level of measurement. Like interval scales, they are
always numerical. Ratio variables have attributes that can be ordered, have equal distance
between them, and have a meaningful zero point. A score of zero indicates the absence of
the characteristic being measured.

Examples of ratio measures include:

• Age
• Number of prior arrests
• Number of death penalty convictions

It is important to recognize the level of measurement associated with variables in any
hypothesis. This is because the level of measurement will determine which statistic you
should use to test your hypothesis.
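As a preview of that idea, the sketch below encodes the conventional pairing of measurement level and summary statistic (mode for nominal, median for ordinal, mean for interval/ratio variables); the names are ours, for illustration only:

```python
# Conventional pairing of measurement level and summary statistic.
SUMMARY_STAT = {
    "nominal": "mode",      # categories only: report the most common one
    "ordinal": "median",    # ordered categories: report the middle one
    "interval": "mean",     # equal spacing: averaging is meaningful
    "ratio": "mean",        # equal spacing plus a true zero point
}

def suggest_statistic(level):
    """Suggest a summary statistic for a variable's measurement level."""
    return SUMMARY_STAT[level.lower()]

print(suggest_statistic("Nominal"))  # mode
print(suggest_statistic("ratio"))    # mean
```

Later chapters develop each of these statistics; the point here is simply that the choice follows from the level of measurement.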


Practice Applications for Learning Objective 4 (LO4):


Identify the level of measurement (nominal, ordinal, interval/ratio) in these examples:

1. Arrested for a capital crime? No, Yes A. Nominal


B. Ordinal
C. Interval/Ratio

2. Number of pretrial hearings? 0, 1, 2, 3 or more A. Nominal


B. Ordinal
C. Interval/Ratio
3. Length of prison sentence (in years)? 0, 1, 2, 3, 4, 5 ... etc... A. Nominal
B. Ordinal
C. Interval/Ratio

4. Type of physical evidence? hair, blood, skin, saliva, semen, other A. Nominal
B. Ordinal
C. Interval/Ratio

5. Rating of police services? poor, fair, good, excellent A. Nominal


B. Ordinal
C. Interval/Ratio

6. Number of days between arrest and conviction (insert specific A. Nominal


number of days in space provided)? _____ B. Ordinal
C. Interval/Ratio
Correct Answers: Q1(A), Q2(B), Q3(C), Q4(A), Q5(B) and Q6(C)

Review

Use these reflection questions and directed activities to help master the material presented
in this chapter.

What?

Can you define the critical concepts covered in this chapter? Describe each of these concepts
in your own words and give at least two examples of each.
- GIGO (Garbage In, Garbage Out)
- Operationalization
- Variable
- Validity
- Reliability
- Attributes
- Exhaustive
- Mutually Exclusive
- Homogeneous
- Levels of measurement


How?

Answer the following questions and complete the associated activities.

• How can you turn concepts into variables?
o Think of at least two ways to operationalize two different concepts that are
important to our criminal justice system (e.g., criminal history and use of
force).

• How do you determine if a variable is a valid and reliable measure of a concept?
o Write three survey questions that lack validity and three that lack reliability.
o Write three survey questions that provide valid and reliable measures of a
concept.

• How do you evaluate the measurement properties of a variable’s attributes?
o Define three variables in a way that makes them exhaustive, mutually
exclusive, and homogeneous.

• How do you determine the level of measurement associated with a given variable?
o Design two measures at each of these three levels: nominal, ordinal, and
interval/ratio.


When?

Answer the following questions.

• When should criminologists operationalize variables?
• When should we assess the validity and reliability of measures?
• When should we evaluate the measurement properties of a variable’s attributes?
• When should we determine a variable’s level of measurement?



Why?

Answer the following question:

• Why does knowing this information about operationalization and measurement help
you to recognize liars, cheaters, and other misusers of statistical data?


CHAPTER 3:
Collecting Data to Answer Questions


This chapter describes the major concepts involved in collecting data to answer criminological
research questions. It focuses on identifying the appropriate unit of analysis and the different
types of sampling designs for collecting data. Understanding the concepts and learning
objectives in this chapter will increase your skills as an informed data analyst and critic of how
criminological research findings are used in government documents and mass media reports.

Concepts

- Unit of analysis
- Reductionist fallacy
- Ecological fallacy
- Samples
- Populations
- Probability sampling
- Nonprobability sampling
- Sampling bias
- Sampling error

Learning Objectives (LO)

LO1: Infer the unit of analysis used in research studies
LO2: Critique statistical claims and interpretations of data based on the unit of analysis
LO3: Develop probability and nonprobability sampling strategies
LO4: Assess levels of error/bias associated with sampling procedures

Introduction

In Chapters 1 and 2, you learned how to begin to solve criminal justice problems with statistics.
In particular, you learned to:
• Ask research questions and develop hypotheses about a particular issue.
• Operationalize concepts contained in hypotheses in order to measure them.
• Assess the strength of measures by examining the properties of variables and
associated attributes to avoid the GIGO (Garbage In, Garbage Out) problem.
• Identify variables’ level of measurement to select appropriate statistics.

However, before you learn to use statistics to analyze your data, you should recognize another
reason why a study’s conclusion might suffer from the GIGO problem. While poorly designed
measures may produce GIGO, the strategy used to collect data and make observations can also
produce GIGO.
The Research Process:

Research Questions and Hypotheses → Operationalize Concepts → Collect Data and Make
Observations → Determine Level of Measurement → Select Statistics

(Stages in which a GIGO problem can develop: operationalizing concepts and collecting data.)



To avoid GIGO, we need both strong measures and strong methods of collecting data.

In this chapter, we will revisit the concepts of collecting data and making observations. You will
learn about different types of data collection strategies and how to assess their strengths and
weaknesses. You will also learn about misleading “claims” researchers sometimes make when
interpreting their findings. We will begin by further exploring how and why observations are
made when conducting research.

LO1: Infer the unit of analysis used in research studies

You know that researchers gather information to test hypotheses by making observations.
When researchers observe something, they make choices about what to (and what not to) look
at in order to measure concepts. These choices are driven by the hypothesis being tested.

For example, let’s say that we wanted to test this hypothesis:

H1: Poor people commit more violent crime than rich people.

To test this hypothesis, we would make observations to measure both income and violent
crime. We might ask individuals to report their annual earnings to collect data about income.
Similarly, we might count the number of times each individual has committed a violent crime.

In this case, we decided to make observations about individuals in
order to compare differences among individuals.

However, let’s say that we wanted to test this hypothesis:

H2: Lower-income neighborhoods have higher violent crime rates than higher-income
neighborhoods.


To test this hypothesis, we would also make observations to measure both income and violent
crime. Again, we might ask individuals to report their annual earnings to collect data about
income. We might also count the number of times individuals have been arrested for violent
crimes.
In this case, we decided to make observations about individuals in
order to compare differences among neighborhoods.

The entity (e.g., the “what” or “whom”) that you are interested in comparing or analyzing
is called the unit of analysis.

Unit of analysis: The entity analyzed in a research study

As the examples above illustrate, sometimes the unit of observation is the same as the unit of
analysis; sometimes it is not. While we made observations of individuals in both cases, the unit
of analysis in the first hypothesis was individuals, while the unit of analysis in the second
hypothesis was neighborhoods.

Units of analysis can be anything we are interested in studying, including individuals, groups,
events, or organizations/agencies. The easiest way to infer the unit of analysis in a research
study is to ask yourself, what entity (or thing) are you trying to draw conclusions about?

Practice Applications for Learning Objective 1 (LO1):


Infer the unit of analysis in the following hypotheses: Unit of Analysis

1. Juveniles with close ties to family and friends are less likely to be A. Juveniles
delinquent than juveniles with no close ties to family or friends. B. Family/Friends
C. Delinquency

2. Crime rates are higher in cities in the South than cities in other A. Crime rates
regions of the country. B. Cities
C. Region of the country

3. Murderers who kill strangers are more likely to get death A. Murderers
sentences than murderers who kill family members. B. Victim-offender relationship
C. Death sentences

4. Recidivism rates are lower in specialized drug courts than A. Recidivism


traditional criminal courts. B. Rates
C. Type of court

5. Acts of spectator violence are more common when security is A. Acts of spectator violence
lacking. B. Security level
C. High security
Correct Answers: Q1(A), Q2(B), Q3(A), Q4(C), and Q5(A)


LO2: Critique statistical claims and interpretations of data based on the unit of
analysis

A study’s unit of analysis determines the types of claims that can be made – or conclusions that
can be drawn – in a research study.

If we use statistics to compare individuals (e.g., individuals represent our unit of analysis), we
can only draw conclusions about individuals. If we use statistics to compare neighborhoods
(e.g., neighborhoods represent our unit of analysis), we can only draw conclusions about
neighborhoods.

Let’s use our previous hypothesis to illustrate this idea. Suppose that we find evidence to
support this hypothesis:

H1: Poor people commit more violent crime than rich people.

A researcher might then be able to claim, legitimately, that if an individual is poor he or she is
more likely to commit violent crime than an individual who is rich. The researcher CANNOT
legitimately claim that neighborhoods with more poor people are more likely to experience
violent crime than neighborhoods with more affluent people. We cannot draw conclusions
about neighborhoods unless we directly compare differences among neighborhoods in our
study. When researchers make a claim about groups when they have only compared
differences among individuals, they commit the reductionist fallacy.

Reductionist Fallacy: Making claims about groups when using individuals as the unit of analysis

Suppose that we find evidence to support our second hypothesis:

H2: Lower-income neighborhoods have higher violent crime rates than higher-income
neighborhoods.

A researcher might then be able to claim, legitimately, that a particular lower-income
neighborhood is more likely to experience violent crime than a particular higher-income
neighborhood. The researcher CANNOT legitimately claim that people living in lower-income
neighborhoods are more likely to commit crime than people living in more affluent
neighborhoods. We cannot draw conclusions about individuals unless we directly compare
differences among individuals in our study. When researchers make a claim about individuals
when they have only compared differences among groups, they commit the ecological fallacy.

Ecological Fallacy: Making claims about individuals when using groups as the unit of analysis


If you can correctly identify a study’s unit of analysis, you can determine whether a researcher
is making a legitimate claim concerning his or her findings.

Practice Applications for Learning Objective 2 (LO2):


Identify whether the following claims represent the ecological
fallacy, the reductionist fallacy, or an acceptable claim based on
the information provided: Evaluate the Claim

1. Finding: Prosecutors with more experience have higher conviction


rates than prosecutors with less experience. A. Ecological fallacy
B. Reductionist fallacy
Claim from finding: Court systems that have prosecutors with more
C. Acceptable claim
experience will have higher conviction rates.

2. Finding: As public support for the death penalty declines, U.S.


prosecutors will be less likely to seek the death penalty. A. Ecological fallacy
B. Reductionist fallacy
Claim from finding: If public support for the death penalty declines
C. Acceptable claim
from 60% to 40%, prosecutors in the U.S. will seek the death penalty
less often.

3. Finding: Juries overrepresented by women are more likely to acquit


defendants than juries that are overrepresented by men. A. Ecological fallacy
B. Reductionist fallacy
Claim from finding: If a woman is serving on a jury, she is most likely
C. Acceptable claim
to acquit the defendant.

Correct Answers: Q1(B), Q2(C), and Q3(A)

LO3: Develop probability and nonprobability sampling strategies



When we collect data to test hypotheses, we often have to choose which observations to make.
This choice is important since these observations will determine the results of our analysis.

You might assume that we should collect data by making observations of every “thing” (e.g.,
unit of analysis) we are interested in. For example, if we wanted to draw conclusions about
differences among U.S. neighborhoods, it might be helpful to have data on every neighborhood
located in this country. However, collecting data from every U.S. neighborhood would be costly,
time consuming, perhaps not possible in some cases, and – as you’ll learn in this course – not
necessary!


The entire group of entities (or things) that you want to study is called a population. We
are almost never able to observe and collect data on every entity that we are interested in
learning about. In other words, we rarely have data on an entire population.

Population: A group that contains all entities (people, places, events, agencies) we want to learn about

Instead, we collect data from samples drawn from populations. A sample is simply a smaller
group of entities taken from a population. A sample represents the entities that we actually
observe. From these observations, we attempt to draw conclusions about our population of
interest.

Sample: A subset of entities drawn from a larger group that we want to learn about











There are many different ways to draw samples. If we want to ensure that our sample
accurately reflects the population from which it was drawn, then we must use a probability
sampling method. When using probability sampling, the entities included in the study are
selected randomly. This means that every entity in the population has an equal chance of being
selected.

Probability sampling: A sampling method in which entities are randomly selected and each entity has an equal and known chance of being selected from a population

If we know how many entities are in our population, we also will know the exact probability
that any given case will be selected. For example, let’s suppose that we want to study a
population of 30,000 inmates using a probability sampling method. Using this method, we
would randomly select a smaller group of inmates for our sample, and each inmate would have
an equal chance of being selected (1 in 30,000 on any single draw).

Probability sampling allows us to draw conclusions about a population by studying only a subset
(smaller group) of the population’s entities. When we use statistics to analyze the
characteristics of a probability sample, we can then generalize these findings to the larger
population. For example, if we find that the average age of our inmate sample is 26, then we
would also expect the average age of our population to be 26 (or very close to it).
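A short simulation makes this concrete. In the sketch below we invent a population of 30,000 inmate ages centered near 26 (the numbers are assumptions for illustration, not data from the text) and draw a probability sample with Python’s standard library:

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

# Hypothetical population: 30,000 inmate ages centered near 26.
population = [random.gauss(26, 8) for _ in range(30_000)]

# Probability sample: random.sample gives every inmate an equal
# chance of selection.
sample = random.sample(population, 500)

pop_mean = sum(population) / len(population)
samp_mean = sum(sample) / len(sample)
print(f"population mean: {pop_mean:.1f}")
print(f"sample mean:     {samp_mean:.1f}")
```

Even though only 500 of the 30,000 ages are observed, the sample mean should fall very close to the population mean, which is what justifies generalizing from a probability sample.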



Not all samples are probability samples. Although preferable, it is not always possible to
draw a probability sample. When entities are selected from a population in a nonrandom way
(i.e., every entity does not have an equal chance of being selected), this is called
nonprobability sampling.

Nonprobability sampling: A sampling method in which entities are not randomly selected from a population

For example, if you collected data from the population of 30,000 inmates by speaking to the
first 30 inmates who volunteered to talk with you, this would be a nonprobability sample.
Although you could learn a lot by speaking to these inmates, you could not generalize your
findings to the larger inmate population. In other words, if the average age of your volunteer
sample was 22, you could not claim that the average age of all inmates is 22.
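The same simulation idea shows why the volunteer sample misleads. Below, we assume (purely for illustration) that an inmate’s willingness to volunteer declines with age; the first 30 volunteers then paint a younger picture than the population:

```python
import random

random.seed(1)

# Hypothetical population: 30,000 inmate ages centered near 26.
ages = [random.gauss(26, 8) for _ in range(30_000)]

# Nonprobability "volunteer" sample: the chance of volunteering is
# assumed to shrink with age and reach zero at 40, so selection is
# NOT random across the population.
volunteers = [a for a in ages if random.random() < (40 - a) / 40]
sample = volunteers[:30]  # the first 30 inmates who volunteered

pop_mean = sum(ages) / len(ages)
samp_mean = sum(sample) / len(sample)
print(f"population mean age: {pop_mean:.1f}")
print(f"volunteer mean age:  {samp_mean:.1f}")  # biased downward
```

Because the volunteers systematically underrepresent older inmates, no amount of careful analysis of this sample recovers the population’s true average age.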

Practice Applications for Learning Objective 3 (LO3):


Identify the sampling strategy that would best allow us to generalize a study's results to the
following populations:

1. Los Angeles Police A. Take a random sample of 500 officers from LAPD
Officers (LAPD). B. Randomly select 500 of the highest ranking LAPD officers
C. Interview 500 officers from LAPD's traffic patrol department

2. Convicted Drug A. Take a random sample of 200 drug users in drug courts in Montana
Offenders in the B. Randomly select 200 drug sellers in Montana
state of Montana C. Randomly select 200 convicted drug offenders from all Montana courts

3. Municipal Court A. Randomly sample 100 currently serving Municipal Court Judges
Judges B. Randomly sample 100 previously serving Municipal Court Judges
C. Randomly select 100 past and current Municipal Court Judges.

4. Boston Residents A. Randomly interview 150 people at Logan (Boston) airport.


B. Randomly select 150 names from the Boston telephone directory.
C. Randomly select 150 college students at Harvard University.

Correct Answers: 1(A), 2(C), 3(C), and 4(B).

LO4: Assess levels of error/bias associated with sampling procedures



Findings from nonprobability samples cannot be generalized to populations because this type
of sampling method can create inaccuracies in our data called sampling bias. Sampling bias
produces differences between sample and population characteristics. These differences will
bias the results of our analyses. Various types of sampling bias contribute greatly to the
GIGO problem in much criminological research.

Sampling Bias: Discrepancies between sample and population characteristics that result from using a nonrandom or nonprobability sampling method

For example, consider our previous example of a sample of 30 volunteer inmates from a
population of 30,000 inmates. We may find that younger inmates are more likely to volunteer
to talk to researchers than older inmates. We may also find that inmates with longer
sentences are more likely to talk to researchers. These systematic differences between
volunteers and non-volunteers are likely to produce sampling bias. Therefore, you cannot be
confident that the characteristics of your sample accurately reflect the characteristics of
the entire inmate population. The use of probability sampling helps to eliminate sampling
bias and allows us to generalize our sample findings to the larger population.

Even when using probability sampling, we still cannot be sure that our findings are 100%
accurate. We must always assume that there is some level of sampling error associated with
any sample. This is because (1) there are an infinite number of random samples that can be
drawn from any population and (2) there is always a slight chance that our selected random
sample does not perfectly represent the characteristics of the population.

Sampling Error: Degree of inaccuracy in findings that results from using data from samples instead of an entire population

Each time we draw a different random sample from the same population, we draw a different
collection of entities. We can expect that each collection will be at least slightly different from
the next. Differences between each sample and the population from which it is drawn create
sampling errors.




As a general rule, large random samples will produce less sampling error than other types of
samples.
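This rule is easy to verify by simulation. The sketch below (population again invented for illustration) draws many random samples at two sizes and compares how much their means bounce around, which is a direct look at sampling error:

```python
import random
import statistics

random.seed(7)

# Hypothetical population: 30,000 inmate ages centered near 26.
population = [random.gauss(26, 8) for _ in range(30_000)]

def spread_of_sample_means(n, draws=200):
    """Draw `draws` random samples of size n and return the standard
    deviation of their means (an estimate of sampling error)."""
    means = [statistics.mean(random.sample(population, n))
             for _ in range(draws)]
    return statistics.stdev(means)

small = spread_of_sample_means(25)    # small samples
large = spread_of_sample_means(900)   # large samples
print(f"spread of means, n=25:  {small:.2f}")
print(f"spread of means, n=900: {large:.2f}")  # much smaller
```

The means of the large samples cluster far more tightly around the population mean than the means of the small samples, which is exactly what “less sampling error” means.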


Practice Applications for Learning Objective 4 (LO4):


Identify the sources of sampling bias and poor research design that would dramatically
influence the results of the following studies:

1. You want to study the frequency of street-level drug dealing in New York City. Your sampling
strategy involves watching closed-circuit television (CCTV) footage of people on 25 city blocks
between noon and 2pm. What types of sampling bias are present in this study that would
distort your results and any generalizations you make from them?
A. You didn't select a "random" sample of city blocks so we don't know if they represent all
city blocks in New York.
B. The time period selected (noon to 2pm) is an unlikely time for drug dealing. You would
probably get vastly different results if you sampled other times of the day (e.g.,
nighttime).
C. Drug dealers may see the CCTV cameras and move to other locations for drug transactions.
D. All of the above are examples of sampling bias and/or errors in research design.

2. You want to study how family factors influence juvenile delinquency by interviewing a random
sample of 100 juveniles who were certified as adults and sent to adult prisons. What types of
sampling bias are present in this study that would distort your results and any generalizations
you make from them?
A. Juveniles who are certified as adults are not representative of all juvenile offenders (e.g.,
they have more extensive criminal records, commit more serious crimes, and have
different family backgrounds than juveniles who remain in youth treatment centers).
B. Juveniles who are caught, convicted, and sentenced to adult prisons are different than
those who are not.
C. Certified juveniles who agree to be interviewed are probably different than those who
refuse to participate in your study.
D. All of the above are examples of sampling bias and/or errors in research design.

Correct Answers: Q1(D) and Q2(D)


Review

Use these reflection questions and directed activities to master the material presented in this
chapter.

What?

Can you define the critical concepts covered in this chapter? Describe each of these concepts in
your own words and give at least two examples of each.

- Unit of analysis
- Reductionist fallacy
- Ecological fallacy
- Samples
- Populations
- Probability sampling
- Nonprobability sampling
- Sampling bias
- Sampling error



How?

Answer the following questions and complete the associated activities.

• How can you infer the unit of analysis used in research studies?
o Read two journal article abstracts and determine the unit of analysis for each
study.

• How can you tell if a researcher is committing the reductionist or ecological fallacy?
o Describe two scenarios in which each type of fallacy is created.

• How do you know if a probability or nonprobability sample is being used?
o Give an example of a probability sampling method and a nonprobability sampling
method.

• How can you assess levels of error/bias associated with sampling procedures?
o Describe two samples: one with a high level of sampling error/bias and one with
a low level of sampling error/bias.






When?

Answer the following questions.

• When should we determine the unit of analysis in a research study?
• When should we determine whether a researcher has committed the reductionist or
ecological fallacy?
• When should we use probability sampling methods rather than nonprobability sampling
methods?
• When should we determine a sample’s level of bias/error?



Why?

Answer the following question.

• Why does knowing these facts about the unit of analysis and sampling designs help you
to recognize liars, cheaters, and other misusers of statistical information?










CHAPTER 4:
Organizing and Displaying Data

This chapter describes some basic ways to organize, summarize, and display criminological data
for statistical analysis. It covers how to construct, calculate, and interpret data distributions
(e.g., frequency/percentage tables) and graphical representations of them (e.g., bar charts, pie
charts, histograms). By mastering the concepts and learning objectives in this chapter, you will
acquire some of the knowledge and skills necessary for conducting and evaluating
criminological research.

Concepts

- Data reduction
- Raw data
- Univariate analysis
- Frequency distributions
- Percentage distributions
- Cumulative distributions
- Bar chart / Pie chart / Histogram / Line graph
- Percentile
- Skewed data
- Outlier

Learning Objectives (LO)

LO1: Produce univariate displays of data distributions from raw data
LO2: Calculate values that describe data distributions
LO3: Interpret graphical displays of data distributions
LO4: Judge the spread and shape of data distributions


Introduction

As criminologists, you know that our goal is to learn more about justice system processes and
decisions in order to solve problems. Generally speaking, we accomplish this by testing
hypotheses to answer research questions. By mastering the skills and concepts presented in the
first three chapters, you know how to carry out the initial steps required to answer important
questions. Specifically, you know how to:

• Create different types of research questions and develop related hypotheses.
• Operationalize the concepts contained in hypotheses to measure them.
• Create robust (i.e., valid and reliable) measures with appropriate attribute
properties (i.e., exhaustive, mutually exclusive, homogeneous).



• Develop sampling strategies that allow generalizations to be made to larger
populations.
• Identify misleading generalizations (i.e., ecological and reductionist fallacies) based
on a study’s unit of analysis.

These skills allow you to be a critical and informed consumer of information. When presented
with “facts” or “findings,” you can assess the credibility of this information by examining how
concepts were operationalized and samples were drawn. You should not trust findings that
stem from faulty measures or sampling strategies that create massive sampling error or bias. By
practicing and mastering the skills presented in the first three chapters, you have trained
yourself to detect and avoid the “GIGO” problem in research.

Good input (data collection) is essential to good outputs (research findings), but it doesn’t
guarantee them. In fact, the "best practices" in criminological research involve both sound data
collection practices and a good working knowledge of the appropriate statistical procedures for
analyzing data. We will now focus on how researchers generate sound research findings. In the
upcoming chapters, you will learn how to critique the methods used to actually analyze data.

Data reduction: Organizing data into a simplified and useful form

Whenever we collect large amounts of data, this information must be organized in a useful way.
Data reduction is the act of summarizing masses of data in a way that allows us to draw
meaningful conclusions.

Raw data: Unanalyzed data

For example, imagine that you wanted to conduct a survey of 1,000 students randomly selected
from your college to learn more about their criminal behaviors. Each survey contains 50
questions related to demographics (e.g., age, gender, race), college status (e.g., year, major),
and criminal involvement (e.g., drug use, gang affiliation, violent behaviors, property offenses).
If 1,000 students answer 50 survey questions, this would leave you with 50,000 pieces of
data! Data reduction techniques will help you to organize these raw data, interpret them,
and present your findings.

In this chapter, you will learn how to engage in data reduction for a single variable. This is
referred to as univariate analysis.

Univariate analysis: Examination of data distributed across a single variable’s attributes

Specifically, you will learn to work with univariate data distributions: create different types,
calculate descriptive values, create and interpret visual depictions, and interpret shapes.






LO1: Produce univariate displays of data distributions from raw data

When researchers collect data, they start to engage in data reduction by creating a dataset.
Datasets are nothing more than an organized collection of raw data. The collection is usually
stored in an electronic spreadsheet using data management software, such as Excel or SPSS
(Statistical Package for the Social Sciences). To illustrate this, let’s assume that the following
questions were included in a criminal behavior survey you administered at your college:












Once you obtained your completed surveys, you would need to input your data into a dataset.
Datasets are structured in a systematic way that makes them easy to read. Each row (horizontal
line) represents a single case, and each column (vertical line) represents a single variable. In the
dataset below, row 1 represents the responses from a single survey respondent. Column 1
represents all respondent answers to the gender question (variable). The places where rows
and columns meet are called cells. Each cell contains a single piece of datum or information
provided by a single respondent.





[Dataset figure: arrows mark a single case (one row), a single variable (one column), and a
single cell (one row-column intersection)]








This is a very small dataset that includes 10 cases (rows) and 5 variables (columns). Although
you can “eyeball” this dataset to find general patterns (e.g., there are almost as many females
as males, most people are in their early 20s, most people have not been arrested), drawing
these types of conclusions by looking at raw data becomes impossible with larger datasets. That
is why data reduction techniques are so useful.

Most statistical software packages (like SPSS) can be used to create data summary outputs;
therefore, you will rarely compute these summaries by hand. However, you must understand
how summaries are created in order to correctly interpret them.

Frequency distribution: A data summary column that presents the number of times each
variable attribute appears in a dataset

Univariate data distributions summarize data so that we can quickly identify meaningful
patterns. The most basic type of data distribution is a univariate frequency distribution.
Frequency is another term for count. In order to create a univariate frequency distribution, we
simply count the number of cases that share the same attribute for a single variable.
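Because a frequency distribution is nothing more than a tally of cases sharing each attribute, its construction can be sketched in a few lines of Python (the variable names and raw values here are hypothetical illustrations, not from the text):

```python
from collections import Counter

# Hypothetical raw responses for one variable (gender of 10 respondents)
raw_gender = ["F", "M", "M", "F", "M", "F", "M", "M", "F", "M"]

freq = Counter(raw_gender)  # tallies the cases sharing each attribute
n = sum(freq.values())      # total sample size

print(dict(freq))  # {'F': 4, 'M': 6}
print(n)           # 10
```

These are the same counts shown in the gender table in this section (f = 4 females, f = 6 males, n = 10).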

The table below represents univariate frequency distributions for two nominal (qualitative)
variables: gender and marijuana use. The symbol f is used to denote frequency. The symbol n is
used to denote the total sample size (when capital N is used, it signifies the size of a
population).


Gender f Marijuana Use f

Female 4 No 3
Male 6 Yes 7
n = 10 n = 10


Univariate frequency distributions for qualitative variables, such as those above, are relatively
straightforward. Each variable’s attributes are presented along with their relative frequency.

Dealing with quantitative variables often requires greater thought in developing category
groupings that provide the most accurate and meaningful portrayal of our data. For example,
examine the three frequency distributions for income below.

Income f
<$10k 2
$10-$25k 4
$25.1-$50k 3
$50.1-$75 0
>$75 1
n = 10

Income f
<$10k 2
$10-$25k 4
$25.1-$50k 3
>$50 1
n = 10

Income f
≤$25k 6
>$25k 4
n = 10





Although they differ, all three frequency distributions accurately present our data. Because we
are dealing with a variable that can be rank-ordered (an ordinal variable in this case), we have
the ability to collapse categories. There is no one “correct” way to collapse or combine attribute
categories. The best frequency distribution structure will depend on at least two factors:

(1) how your data are distributed across categories and

(2) the research question you are attempting to answer.

In general, researchers try to avoid categories with zero cases and they strive to create a
relatively even distribution of cases across categories (i.e., each category has about the same
number of cases). However, it only makes sense to collapse variables in this way if the
combined categories will still produce meaningful findings.

Examine the distribution of our two ratio-level variables in the dataset: age and number of
arrests. Which of the following univariate frequency distributions do you think is most
meaningful for our age variable? How can you justify your selection?


Age f
18 1
19 1
20 3
21 2
22 1
23 1
24 0
25 0
26 0
27 1
n = 10

Age f
<21 5
21-22 3
>22 2
n = 10

Age f
<21 6
≥21 4
n = 10




Which of the following univariate frequency distributions do you think is most meaningful for
our number of arrests variable? How can you justify your selection?









Arrests f
0 7
1 1
2 1
3 0
4 0
5 1
n = 10

Arrests f
0 7
1-2 2
>2 1
n = 10

Arrests f
No 7
Yes 3
n = 10


Again, remember that there is not one “correct” way to construct a frequency table, but tables
that are difficult to read (e.g., too many categories, several categories without cases) or contain
category groupings that do not make sense (e.g., they are not exhaustive, mutually exclusive,
and homogenous) should be avoided.
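Collapsing a quantitative variable into broader categories is simply a mapping from raw values to category labels. A minimal sketch in Python, using one of the defensible groupings from the arrest tables above (the raw values and function name are hypothetical):

```python
from collections import Counter

def collapse_arrests(count):
    # Map a raw arrest count onto mutually exclusive, exhaustive categories
    if count == 0:
        return "0"
    elif count <= 2:
        return "1-2"
    return ">2"

# Hypothetical raw arrest counts for 10 respondents
raw_arrests = [0, 1, 0, 0, 2, 0, 5, 0, 0, 0]
grouped = Counter(collapse_arrests(a) for a in raw_arrests)

print(dict(grouped))  # {'0': 7, '1-2': 2, '>2': 1}
```

Changing the cut points in collapse_arrests produces a different (but equally accurate) frequency distribution, which is why there is no one “correct” grouping.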

Practice Applications for Learning Objective 1 (LO1):


Answer the following questions about univariate displays of data distributions from raw data:

1. Here are the numbers of prior arrests of 10 juveniles: 0, 1, 0, 1, 0, 0, 4, 1, 2, 0.
What numerical value should be included in the red cell of the adjoining frequency
distribution of these data?
Answer Categories: A. 3  B. 4  C. 5  D. 6

2. Here are the sentences (F = fine; P = probation; J = jail) given to 10 people convicted of
disorderly conduct: F, P, F, F, P, J, F, F, P, J.
What numerical values should be included in the red cells of the adjoining frequency
distribution of these data?
Answer Categories: A. F=3, P=2, and J=5  B. F=2, P=3, and J=5  C. F=5, P=2, and J=3
D. F=5, P=3, and J=2


3. What is wrong with the following frequency distribution?
Answer Categories:
A. Type of crime is a rank-order quantitative variable.
B. The frequencies in each category don’t add up to the total sample size (n).
C. It includes categories (e.g., aggravated assault, fraud) with zero cases (f = 0).


4. What is wrong with the following frequency distribution?
Answer Categories:
A. The age categories are not homogenous.
B. The frequencies in each category don’t add up to the total sample size (n).
C. The age categories overlap (i.e., they are not mutually exclusive).

Correct Answers: Q1(C), Q2(D), Q3(C), and Q4(A)

LO2: Calculate values that describe data distributions

Frequency distributions allow researchers to produce succinct visual summaries of their raw
data. Additional information can be added to frequency distribution tables to make it easier to
determine how values are spread across attributes (e.g., how data are distributed). One such
type of information is the percentage distribution. Percentage distributions standardize
frequencies, from 0 to 100, so that we can quickly determine what percentage of the sample is
associated with each attribute category. The percentage describes the relative frequency of
any given attribute, compared to all other attributes.

Percentage distribution: A data summary column based on a standardized frequency
distribution ranging from 0 to 100

To convert frequencies into percentages (commonly abbreviated as % or pct), we use the
formula:
pct = (f/n)100,

where f = frequency and n = total sample size.

The income distribution table below illustrates how to calculate the percentage (pct)
distribution column based on frequency scores (f) and the total sample size (n = 10):

Income f pct Calculation
<$10k 2 20 (2/10)100 = (.2)100 = 20
$10-$25k 4 40 (4/10)100 = (.4)100 = 40
$25.1-$50k 3 30 (3/10)100 = (.3)100 = 30
>$50 1 10 (1/10)100 = (.1)100 = 10
n = 10 100%
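The pct = (f/n)100 formula translates directly into code. A sketch (the function name and dictionary form are hypothetical):

```python
def pct_distribution(freqs):
    """Convert a frequency column into a percentage column: pct = (f/n)100."""
    n = sum(freqs.values())
    return {category: (f / n) * 100 for category, f in freqs.items()}

# The income frequency column from the table above
income_f = {"<$10k": 2, "$10-$25k": 4, "$25.1-$50k": 3, ">$50": 1}
print(pct_distribution(income_f))
# {'<$10k': 20.0, '$10-$25k': 40.0, '$25.1-$50k': 30.0, '>$50': 10.0}
```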

If calculated correctly, every percent distribution column should sum (Σ) to 100. This indicates
that all cases (100%) are represented in the distribution table (see table below)





Income f pct
<$10k 2 20
$10-$25k 4 40
$25.1-$50k 3 30
>$50 1 10

n = 10 Σ = 100



Percent distributions allow us to quickly determine which categories contain the most cases (or
the least) and how much more (or less) populated these categories are when compared to
others. The small numbers in the current example make interpreting differences between
frequencies relatively easy. However, a percentage distribution becomes more useful when
working with larger datasets.

Cumulative distributions can also be added to univariate frequency distribution tables. These
help us to further interpret distributions, particularly rank-ordered distributions. Cumulative
distributions provide a “running total” or sum of a specific distribution category as well as
all those before it.

Cumulative distribution: A data summary column that provides a “running total” for
another data summary column

Cumulative distributions can be calculated for both frequency (cf) and percentage (cpct)
distributions.

The income distribution table below illustrates how to calculate the cumulative frequency (cf)
distribution column. As you can see, the first number reported is simply the frequency
associated with the first category (since no categories come before it). For the subsequent
categories, we simply add the category number to all category numbers that come before it.
The last cumulative frequency score should always equal the total sample size (n). This confirms
that all cases have been included in our cumulative frequency distribution calculations.

Income f cf Calculation
<$10k 2 2 2
$10-$25k 4 6 2+4
$25.1-$50k 3 9 2+4+3
>$50 1 10 2+4+3+1

We use the same formula to calculate the cumulative percentage (cpct) distribution column,
but base our calculations on the percentage (pct) distribution column. Again, the first number
reported is simply the percentage associated with the first category (since no categories come
before it). For the subsequent categories, we simply add the category percentage to all



category percentages that come before it. The last cumulative percentage score should always
equal 100. This indicates that all cases have been included in our cumulative percentage
distribution calculations.

Income pct cpct Calculation
<$10k 20 20 20
$10-$25k 40 60 20+40
$25.1-$50k 30 90 20+40+30
>$50 10 100 20+40+30+10
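Because both cumulative columns are just running totals, cf and cpct can be produced by the same one-step accumulation. A sketch (category order matters, so the columns are written as ordered lists; the variable names are hypothetical):

```python
from itertools import accumulate

# f and pct columns for the income table, in category order
income_f = [2, 4, 3, 1]
income_pct = [20, 40, 30, 10]

cf = list(accumulate(income_f))      # running total of frequencies
cpct = list(accumulate(income_pct))  # running total of percentages

print(cf)    # [2, 6, 9, 10] -- the last value equals n
print(cpct)  # [20, 60, 90, 100] -- the last value equals 100
```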

The final distribution table should be structured in the following way:


Income f cf pct cpct

<$10k 2 2 20 20
$10-$25k 4 6 40 60
$25.1-$50k 3 9 30 90
>$50 1 10 10 100
n = 10 Σ = 100%

When using cumulative distributions to examine quantitative variables, we can quickly
determine the number (frequency) or percentage of cases that fall at or below a given
score/category. For example, the cpct column shows that 60% of students make less than $25k
per year and 90% of students make $50k or less per year.

In summary, frequency tables, with percentage and cumulative distributions, are a useful way
of summarizing data so that we can begin to examine how our data are distributed.

When working with qualitative data, we can examine the category frequencies and compare
them to each other using the percentage distribution to determine what attributes appear
most or least often in our dataset.

When working with quantitative data, it is helpful to include cumulative distributions, for
both frequencies and percentages, so that we can determine how many cases fall below or
above a certain score.

How To Interpret Frequency Distribution Tables

• Qualitative variables: Compare percentage distribution scores to identify the most common
and most rare attributes.
• Quantitative variables: Use cumulative distributions (for both frequencies and percentages)
to determine how many cases fall above or below a certain score.
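Putting the pieces of this section together, all four summary columns (f, cf, pct, cpct) can be assembled from a single ordered category-to-frequency mapping. A sketch with a hypothetical helper name:

```python
from itertools import accumulate

def frequency_table(freqs):
    """Build f, cf, pct, and cpct columns from an ordered
    {category: frequency} mapping (Python dicts keep insertion order)."""
    categories = list(freqs)
    f = [freqs[c] for c in categories]
    n = sum(f)
    cf = list(accumulate(f))             # running total of frequencies
    pct = [(x / n) * 100 for x in f]     # pct = (f/n)100
    cpct = list(accumulate(pct))         # running total of percentages
    return {"category": categories, "f": f, "cf": cf, "pct": pct, "cpct": cpct}

table = frequency_table({"<$10k": 2, "$10-$25k": 4, "$25.1-$50k": 3, ">$50": 1})
print(table["cf"])    # [2, 6, 9, 10] -- ends at n
print(table["cpct"])  # [20.0, 60.0, 90.0, 100.0] -- ends at 100
```

The two end-of-column checks in the comments mirror the checks described in the text: the last cf value must equal n and the last cpct value must equal 100.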



Practice Applications for Learning Objective 2 (LO2):


Answer the following questions about reading data in frequency and percentage distribution:

1. Here is a frequency distribution of the type of arrests of 20

offenders. Based on this data, how many people were arrested
for drug offenses? Answer Categories:
A. 10
B. 7
C. 2
D. 1



2. Based on this percentage distribution, what is the least
common age group for persons arrested for prostitution?
Answer Categories:

A. under 18 years old
B. 18-24 years old
C. 25-29 years old
D. 30-49 years old
E. 50 or older


3. Based on this cumulative percentage distribution (cpct), what
percent of prostitution arrestees are under 30 years old?
Answer Categories:

A. 2.0%
B. 28.4%
C. 44.4%
D. 92.0%


Correct Answers: Q1(A), Q2(A), and Q3(C)



LO3: Interpret graphical displays of data distributions

We engage in data reduction when we create data summary tables (e.g., frequency
distributions). Tables are useful; however, we can create other types of visuals that may better
convey our summaries and display data distributions.

For example, the following data distribution table describes the cause of death in all 2009 U.S.
homicides.


Cause of Death f cf pct cpct
Firearm 9,203 9,203 67.24 67.24
Knife 1,836 11,039 13.42 80.66
Other weapon 1,438 12,477 10.51 91.17
All other causes 1,209 13,686 8.83 100.00
n = 13,686 Σ = 100



The categories are not completely homogenous. The “firearm” category includes all types of
guns (e.g., shotguns, handguns, rifles). The “knife” category includes knives as well as other
types of cutting instruments (e.g., machetes). The “other weapon” category includes blunt
objects and other personal weapons (e.g., brass knuckles). The “all other causes” category
represents all other methods of death, including fire, drowning, strangulation, and poison,
among others. Despite the heterogeneous nature of the categories, you can determine from
this table that “death by firearm” is the leading cause of death for those murdered in the U.S.

Another way to convey this information is to present the data distribution in a bar chart.

Bar Chart: A visual data display in which the length of a bar represents the frequency of a
single variable attribute

Each “bar” in a bar chart represents the frequency of a particular attribute. The frequency (f)
for each cause of death presented in the table is displayed as a bar in the chart. The horizontal
axis (also known as the x-axis) displays the four “cause of death” categories. The vertical axis
(also known as the y-axis) displays numbers to help determine how many murders are
associated with each cause of death.





This visual depiction quickly and clearly conveys the fact that firearms were used more than any
other type of weapon or method to murder people in the U.S. during 2009. Many people find it
easier to interpret a graphic like this than a frequency table filled with numbers and statistics.

There are many different ways to graphically display the same data. For example, data
distributions can also be presented in a pie chart. Pie charts display the relative frequency of
each attribute, meaning that each “slice” is based on a standardized score.

Pie Chart: A visual data display in which each “slice” represents the frequency of a single
variable attribute relative to the frequencies of all other attributes

Percentage (pct) distribution values represent a standardized score for each frequency value.
Percentage distributions and pie chart values are numerically the same; they both represent
standardized frequency scores that sum to 100 percent. A pie chart depicting the percentage
distribution for cause of death is presented below.





Bar charts and pie charts are particularly useful for depicting the attribute values of qualitative
variables. Quantitative variables, which can be rank-ordered and are often based on numerical
values, are most often displayed using other types of visuals.

One type of graphical display often used to depict quantitative variable distributions is the
histogram. The histogram looks very similar to a bar chart except that there is no space
between each bar. The absence of space indicates that we are dealing with a quantitative
variable. The connectivity between the columns helps to illustrate the continuous,
rank-ordered nature of the data distribution.

Histogram: A visual data display in which a continuous series of connecting bars vary in
length to represent the frequency of quantitative variable attributes

Examine the data distribution table below that summarizes the age of known U.S. homicide
offenders in 2009.


Offender Age f cf pct cpct

≤19 2,241 2,241 20.81 20.81

20-29 4,476 6,717 41.56 62.37

30-39 1,923 8,640 17.86 80.22
40-49 1,164 9,804 10.81 91.03
50-59 617 10,421 5.73 96.76
60-69 217 10,638 2.02 98.79
≥70 131 10,769 1.22 100.00
n = 10,769 Σ = 100



The table provides useful information. By comparing frequencies and percentages, it tells us
that more homicide offenders fall within the 20-29 age range than in any other age category.
By examining the cumulative frequency and cumulative percent distributions, we know that
the vast majority (over 80 percent) of homicide offenders are under the age of 40.

We can also visually represent this data in a histogram. The histogram below allows us to
quickly assess the general nature of the distribution and identify the most and least commonly
occurring age categories. You may have noticed that data in graphical form is generally less
precise than data presented in table format. For example, it would be difficult to guess the
precise number of homicide offenders over the age of 70 using the histogram alone. However,
graphs allow most people to interpret data distributions more quickly and with less difficulty
than data presented in tables alone.




So far, you have learned techniques used to display and interpret univariate data distributions.
We have examined tables and graphs that illustrate frequency or percentage distributions of a
single variable, including both qualitative and quantitative variables.

In future chapters, you will learn how to construct and interpret bivariate (two variable) tables
and graphical displays. However, there is one commonly used bivariate graph that we will
introduce here. Line graphs are used to depict change in a linked series of data points. The data
points are connected by a line, which helps to illustrate continuity between each point and the
next.
Line Graph: A visual data display in which a line is used to connect a series of related data
points

Line graphs are commonly used to depict changes in a variable over time. For example, the line
graph below illustrates changes in the U.S. violent crime rate between 1960 and 2009.
Individual years are plotted along the x-axis (horizontal axis), and the violent crime rate is
plotted along the y-axis (vertical axis).



Although line graphs are commonly used to illustrate changes across time, these graphs can be
used to depict the relationship between any two quantitative variables.

Whether you should use a table or graph (or both) to present your data will depend on your
audience and the message being conveyed. Statisticians and other academics will usually prefer
to see findings presented in a detailed table, while practitioners and the general community



may find a clear graphical display more useful. As we discussed previously, a table offers more
precise data, but a graph provides faster and easier interpretations.

Practice Applications for Learning Objective 3 (LO3):


Select the appropriate method of graphical display for the following data distribution:

1. What type of graphical display is appropriate for visually representing the relative
frequency of different types of crimes committed in the following data distribution?

Type of Crimes f
Violent offenses 75
Property offenses 55
Drug offenses 150
Other offenses 20
n = 300

Answer Categories: A. bar chart  B. pie chart  C. histogram  D. line graph


2. What type of graphical display is appropriate for visually representing the relative
percentages of different types of punishments in the following data distribution?

Type of Punishment pct
Fine 20.0
Probation 43.5
Jail 24.5
Prison 10.0
Death 2.0
100.0%

Answer Categories: A. bar chart  B. pie chart  C. histogram  D. line graph


3. What type of graphical display is appropriate for visually representing the dollar loss from
street robberies in the following frequency distribution?

Dollar Loss f
< $100 550
$100-$199 400
$200-$299 350
$300-$399 325
$400-$499 200
$500-$999 450
$1,000-$2,000 150
> $2,000 375
n = 2,800

Answer Categories: A. bar chart  B. pie chart  C. histogram  D. line graph




4. What type of graphical display is appropriate for showing the annual number of murders in
the U.S. between 2000 and 2010?

Year Murders
2000 15,586
2001 16,037
2002 16,229
2003 16,528
2004 16,148
2005 16,740
2006 17,309
2007 17,128
2008 16,465
2009 15,339
2010 14,748

Answer Categories: A. bar chart  B. pie chart  C. histogram  D. line graph
Correct Answers: Q1(A), Q2(B), Q3(C), and Q4(D)




LO4: Judge the spread and shape of data distributions

Frequency tables and graphical displays of data distributions allow us to begin to interpret our
data and draw conclusions. How data are distributed across a variable's attributes conveys
important information, particularly for quantitative variables. It is essential that you know how
to interpret this information.

Frequency tables display the spread of data across categories, while graphs display the shape of
data distributions.

Actually, the terms “spread” and “shape” refer to the same underlying concept. Both describe
the distribution of data, or the way that data “cluster” within and among categories. Different
techniques can be used to determine if data are distributed in an equal way or if they are
uniquely clustered. Two simple methods for assessing the spread and shape of data
distributions are presented below.

Assess the spread of data distributions

Let’s say that you are working with “age” data. This data was measured using the six categories
listed in the table below. If the data were equally distributed, there would be no or very little
difference between the frequency and percentage of cases found within each category. The
table below displays a distribution that is almost perfectly equal.






Age f cf pct cpct
≤19 1,795 1,795 16.67 16.67
20-29 1,795 3,590 16.67 33.34
30-39 1,795 5,385 16.67 50.01
40-49 1,795 7,180 16.67 66.68
50-59 1,794 8,974 16.65 83.33
≥60 1,795 10,769 16.67 100.00
n = 10,769 Σ = 100.00

Of course, we would not expect to find this hypothetical distribution when examining real data.
We expect to find differences across categories. In fact, “equal” data distributions are not
“normal” data distributions. Details about normal data distributions and their importance will
be presented in future chapters. For now, you will learn how to determine whether a
distribution’s spread and shape possess a specific element of normal distributions: symmetry.

In general, symmetrical data distributions will have about 50% of cases falling within the first
half of variable categories (and, thus, 50% of cases falling within the last half of categories).
Cumulative percentage (cpct) scores in frequency distribution tables help us to determine if this
is true. The location of specific percentiles within variable categories is used to determine if
data are spread out across categories, or if cases are clustered at a specific end of the scale.

For example, if the 50th percentile in the cumulative percentage column is located within a
central variable category, then we have our first piece of evidence to suggest that the
distribution may be symmetrical. The cumulative percentage column in the table above shows
that 50.01% of all cases fall within the first half (n = 3) of the variable categories, suggesting
distribution symmetry.

Percentile: A numerical point in a cumulative percentage distribution, between 0 and 100,
with a specific percentage of cases that fall at or below it

Now compare the table above to the table below. Even though the case frequencies within
individual categories are very different, the cumulative percentage column shows an important
similarity. Like the previous table, the table below shows that about half (50.09%) of all cases
fall within the first half of the variable categories.

Age f cf pct cpct
≤19 500 500 4.64 4.64
20-29 1,950 2,450 18.11 22.75
30-39 2,944 5,394 27.34 50.09
40-49 3,003 8,397 27.88 77.97
50-59 1,875 10,272 17.41 95.38
≥60 497 10,769 4.62 100.00
n = 10,769 Σ = 100.00



It is also important to note that the frequency of cases in the two center categories closely
mirror each other (27.34% vs. 27.88%), as do the second and fifth categories (18.11% vs.
17.41%), and the first and sixth categories (4.64% vs. 4.62%). These similarities are highlighted
in the table below. For this reason, you should recognize that this data distribution – or the
“spread” of the data – is symmetrical. The cases tend to be clustered toward the center
categories of the distribution, with cases decreasing in comparable frequency as we move away
from this central cluster.

Age f cf pct cpct
≤19 500 500 4.64 4.64
20-29 1,950 2,450 18.11 22.75
30-39 2,944 5,394 27.34 50.09
40-49 3,003 8,397 27.88 77.97
50-59 1,875 10,272 17.41 95.38
≥60 497 10,769 4.62 100.00
n = 10,769 Σ = 100.00

Actual sample data distributions rarely reflect such a high degree of symmetry, but you can
assess symmetry in any frequency distribution table by inspecting the:

1. Cumulative percentage (cpct) distribution column – does the 50th percentile fall toward
the middle of the attribute scale (i.e., categories)?

2. Percentage (pct) distribution column – do case frequencies in the lower half of the
distribution mostly mirror case frequencies in the upper half of the distribution?

If the answers to these questions are yes, then you are working with a symmetrical data
distribution.
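The first check, locating the category that contains the 50th percentile, can be sketched as a short scan down the cpct column (the helper name and list form are hypothetical; the data are the age distributions from this chapter):

```python
def category_at_percentile(categories, cpct, percentile=50):
    # Return the first category whose cumulative percentage
    # reaches the given percentile
    for category, cp in zip(categories, cpct):
        if cp >= percentile:
            return category
    return categories[-1]

ages = ["<=19", "20-29", "30-39", "40-49", "50-59", ">=60"]

# Skewed homicide-offender data: the 50th percentile falls in a low category
print(category_at_percentile(ages, [20.81, 62.37, 80.22, 91.03, 96.76, 100.0]))  # 20-29

# Near-symmetrical data: the 50th percentile falls in a central category
print(category_at_percentile(ages, [4.64, 22.75, 50.09, 77.97, 95.38, 100.0]))   # 30-39
```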

Let’s look at a real frequency distribution table with an unsymmetrical data distribution. UCR
data on the age of homicide offenders is presented below. You can see that the 50th percentile
falls much closer to the lower end of the scale (somewhere within the 20-29 age category).

Age f cf pct cpct
≤19 2,241 2,241 20.81 20.81
20-29 4,476 6,717 41.56 62.37 ← the 50th percentile falls somewhere below the upper limit of this category
30-39 1,923 8,640 17.86 80.22
40-49 1,164 9,804 10.81 91.03
50-59 617 10,421 5.73 96.76
≥60 348 10,769 3.24 100.00
n = 10,769 Σ = 100.00



You may also notice that category frequencies on the lower end of the scale do not mirror
those on the upper end of the scale. When symmetry is absent from data distributions, we are
working with skewed data.

Skewed data: A data distribution characterized by asymmetry in the spread of values across
variable attributes

Data distributions with values that cluster toward the bottom end of a scale (i.e., the 50th
percentile falls toward lower values) have a positive skew. Conversely, when values cluster
toward the upper end of a scale (i.e., the 50th percentile falls toward higher values), the data
have a negative skew. The reason for these labels becomes clear when examining graphical
displays (i.e., shapes) of positively and negatively skewed data distributions.


Assess the shape of data distributions

As mentioned previously, graphs can be used to assess the shape of data distributions. As you
know, histograms are used to display quantitative univariate data distributions.

If our offender age distribution were relatively symmetrical, then it might look like the histogram below.



Notice that each side looks almost identical to the other. This uniform shape represents a symmetrical distribution. However, the two histograms below have unsymmetrical shapes.

The histogram on the right represents actual homicide offender age data. The shape of this
histogram indicates that the data are positively skewed and is sometimes referred to as a right
skew. This is because the tail of the distribution looks like it is being pulled toward the higher
end of the scale (i.e., toward the right).

The histogram on the left represents hypothetical homicide offender age data. The shape of
this histogram indicates that the data are negatively skewed and is sometimes referred to as a



left skew. This is because the tail of the distribution looks like it is being pulled toward the
lower end of the scale (i.e., toward the left).




Outliers can cause skewed data. Outliers are cases with values that are distinctly separate from most of the other case values.

Outlier: A case value that is numerically distant from the majority of case values

An outlier or outliers on the upper end of a scale will produce a positively skewed data distribution, while those on the lower end of a scale will produce a negatively skewed distribution.

Practice Applications for Learning Objective 4 (LO4):


For each of the following data distributions and graphs, identify the correct statement about
them.


Q1. Which statement is true about this data distribution?

A. The data are positively skewed.
B. The data distribution is symmetrical.
C. The data are negatively skewed.
D. The 50th percentile falls in the 2nd category (age 18-35)




Q2. Which statement is true about this data distribution?

A. The data are positively skewed.
B. The data distribution is symmetrical.
C. The data are negatively skewed.
D. The 50th percentile falls in the 2nd category (1-2 arrests)






Q3. Which statement is true about the data distribution in
this graph?

A. The data are positively skewed.
B. The data distribution is symmetrical.
C. The data are negatively skewed.
D. The data are skewed toward the right.


Q4. Which statement is true about the data distribution in
this graph?

A. The data are positively skewed.
B. The data distribution is symmetrical.
C. The data are negatively skewed.
D. The data are skewed toward the right.


Q5. Which statement is true about the data distribution in
this graph?

A. The data are equally distributed.
B. The data distribution is symmetrical.
C. The data are negatively skewed.
D. The data are skewed toward the right.

Correct Answers: Q1(C), Q2(A), Q3(B), Q4(C), and Q5(D)




Review

Use these reflection questions and directed activities to master the material in this chapter.

What?

Can you define the critical concepts covered in this chapter? Describe each of these concepts in
your own words and give at least two examples of each.

- Data reduction
- Raw data
- Univariate analysis
- Frequency distributions
- Percentage distributions
- Cumulative distributions
- Bar chart / Pie chart / Histogram / Line graph
- Percentile
- Skewed data
- Outlier


How?

Answer the following questions and complete the associated activities.

How can you produce univariate displays of data distributions from raw data?

Create a basic frequency table for the following data on number of arrests:
0,0,0,1,0,0,2,0,5,1,0,0,1,1,0,2,0,0,3,0.

How can you calculate values that describe data distributions?

For your frequency table, calculate a cumulative frequency, percentage, and
cumulative percentage column.

How do you interpret graphical displays of data distributions?

Give an example of variables you would want to display in a bar chart, pie chart,
histogram, and line graph. What information would you display (e.g., frequencies,
percentages)?

How can you judge the spread and shape of distributions?

List the types of information you would observe in tables and graphs to make these
judgments.





When?

Answer the following questions.

• When do we want to produce frequency tables?

• When should you calculate descriptive data distribution values?

• When should we use graphical displays of data distributions instead of tables?

• When do data become skewed?



Why?

Answer the following question.

• Why does knowing this information about data distributions and graphical displays help
you to recognize liars, cheaters, and other misusers of statistical data?








Analyzing Criminological Data Chapter 5

CHAPTER 5:
Describing the “Typical” Case

This chapter focuses on statistical measures that identify the typical case in the distribution of a
variable. You will learn how to calculate, interpret, and properly apply these summary
measures of central tendency and typicality. Mastering the learning objectives in this chapter
will increase your training as an informed consumer of statistical data on crime and criminal
justice practices.

Concepts

- Descriptive statistics
- Measures of central tendency
- Mode
- Median
- Mean

Learning Objectives (LO)

LO1: Calculate measures of central tendency
LO2: Select the most appropriate measure of central tendency
LO3: Judge the accuracy of central tendency measures based on data distributions
LO4: Infer a distribution’s shape based on multiple measures of central tendency


Introduction

The skills you have acquired so far lay the groundwork for calculating statistics. You know how
to build a solid research framework by asking different types of research questions. You know
how to avoid the GIGO problem by creating strong measures of research concepts and how to
develop appropriate data collection strategies. You also know how to organize and summarize
your data in frequency tables.

Remember, our ultimate goal is to solve problems by answering research questions. Statistics
are tools that allow us to answer questions with data. However, we cannot calculate statistics
until we have accomplished the tasks listed above. And, if we do not do these things well, then
our statistics will be meaningless. The process of calculating and interpreting statistics is a lot
like building a skyscraper; the foundation must be carefully planned and very strong or anything
stacked on top of it will do people more harm than good.


Once our foundation is built, we calculate descriptive statistics to learn more about the variables in our study. Descriptive statistics are the most basic type of statistic. They simply “describe” data features and help us to know more about the particular characteristics of the variables in the analysis.

Descriptive statistics: Statistics that summarize descriptive information about the variables in a study.

Frequency tables and graphical displays provide a very basic level of descriptive information. You have learned how to simplify masses of raw data so that they can be easily interpreted in table or graphic formats. In this chapter, you will learn additional techniques for interpreting, presenting, and describing characteristics of univariate data distributions by calculating descriptive statistics.

Descriptive statistics allow us to describe the “typical” case in a sample. For example, we might
want to know:

1. How many times does the typical police officer use force against suspects each year?
2. How typical is it for prosecutors to use plea bargains to reduce felony charges to
misdemeanors?
3. How long is the typical sentence for those convicted of committing burglary?

Another term for “typical” is “average.” In statistics, we use measures of central tendency to calculate averages. These statistical measures produce values that describe the middle point or center of data distributions. In other words, these values represent where scores tend to cluster along a scale.

Measures of central tendency: Statistics that describe the center point of a data distribution

We begin by learning how to calculate different types of averages to answer research questions like the ones above.

LO1: Calculate measures of central tendency

The three most common measures of central tendency are the mode, median, and mean.

The Mode

The mode (Mo) is the simplest measure of central tendency. It does not require any mathematical calculations. To find the mode, simply find the most frequently occurring score or attribute in the data distribution.

Mode: The most frequently occurring score or category in a data distribution; symbolized using Mo




Let’s say that you were working with sample data that included a measure of “number of traffic
tickets.” Below are the responses you obtained from a random sample of 15 students.


Number of tickets = 0, 0, 0, 1, 1, 1, 1, 1, 2, 2, 2, 3, 3, 4, 6



The data distribution above includes three 0s, five 1s, three 2s, two 3s, one 4, and one 6. The
mode for this variable is 1 because this is the most frequently occurring score in the
distribution. It occurs five times, more often than any other score in the distribution.

It is even easier to find the mode when the data have been summarized in a frequency
distribution. Simply look for the largest frequency category.


Traffic tickets    f
0                  3
1                  5   ← the modal category: 1 traffic ticket is the most frequently occurring score
2                  3
3                  2
4 or more          2
n = 15




Additional examples of modes found within data distributions are presented below. The modal
categories are underlined. Although these distributions have only one mode, it is possible to
have multiple modal categories. For example, if the $10-25k and the $25.1-$50k categories
each contained 4 cases, then Mo = $10-25k, $25.1-$50k.


Gender     f
Female     4
Male       6
n = 10
Mo = Male

Marijuana Use    f
No               3
Yes              7
n = 10
Mo = Yes

Income         f
<$10k          2
$10-$25k       4
$25.1-$50k     3
$50.1-$75k     0
>$75k          1
n = 10
Mo = $10-$25k
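Both approaches, reading the largest frequency from a table and counting raw scores, can be sketched in Python (using the income table above and the earlier traffic ticket data):

```python
# Mode from a frequency table: the category (or categories) with the largest f.
freqs = {"<$10k": 2, "$10-$25k": 4, "$25.1-$50k": 3, "$50.1-$75k": 0, ">$75k": 1}

top = max(freqs.values())                               # largest frequency (4)
modes = [cat for cat, f in freqs.items() if f == top]   # keep every tied category
print(modes)   # ['$10-$25k']  -> Mo = $10-$25k

# Mode from raw (ungrouped) scores: let collections.Counter do the counting.
from collections import Counter
tickets = [0, 0, 0, 1, 1, 1, 1, 1, 2, 2, 2, 3, 3, 4, 6]
counts = Counter(tickets)
print(counts.most_common(1))   # [(1, 5)]  -> Mo = 1 traffic ticket
```

Keeping every tied category in `modes` is what lets this sketch report multiple modes when a distribution is bimodal.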






The Median

The median (Md) is a measure of central tendency that splits a data distribution precisely in half. It represents the 50th percentile, with 50% of cases falling below it and 50% of cases falling above it.

Median: A value that represents the middle score in a data distribution; symbolized using Md

To find the median, you must calculate the median position using the following three-step process:

Step 1: Rank order cases
Arrange the data from the lowest to the highest value. For example, if your dataset contains the
scores {2, 8, 4, 1, 7}, they should be rank ordered {1, 2, 4, 7, 8}.

Step 2: Calculate the median position (MP)
The formula for calculating the median position is:

MP = (n + 1)/2, where n = total sample size

For the data distribution {1, 2, 4, 7, 8}, n = 5. So, MP = (5 + 1)/2 = 6/2 = 3.

For the data distribution {1, 2, 2, 4, 7, 8}, n = 6. So, MP = (6 + 1)/2 = 7/2 = 3.5.

Step 3: Locate the median value
Once you have calculated the median position, you can locate the median value (Md). For the
first data distribution {1, 2, 4, 7, 8}, MP = 3. This means that the median score will be the value
of the third case. In this data distribution, the third case has a value of 4. Therefore, Md = 4.
Half of the cases in the sample fall below the median (4), and half fall above.

{1, 2, 4, 7, 8} ---- The third (middle) case in the data distribution (4) is the median value

The example above uses an odd number of cases. If dealing with an even number of cases, one
additional calculation is required. For the data distribution {1, 2, 2, 4, 7, 8}, MP = 3.5. This
means that the median falls exactly between the third and fourth case. The median value will
be the average value of the third and fourth case. In this data distribution, the third case has a
value of 2 and the fourth case has a value of 4. To find the average, we add them together and
divide the total by 2.

Therefore, Md = (2 + 4)/2 = 6/2 = 3. Half of the cases in the sample fall below the median (3),
and half fall above it.
{1, 2, 2, 4, 7, 8}
The average of the third and fourth case
in the data distribution is equal to 3
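The three steps above translate directly into code. This Python sketch uses the median-position formula and handles both odd and even sample sizes:

```python
def median(scores):
    """Median via the median-position formula MP = (n + 1) / 2."""
    ranked = sorted(scores)              # Step 1: rank order the cases
    n = len(ranked)
    mp = (n + 1) / 2                     # Step 2: calculate the median position
    if mp == int(mp):                    # odd n: MP points at a single case
        return ranked[int(mp) - 1]       # -1 because lists index from 0
    lo = ranked[int(mp) - 1]             # even n: average the two middle cases
    hi = ranked[int(mp)]
    return (lo + hi) / 2

print(median([2, 8, 4, 1, 7]))       # 4    (odd n, MP = 3)
print(median([1, 2, 2, 4, 7, 8]))    # 3.0  (even n, MP = 3.5)
```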




Medians can also be calculated from frequency distributions of data grouped together in
numerical intervals (e.g., 2-5, 10-20), but the median “value” in these situations will represent a
category rather than an exact value. For example, the age distribution below contains 936
cases. To calculate the median position, we use the same formula for MP.

MP = (936 + 1)/2 = 937/2 = 468.5.


Age      f     cf
<21      298   298
21-40    301   599   ← the median position of 468.5 falls somewhere within this (second) category
41-60    243   842
>60       94   936
n = 936




Therefore, the median value will fall exactly between the values of the 468th and the 469th case.
Unfortunately, we do not have the exact values for these cases, but we can see what category
they fall within. The cumulative frequency column (cf) tells us that these cases fall somewhere
within the second category. As such, a rough approximation of the median for these data would be Md ≈ 21-40, suggesting that about half of the cases fall within and below this category, and half of the cases fall within and above this category.

You may have noticed that this rough approximation is much like a previous technique you
used to find the middle case in a frequency distribution. In Chapter 4, you learned to calculate
percentage (pct) and cumulative percentage (cpct) columns for a frequency distribution table.
These columns are included in the table below.

Age      f     cf     pct     cpct
<21      298   298    31.8     31.8
21-40    301   599    32.2     64.0   ← the 50th percentile falls somewhere within this (second) category
41-60    243   842    26.0     90.0
>60       94   936    10.0    100.0
n = 936               Σ = 100


If the cf and the cpct columns have been calculated, you can estimate the median by simply locating the category that contains the 50th percentile. Again, we find that Md ≈ 21-40, with about half of the cases falling within and below this category, and half of the cases falling within and above this category.


The Mean

The mean is a measure of central tendency based on Mean: An average based on
every score in a data distribution. To calculate a mean, every score in a data distribution;
you simply add up all scores in a data distribution and symbolized using x (x bar)
divide this number by the total number of cases.

The mean formula is:

, where Σ x = the sum of all scores for a given variable, and


n = the total sample size.

Let’s say that we want to know what the average auto theft rate is for the seven largest U.S.
cities. These cities and their associated 2011 auto theft rates (i.e., the number of auto thefts per
100,000 people) are presented in the table below.

Auto Theft Rate (2011, per 100,000 people)
114.9 (New York)
406.5 (Los Angeles)
719.2 (Chicago)
572.9 (Houston)
486.5 (Philadelphia)
515.3 (Phoenix)
465.3 (Las Vegas)

Rates: Criminal justice researchers often use rates to estimate risk of crime and victimization. These rates are standardized by population size so that we can accurately compare cities of differing sizes. To calculate a crime rate, divide the number of crimes by the population and multiply this number by 100,000.

To calculate the average auto theft rate for these seven cities, add up all rates (i.e., this is the
sum of all scores), and divide this number by the total sample size (i.e., the total number of
cities).

x̄ = (114.9 + 406.5 + 719.2 + 572.9 + 486.5 + 515.3 + 465.3)/7 = 3280.6/7 = 468.7.

This tells us that the average auto theft rate across these seven cities is 468.7 per 100,000
people.
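The sidebar’s rate formula and this mean calculation can be sketched together in Python. The population and theft count passed to `crime_rate` at the end are hypothetical figures, used only to illustrate the formula:

```python
def crime_rate(crimes, population):
    """Crimes per 100,000 people: (crimes / population) * 100,000."""
    return crimes / population * 100_000

# 2011 auto theft rates for the seven cities listed above.
rates = [114.9, 406.5, 719.2, 572.9, 486.5, 515.3, 465.3]
mean_rate = sum(rates) / len(rates)      # x-bar = (sum of all scores) / n
print(round(mean_rate, 1))               # 468.7 per 100,000 people

# Example rate calculation with hypothetical crime and population figures:
print(round(crime_rate(9_400, 8_175_000), 1))
```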

In this example, we are only working with seven scores. However, we may sometimes need to
calculate a mean from a larger dataset. If the data are arranged in a frequency table, you can
use a shortcut to calculate the mean.

The table below presents hypothetical data on the number of victimizations experienced by
residents in an urban area. There are 2,500 individual scores summarized in this table. Rather
than add up 2,500 individual scores, substitute (Σ fx) for (Σ x) in the numerator of our mean

formula. This tells us to multiply each score by its associated frequency before summing the values.


Victimizations    f
0             1,487
1               507
2               225
3               112
4                60
5               109
n = 2,500

x̄ = ((0 x 1487) + (1 x 507) + (2 x 225) + (3 x 112) + (4 x 60) + (5 x 109))/2500
  = (0 + 507 + 450 + 336 + 240 + 545)/2500
  = 2078/2500
  = 0.83



This measure tells us that, on average, each person experienced 0.83 victimizations (less than
one victimization per person).
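The Σfx shortcut can be sketched in Python using the victimization table above:

```python
# Victimization counts and their frequencies, from the table above.
scores = [0, 1, 2, 3, 4, 5]
freqs = [1487, 507, 225, 112, 60, 109]

n = sum(freqs)                                        # 2,500 cases
sum_fx = sum(x * f for x, f in zip(scores, freqs))    # sum of f*x = 2,078
mean = sum_fx / n
print(round(mean, 2))   # 0.83 victimizations per person
```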

Practice Applications for Learning Objective 1 (LO1):


Identify the correct answer about the mode, median, and mean in the following questions:


Number of Pretrial Motions in 20 Trials:
1,0,1,0,1,2,3,2,1,1
1,2,3,2,1,5,1,15,1,1

Q1. What is the mode in this data on pretrial motions?
A. 0 motions
B. 1 motion
C. 2 motions
D. 3 motions
E. 5 motions


Q2. What is the modal number of arrests in this frequency distribution?

A. 0 arrests
B. 1-2 arrests
C. 3-4 arrests
D. > 5 arrests






Monthly caseload of 7 probation and parole officers:
75, 50, 55, 40, 80, 70, 60

Q3. What is the median caseload in this raw data?
A. 75 cases
B. 60 cases
C. 40 cases
D. 55 cases



Days from Arrest
to Conviction     f     cf    cpct
< 30 days         10    10      5
31-60 days        20    30     15
61-90 days        80   110     55
91-120 days       50   160     80
> 120 days        40   200    100
n = 200                       100%

Q4. What is the median category for the number of days from arrest to conviction in this frequency distribution?
A. < 30 days
B. 31-60 days
C. 61-90 days
D. 91-120 days



Monthly caseload of 7 probation and parole officers:
75, 50, 55, 30, 80, 70, 60

Q5. What is the mean caseload for this sample of probation/parole officers?
A. 75 cases
B. 60 cases
C. 40 cases
D. 55 cases



Number of Executions (n = 80 Countries)

Executions    f
0            30
1            20
2            15
3            10
4             5
n = 80

Q6. What is the mean number of executions in these 80 countries?
A. 1.0 executions
B. 1.25 executions
C. 1.5 executions
D. 2.0 executions
Correct Answers: Q1(B), Q2(A), Q3(B), Q4(C), Q5(B), and Q6(B)







LO2: Select the most appropriate measure of central tendency

You now know how to calculate three measures of central tendency. However, you must know
which measure – the mean, median, or mode – will provide the most accurate description of
the typical or average case for the particular variables in your sample. To know this, you must
know the strengths and limitations of each of these descriptive statistics.

The Mode

1. Strengths of the mode:
The mode is the most basic measure of central tendency. Sophisticated mathematical
calculations are not needed to compute the mode. Once data are summarized in a frequency
table, the category or score with the most cases represents the modal value.

Because the mode simply represents the most commonly occurring score or category in a data
distribution, it can be used to analyze data collected at any level of measurement (i.e., nominal,
ordinal, or interval/ratio). It is the only measure of central tendency that can be used to describe
a nominal level variable.

Unlike the median and mean, more than one mode is possible in a data distribution. This allows the mode to describe multimodal, often bimodal (two-mode), distributions. For example,
public survey data on abortion and death penalty attitudes often reveal bimodal distributions
because most people either “strongly oppose” or “strongly support” these practices and are
less likely to be “neutral” on these issues. While two modes accurately represent the data, the
median would fail to correctly summarize these types of data distributions (see frequency table
below).


Attitude/Opinion     f     cf
Strongly support    35     35
Support             12     47
Neutral              8     55
Oppose              10     65
Strongly oppose     35    100
n = 100

Mo = Strongly support, Strongly oppose
Md = Neutral (MP = (100 + 1)/2 = 101/2 = 50.5, so the median falls in the Neutral category)


Another desirable characteristic of the mode is that it is unaffected by extreme scores (i.e.,
outliers).





2. Limitations of the mode:
The mode is the most frequently occurring category in a variable’s distribution, but it does not
necessarily represent the majority of scores in this distribution. In fact, the mode tells us very
little about how all scores are distributed across a variable’s categories. It cannot tell us
whether there are large or small differences between the modal category and the other
categories for a variable.

The mode is also sensitive to interval sizes. When dealing with quantitative variables, we can
manipulate the mode by collapsing or expanding categories. In the example below, the mode
changes from the category 21-40 to the >40 category when two intervals are combined.

Original intervals (Mo = 21-40):

Age      f
<21      298
21-40    301
41-60    243
>60       94
n = 936

Collapsing 41-60 and >60 into a single >40 interval creates a new modal category (Mo = >40):

Age      f
<21      298
21-40    301
>40      337
n = 936


The Median

1. Strengths
A median value is easy to interpret. It represents the location or point along a scale at which 50% of cases fall above it and 50% of cases fall below it (the 50th percentile).

Unlike the mode, the median is based on the distribution of all cases within a dataset.

Medians can be computed for ungrouped data (i.e., individual scores that are not grouped into
categories) and grouped data (i.e., data combined into intervals such as 0-19, 20-29, etc.).

The median is unaffected by extreme scores (i.e., outliers).

2. Limitations
Since scores must be ordered to compute a median, it can only be computed for variables measured on an ordinal, interval, or ratio scale. A median cannot be calculated for a nominal level variable.

Medians calculated for grouped or interval data are gross approximations of a distribution’s
middle rather than a precise location along a scale.


Although extreme scores at the upper or lower ends of a scale do not bias the median (since it
is based on a score or scores falling in the middle of the scale), it does not fully represent all
scores in a dataset. In other words, the median is only representative of the middle score in a
distribution, so it does not tell us anything about the distribution of the other scores.

The Mean

1. Strengths
Unlike the mode – which looks only at common frequencies, and the median – which looks only
at the value for the middle case(s), the mean is calculated based on each individual case score
in a distribution. So, every individual score is used in its computation.

The mean is the most popular measure of central tendency for interval and ratio level variables.

2. Limitations
Since numerical information for each case is required to calculate a mean, it can only be
computed for variables measured on an interval or ratio scale. A mean cannot be calculated for
a nominal or ordinal level variable.

The mean is sensitive to outliers because each score contributes to the value of the mean. For
example, look at the following age distribution of college freshman:

Age = 18, 18, 18, 18, 19, 19, 20, 20, 23, 67

The outlier in this distribution (67) creates a strong positive (right) skew in the data. The mean
for this distribution is 24. If the outlier were removed from this dataset, the mean would be
19.22 – a much more accurate reflection of the typical age for college freshman in this data.
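The pull of a single outlier is easy to demonstrate in Python with the freshman ages above:

```python
ages = [18, 18, 18, 18, 19, 19, 20, 20, 23, 67]

with_outlier = sum(ages) / len(ages)         # mean with the 67-year-old included
without = sum(ages[:-1]) / len(ages[:-1])    # mean after dropping the outlier

print(round(with_outlier, 2))    # 24.0  -- dragged upward by the outlier
print(round(without, 2))         # 19.22 -- closer to the typical freshman
```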

Measures of Central Tendency: Strengths

Mode:
• No sophisticated calculations required
• Can be used for any level of measurement
• Can better describe multi-modal distributions
• Unaffected by extreme scores

Median:
• Easy to interpret (50th percentile)
• Based on the distribution of all cases
• Can be calculated for both ungrouped and grouped data
• Unaffected by extreme scores

Mean:
• Reflects the scores of all cases in its numerical value
• A popular measure of central tendency


Measures of Central Tendency: Limitations

Mode:
• Does not explain how similar or different the modal category is to all other categories
• Can be manipulated by changing interval sizes

Median:
• Cannot be used to describe nominal data
• Provides only gross approximations of the middle point for interval data
• Does not fully represent all scores in a dataset

Mean:
• Cannot be used to describe nominal or ordinal data
• Can be heavily biased by extreme outliers


General rules for selecting measures of central tendency

When deciding which measure of central tendency to use, consider (1) the variable’s level of
measurement and (2) the nature of the data distribution. Here are some general rules for
selecting the most appropriate measure of central tendency to describe your data.

• Use the mode to describe qualitative variables and distributions that are clearly multi-
modal.

• Use the median to describe quantitative variables that have a skewed data distribution
(i.e., datasets that include extreme outliers) and ordinal level variables.

• Use the mean to describe interval and ratio level variables that have a relatively non-
skewed or symmetrical distribution.

Practice Applications for Learning Objective 2 (LO2):


Select the most appropriate measure of central tendency (mode, median, or mean) in the
following data distributions:

Type of Offender (n=10):
robber, drug, burglar, robber, drug, drug, drug, robber, burglar, drug

Q1. Most appropriate measure of central tendency for this data distribution?
A. Mode
B. Median
C. Mean

Number of Pretrial Motions in 20 Criminal Trials:
1,0,2,0,1,2,3,2,0,1
0,2,3,0,1,2,3,1,3,15

Q2. Most appropriate measure of central tendency for this data distribution?
A. Mode
B. Median
C. Mean



Number of
Charges Filed    f
1              300
2-3            260
4-5            240
> 5             50
n = 850

Q3. Most appropriate measure of central tendency for this frequency distribution?
A. Mode
B. Median
C. Mean

Number of
Delinquent Friends    f
0                    98
1                   135
2                   215
3                   214
4                   136
5                    97
n = 895

Q4. Most appropriate measure of central tendency for this frequency distribution?
A. Mode
B. Median
C. Mean

Correct Answers: Q1(A), Q2(B), Q3(B), and Q4(C)

LO3: Judge the accuracy of central tendency measures based on data distributions


A reported measure of central tendency should accurately describe a data distribution. The goal
in selecting a particular measure of central tendency is to report the statistic that best reflects
the typical or average case in your data. You have learned which measures should be used for
particular types of data distributions (e.g., mode for qualitative or multimodal distributions,
medians for skewed quantitative data and ordinal variables, means for interval and ratio level
variables with symmetrical distributions).

You should also be able to articulate the consequences of selecting the “wrong” type of
statistic. These consequences were implied in the previous section. However, we will take a
closer look at the specific impact that different distributions have on the results of univariate
central tendency analyses. Since the mode is the only appropriate measure of central tendency
for qualitative data distributions, we will focus only on quantitative data distributions.




Symmetrical Bell-Shaped Distributions

Measures of the mode, median, and mean will each produce exactly the same value if you are
analyzing a perfectly symmetrical bell-shaped distribution where cases cluster toward the
middle. Consider the shape and spread of the data distribution below.

Attributes      f       cf
1             900      900
2           1,500    2,400
3           3,000    5,400
4           1,500    6,900
5             900    7,800
n = 7,800

Mo = 3
Md = 3
x̄ = 3
The modal category is 3.
• The category that contains the most cases (3,000) has a value of 3.

The median is equal to 3.
• MP = (7800 + 1)/2 = 3900.5, this falls within the third category.

The mean is equal to 3.
• x̄ = ((1 x 900) + (2 x 1500) + (3 x 3000) + (4 x 1500) + (5 x 900))/7800
= (900 + 3000 + 9000 + 6000 + 4500)/7800
= 23400/7800
= 3

All measures of central tendency values are the same. Therefore, it does not matter which
measure of central tendency is used when working with symmetrical bell-shaped distributions. It
is considered “best practice” to report the mean in cases like these since it is the most
commonly recognized measure of central tendency. However, you will rarely, if ever, work with
perfectly symmetrical distributions. So, let’s consider how these statistics are affected by
skewed data distributions.
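All three measures can be computed from the grouped table above; this Python sketch confirms that they agree for a symmetrical bell-shaped distribution:

```python
# Grouped frequency data from the symmetrical table above.
values = [1, 2, 3, 4, 5]
freqs = [900, 1500, 3000, 1500, 900]

n = sum(freqs)                                   # 7,800 cases
mode = values[freqs.index(max(freqs))]           # category with the largest f

# Median: walk the cumulative frequency out to the median position.
mp = (n + 1) / 2                                 # 3,900.5
cum = 0
for v, f in zip(values, freqs):
    cum += f
    if cum >= mp:
        median = v                               # MP falls in this category
        break

mean = sum(v * f for v, f in zip(values, freqs)) / n   # 23,400 / 7,800

print(mode, median, mean)   # 3 3 3.0 -- all three measures agree
```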

Positively Skewed Distributions

Data distributions that are positively skewed (or right skewed) contain outliers that pull the tail
end of the distribution toward the right (upper) end of the scale, while most cases cluster
toward the bottom end of the scale.

Examine the skewed data distribution below. There is a strong clustering of cases in the first 3
to 4 categories and relatively fewer cases at the far right end. Data distributions with this type


of minor to moderate skewness contain more cases that are closely clustered together and no or very few extreme outliers pulling the distribution in a particular direction.


Minor to Moderate Positive Skew


Attributes      f       cf
1           1,500    1,500
2           3,000    4,500
3           1,500    6,000
4           1,200    7,200
5             600    7,800
n = 7,800

Mo = 2
Md = 2
x̄ = 2.53

Let’s calculate each measure of central tendency for this distribution.

The modal category is 2.
• The category that contains the most cases (3,000) has a value of 2.

The median is equal to 2.
• Median Position (MP) = (7800 + 1)/2 = 3900.5, so the median (Md) falls within the
category with the value of 2.

The mean is equal to 2.53.
• x̄ = ((1 x 1500) + (2 x 3000) + (3 x 1500) + (4 x 1200) + (5 x 600))/7800
= (1500 + 6000 + 4500 + 4800 + 3000)/7800
= 19800/7800
= 2.53

Notice that the mode and median values are the same. This is one indicator that the data have
only a minor or moderate level of skew. Unlike the mean, the median is not sensitive to
extreme outliers. While the majority of scores cluster around 2, the mean is pulled to the right
by the other scores at the upper end of the scale. This is why the mean value is higher than
both the mode and the median. When analyzing data with minor to moderate positive skew,
the mean will somewhat overestimate the value of the average case.

Data distributions with extreme skew have two distinct qualities:

1. a more dispersed distribution of cases across categories (i.e., less tightly clustered), and
2. a greater number of outliers or outliers with more extreme values.


Compare the data distribution below to the previous distribution. You should notice three
things. First, the cases below are still somewhat clustered in the first 3 to 4 categories,
but the clustering is less distinct (i.e., cases are more equally distributed across categories).
Second, there are more extreme scores (e.g., 12’s) representing outliers. Third, far more cases
fall within the distribution’s “tail,” or beyond the fourth category.

Extreme Positive Skew

Attributes      f       cf
1           1,200    1,200
2           2,200    3,400
3           1,300    4,700
4           1,000    5,700
7             900    6,600
9             700    7,300
12            500    7,800
n = 7,800

Mo = 2
Md = 3
x̄ = 4.12


Let’s calculate each measure of central tendency for this distribution.

The modal category is 2.
• The category that contains the most cases (2,200) has a value of 2.

The median is equal to 3.
• MP = (7800 + 1)/2 = 3900.5, so the median (Md) falls within the category with the value
of 3.

The mean is equal to 4.12.
• x̄ = ((1 x 1200) + (2 x 2200) + (3 x 1300) + (4 x 1000) + (7 x 900) + (9 x 700) + (12 x 500))/7800
= (1200 + 4400 + 3900 + 4000 + 6300 + 6300 + 6000)/7800
= 32100/7800
= 4.12

Notice that the mode, median, and mean values are all different. In more extreme or highly
positively skewed distributions, we find that the median value tends to be greater than the
mode value and the mean value is pulled heavily toward the right end of the scale. When
analyzing data with extreme positive skew, the mean will grossly overestimate the value of
the average case.
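Calculations like these are easy to verify with a few lines of code. The following is an illustrative sketch in Python (not part of the text; the variable names are our own), reproducing the mode, median category, and mean for the extreme positive skew distribution above:

```python
# Frequency distribution from the "Extreme Positive Skew" table:
# attribute value -> number of cases (f)
freq = {1: 1200, 2: 2200, 3: 1300, 4: 1000, 7: 900, 9: 700, 12: 500}

n = sum(freq.values())                     # total cases: 7,800

# Mode: the attribute whose category holds the most cases
mode = max(freq, key=freq.get)             # category with f = 2,200

# Median: find the category containing the median position (n + 1)/2
median_position = (n + 1) / 2              # 3900.5
cumulative = 0
for value in sorted(freq):
    cumulative += freq[value]
    if cumulative >= median_position:
        median = value
        break

# Mean: sum of (value x frequency) divided by n
mean = sum(value * f for value, f in freq.items()) / n

print(mode, median, round(mean, 2))        # 2 3 4.12
```

Swapping in any other frequency table from this chapter reproduces its Mo, Md, and x̄ the same way.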

Whether the mode or median provides a more accurate measure of central tendency will
depend on the specific nature of your data distribution. In general, it is considered “best


practice” to report the median for distributions with extreme skew, since it is not affected by
extreme scores and is still a more robust measure of central tendency than the mode.

Negatively Skewed Distributions

Data distributions that are negatively skewed (or left skewed) contain outliers that pull the tail
end of the distribution toward the left (lower) end of the scale, while most cases cluster toward
the upper end of the scale.

The same principles concerning the relationship between measures of central tendency and
levels of positive skew apply to negatively skewed data distributions.


Minor to Moderate Negative Skew

Attributes      f        cf
    1           600       600
    2         1,200     1,800
    3         1,500     3,300
    4         3,000     6,300
    5         1,500     7,800
                n = 7,800

Mo = 4
Md = 4
x̄ = 3.46

Extreme Negative Skew

Attributes      f        cf
    1           500       500
    2           700     1,200
    3           900     2,100
    8         1,000     3,100
    9         1,300     4,400
   10         2,200     6,600
   11         1,200     7,800
                n = 7,800

Mo = 10
Md = 9
x̄ = 7.63



Again, the mode and median values are the same in the minor to moderate negatively skewed
distribution, and the mean somewhat underestimates the value of the average case. When
extreme negative skew is present, the median value is less than the mode value, and the mean
grossly underestimates the value of the average case. Whenever data are heavily skewed
(positively or negatively), it is considered “best practice” to report median values to describe
the typical case.






Other Types of Distributions

It is important to use common sense when determining which measure of central tendency to
report. Examine the distributions below. Sometimes, the statistic that best represents the
distribution is clear (e.g., the mode for bimodal distributions). However, some distributions
cannot be accurately described with a single univariate statistic (e.g., see the mode, median,
and mean for the alternative distribution).

Regardless of which measure is used, it is important to be able to (1) predict how the shape and
spread of a distribution influences each measure of central tendency and (2) assess the degree
to which each measure accurately describes the distribution.

Bimodal Distribution

Attributes      f        cf
    1           500       500
    2         2,000     2,500
    3         1,250     3,750
    4           300     4,050
    5         1,250     5,300
    6         2,000     7,300
    7           500     7,800
                n = 7,800

Mo = 2, 6
Md = 4
x̄ = 4

Alternative Distribution

Attributes      f        cf
    1         1,500     1,500
    2         1,700     3,200
    3         1,500     4,700
    4           900     5,600
   10         2,200     7,800
                n = 7,800

Mo = 10
Md = 3
x̄ = 4.49








Practice Applications for Learning Objective 3 (LO3):


Identify the correct statements about measures of central tendency based on the shape of the
following data distributions:

Minor Negative Skew: Q1. Which statement is true based on this data distribution?
A. The median and mean values will be identical.

B. The mean will slightly overestimate the value of the
average case.
C. The mean will slightly underestimate the value of the
average case.
D. The median and mode will have very different values.

Strong Negative Skew: Q2. Which statement is true based on this data distribution?
A. The median and mean values will be identical.

B. The mean will greatly overestimate the value of the
average case.
C. The mean will greatly underestimate the value of the
average case.
D. The median and mode will have the same numerical
values.


Minor Positive Skew
Q3. Which statement is true based on this data distribution?
A. The median and mean values will be identical.
B. The mean will slightly overestimate the value of the
average case.
C. The mean will slightly underestimate the value of the
average case.
D. The median and mode will have very different values.



Symmetrical Distribution: Q4. Which statement is true based on this data distribution?
A. The mode, median and mean will have the same values.
B. The mean will slightly overestimate the value of the
average case.
C. The mean will slightly underestimate the value of the
average case.
D. The median and mode will have very different values.

Correct Answers: Q1(C), Q2(C), Q3(B), and Q4(A)

LO4: Infer a distribution’s shape based on multiple measures of central tendency

As mentioned previously, it is sometimes difficult to accurately describe a data distribution
using only a single statistic. For this reason, researchers will sometimes present multiple
measures of central tendency. Knowing the value of the mode, median, and mean will help you
to infer the shape of a distribution.

For example, you know that a distribution with a minor to moderate skew will have a similar
mode and median, and a mean that slightly underestimates or overestimates the typical case.





[Diagrams: two minor-skew curves in which Mo and Md sit together at the peak, with x̄ displaced
slightly into the tail on either side.]
You also know that a distribution with an extreme skew will have different median and mode
values, and a mean that grossly underestimates or overestimates the typical case.

[Diagrams: two extreme-skew curves in which Mo and Md separate, with x̄ pulled far into the tail.]


If the mode, median, and mean values are exactly the same (or very close to the same), then
you are working with a symmetrical, bell-shaped distribution with scores clustering toward the
center of the distribution.




[Diagram: a symmetrical, bell-shaped curve with Mo, Md, and x̄ aligned at the center.]


If you have two mode values, and mean and median values that fall somewhere between them,
then you are working with a bimodal and symmetrical distribution.








[Diagram: a bimodal, symmetrical curve with two modes and the Md and x̄ located between them.]


Using these diagrams, you can infer the shape of a distribution if the values of all three
measures of central tendency are provided.
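These inference rules can be expressed as a small decision function. The sketch below (in Python, which the text does not use) follows the chapter's reasoning; the 8% "closeness" tolerance is our own assumption, not a rule from the text:

```python
def close(a, b, tol=0.08):
    """Treat two statistics as 'similar' if they differ by less than
    tol (an assumed 8%) of the larger absolute value."""
    return abs(a - b) <= tol * max(abs(a), abs(b), 1e-9)

def infer_shape(mode, median, mean):
    """Infer a distribution's shape from its three measures of
    central tendency, following the chapter's reasoning."""
    if close(mode, median) and close(mean, median):
        return "symmetrical, bell-shaped"
    # Similar mode and median -> minor/moderate skew;
    # different mode and median -> extreme skew.
    severity = "minor" if close(mode, median) else "extreme"
    # Mean pulled above the median -> positive (right) skew.
    direction = "positive" if mean > median else "negative"
    return f"{severity} {direction} skew"

print(infer_shape(5000, 5000, 5000))   # symmetrical, bell-shaped
print(infer_shape(18, 20, 60))         # extreme positive skew
print(infer_shape(60, 55, 20))         # extreme negative skew
```

With this tolerance, the function reproduces the answers to all five practice items that follow.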

Practice Applications for Learning Objective 4 (LO4):


Using the following numerical values of measures of central tendency, identify the statement
that best describes the shape of the distribution that underlies these descriptive statistics:

Bail Amount?
Mode = $5,000
Median = $5,000
Mean = $5,000

Q1. Based on these values, what is the shape of the distribution?
A. Extreme positive skew
B. Extreme negative skew
C. Minor positive skew
D. Minor negative skew
E. Symmetrical, bell-shaped


Prison length in months?
Mode = 18 months
Median = 20 months
Mean = 60 months

Q2. Based on these values, what is the shape of the distribution?
A. Extreme positive skew
B. Extreme negative skew
C. Minor positive skew
D. Minor negative skew
E. Symmetrical, bell-shaped

Probation Caseload?
Mode = 60 cases
Median = 55 cases
Mean = 20 cases

Q3. Based on these values, what is the shape of the distribution?
A. Extreme positive skew
B. Extreme negative skew
C. Minor positive skew
D. Minor negative skew
E. Symmetrical, bell-shaped

Trial Length in Days?
Mode = 3 days
Median = 3.2 days
Mean = 5.2 days

Q4. Based on these values, what is the shape of the distribution?
A. Extreme positive skew
B. Extreme negative skew
C. Minor positive skew
D. Minor negative skew
E. Symmetrical, bell-shaped

Annual Number of Prison Infractions per Inmate?
Mode = 3 infractions
Median = 2.8 infractions
Mean = 1.5 infractions

Q5. Based on these values, what is the shape of the distribution?
A. Extreme positive skew
B. Extreme negative skew
C. Minor positive skew
D. Minor negative skew
E. Symmetrical, bell-shaped

Correct Answers: Q1(E), Q2(A), Q3(B), Q4(C), and Q5(D)
















Review


Use these reflection questions and directed activities to master the material presented in this
chapter.

What?

Can you define the critical concepts covered in this chapter? Describe each of these concepts in
your own words and give at least two examples of each.

• Descriptive statistics
• Measures of central tendency
• Mode
• Median
• Mean




How?

Answer the following questions and complete the associated activities.

• How do you calculate each measure of central tendency?
o Describe the procedure or write out the formula associated with each.

• How do you know which measure of central tendency to use?
o Describe the factors you should consider when selecting a particular measure.

• How is each measure of central tendency affected by the shape of data distributions?
o Describe how each measure is influenced by different levels of skewness.

• How can you infer a distribution’s shape based on reported values of central tendency?
o Describe two types of distributions and explain how the values of the three
measures of central tendency might differ between them.





When?

Answer the following questions.

• When should we calculate measures of central tendency?
• When should we use one measure of central tendency over another?
• When should we expect measures of central tendency to be affected by the shape of
data distributions?
• When should we report multiple measures of central tendency?



Why?

Answer the following question.

• Why does knowing this information about measures of central tendency and the
“typical case” help you identify misstatements and misuses of statistical reports on
crime and criminal justice issues?



Analyzing Criminological Data Chapter 6

CHAPTER 6:
Assessing Differences Among Cases

This chapter focuses on statistical measures that summarize the differences among cases in a
distribution of criminological data. You will learn how to calculate, interpret, and properly apply
these summary measures of variation and dispersion. Mastering the learning objectives in this
chapter will further add to your basic training as an informed consumer of statistical data on
crime and criminal justice practices.

Concepts

- Measures of dispersion
- Variation ratio
- Range
- Interquartile range
- Variance
- Standard deviation
- Kurtosis
- Box plots

Learning Objectives (LO)

LO1: Compare relative levels of dispersion within data distributions
LO2: Calculate appropriate measures of dispersion
LO3: Select the most appropriate measure of dispersion
LO4: Assess visual depictions to compare variability within and across distributions


Introduction

Researchers are interested in two key aspects of univariate data distributions. The first is the
middle location: you learned in Chapter 5 how to select and calculate an accurate measure of
central tendency to find a distribution’s center. This chapter will teach you about the second:
the dispersion of cases in a distribution.

Dispersion is the variability, scatter, or spread of cases within a data distribution. Criminologists
attempt to solve problems by explaining variability (i.e., differences) they find in data. For
example, we might want to know:

• Why do violent crimes tend to happen here and not there?
• Why do some people become criminals, while others remain law-abiding?
• Why are some correctional programs effective, while others are ineffective?


Before we can explain variability, we must know how much exists. This requires us to measure
differences among observations.

Measures of dispersion are used to assess variability among cases. These univariate
descriptive statistics tell us how different our cases are from each other.

Measures of dispersion: Statistics that describe the degree to which cases are spread across
variable attributes

In this chapter, you will learn how to assess dispersion within a variable’s distribution. You will
be able to compare levels of dispersion by examining raw data and calculate five different
statistics that provide summary measures of the amount of variability in these data. You will
also learn how to select the best measure of dispersion and use graphical displays to compare
levels of variability within and across data distributions.

LO1: Compare relative levels of dispersion within data distributions

Dispersion is always expected within data distributions. Examine the frequency tables below.

Nominal Data

Gender     f
Female     0
Male      19
       n = 19

Ordinal Data

Age        f
<21        0
21-40      0
41-60    936
>60        0
       n = 936

Ratio Data

Victimizations     f
0              2,500
1                  0
2                  0
3                  0
4                  0
           n = 2,500


Criminologists are interested in explaining differences among cases, yet there are no
differences to explain among these data. A visual inspection of these distributions allows us to
quickly determine that these are constants rather than variables. We do not need to calculate
statistics to know that there are no differences (dispersion) among the cases for each of these
data distributions.

Remember, we never analyze constants, only variables. When analyzing variables, we want to
know the degree to which cases are dispersed or differ from one another within a data
distribution. To illustrate this dispersion or variability in cases within variables, look at the four
data distributions below.





Level of Variability/Dispersion

Low High

Distribution A Distribution B Distribution C Distribution D
497 425 350 200
498 450 400 300
499 475 450 400
500 500 500 500
501 525 550 600
502 550 600 700
503 575 650 800
x̄ = 500        x̄ = 500        x̄ = 500        x̄ = 500

Each distribution contains the same number of cases (n = 7), and each distribution has an
identical central tendency value (x̄ = 500). Yet, the differences among case values vary
tremendously across distributions. As you can see, there is very little variability/dispersion
among cases in Distribution A and a great deal of diversity among cases in Distribution D.

Measures of central tendency provide one important piece of information about data
distributions. However, these measures can only describe the typical case or midpoint of our
distributions. They do not tell us how the case values are scattered or spread between
categories or along a scale. This is why we calculate and report both a measure of central
tendency and a measure of dispersion when describing univariate data distributions.
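The visual impression given by Distributions A through D can be quantified with the sample standard deviation, a measure introduced formally later in this chapter. A quick illustrative check in Python (not part of the text):

```python
import statistics

# The four distributions above: same n and same mean (500),
# but increasing spread from A to D.
distributions = {
    "A": [497, 498, 499, 500, 501, 502, 503],
    "B": [425, 450, 475, 500, 525, 550, 575],
    "C": [350, 400, 450, 500, 550, 600, 650],
    "D": [200, 300, 400, 500, 600, 700, 800],
}

for name, values in distributions.items():
    mean = statistics.mean(values)        # 500 for every distribution
    spread = statistics.stdev(values)     # sample standard deviation
    print(f"{name}: mean = {mean}, s = {spread:.1f}")
```

The printed spreads grow steadily from A to D even though every mean is identical, which is exactly the point of the table.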

Practice Applications for Learning Objective 1 (LO1):


Identify the data distribution with the most and least variability in the following examples:

Q1. Which of the following distributions of the number of co-defendants in murder cases has the
least dispersion or variability?

A. 0, 0, 0, 1, 0, 1, 0, 1, 0, 0
B. 0, 1, 1, 2, 5, 8, 0, 1, 0, 2
C. 1, 2, 2, 3, 2, 2, 4, 1, 1, 1
D. 1, 1, 3, 2, 2, 4, 1, 1, 1, 2



Q2. Which of the following distributions of the number of co-defendants in 10 murder cases has the
most dispersion or variability?

A. 0, 0, 0, 1, 0, 1, 0, 1, 0, 0
B. 0, 1, 1, 2, 5, 8, 0, 1, 0, 2
C. 1, 2, 2, 3, 2, 2, 4, 1, 1, 1
D. 1, 1, 3, 2, 2, 4, 1, 1, 1, 2


Q3. Which of the following distributions of the number of monthly drug arrests in 10 cities has the
least dispersion or variability?

A. 30, 32, 28, 31, 29, 30, 30, 31, 29, 29
B. 33, 30, 27, 34, 31, 33, 36, 24, 30, 27
C. 25, 30, 40, 35, 30, 25, 30, 40, 35, 30
D. 25, 50, 35, 60, 55, 80, 65, 70, 20, 25


Q4. Which of the following distributions of the number of monthly drug arrests in 10 cities has the
most dispersion or variability?

A. 30, 32, 28, 31, 29, 30, 30, 31, 29, 29
B. 33, 30, 27, 34, 31, 33, 36, 24, 30, 27
C. 25, 30, 40, 35, 30, 25, 30, 40, 35, 30
D. 25, 50, 35, 60, 55, 80, 65, 70, 20, 25

Correct Answers: Q1(A), Q2(B), Q3(A), and Q4(D)

LO2: Calculate appropriate measures of dispersion

It is possible to compare levels of dispersion by examining raw scores. However, this becomes a
more difficult and less accurate method of assessing variability in larger datasets. Therefore, we
calculate measures of dispersion to provide more precise variability estimates. The five most
common measures of dispersion are the variation ratio, range, interquartile range, variance,
and standard deviation.

Variation Ratio
Dispersion within nominal data can only be measured using the variation ratio
(VR), though this statistic can also describe ordinal data. The variation ratio indicates the proportion of cases that do not fall within the modal
category. Since it is a proportion, the value of this statistic only ranges from 0 to 1. Higher
values (closer to 1) indicate greater levels of dispersion than lower values (closer to 0).



This statistic is calculated in three simple steps. We will calculate the variation ratio for the
following two “cause of death” frequency distributions.

Variation Ratio: A measure of dispersion, ranging from 0 to 1, that indicates the proportion
of cases outside of the modal category; symbolized using VR

Distribution A Distribution B

Cause of Death f Cause of Death f

Firearm 9,203 Firearm 4,203
Knife 1,836 Knife 3,836
Other weapon 1,438 Other weapon 3,438
All other causes 1,209 All other causes 2,209
n = 13,686 n = 13,686



Step 1: Identify the mode
“Firearm” is the modal category in both Data Distribution A and B (it is the category that
contains the most cases).

Step 2: Calculate the proportion of cases within the modal category
The formula for calculating a mode’s proportion is:

p(mode) = f/n,

where f = number of cases in the modal category and n = total sample size.

For the modal category in Data Distribution A, p(mode) = 9203/13686 = 0.67.

For the modal category in Data Distribution B, p(mode) = 4203/13686 = 0.31.

Step 3: Calculate the variation ratio value
The formula for calculating the variation ratio is:

VR = 1 - p(mode).

For Data Distribution A, VR = 1 - 0.67 = 0.33.

For Data Distribution B, VR = 1 - 0.31 = 0.69.


These statistics indicate that Data Distribution A has less dispersion than Data Distribution B. In
other words, cases in Distribution A are more highly clustered in the modal category than they
are in Distribution B. Conversely, cases in Distribution B are more scattered among categories
than they are in Distribution A.

Interpreting the Variation Ratio: values near 0.0 indicate less dispersion (more clustering);
values near 1.0 indicate more dispersion (less clustering).

In this case, we can conclude that there is less variability in (or difference between) cause of
death in the first distribution than the second. People in the first distribution were more likely
than people in the second distribution to die by the same cause of death.
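The three steps above reduce to a one-line formula in code. This is an illustrative Python sketch (the function name is our own):

```python
def variation_ratio(freq):
    """VR = 1 - p(mode): the proportion of cases outside the modal category."""
    n = sum(freq.values())
    modal_f = max(freq.values())   # cases in the modal category
    return 1 - modal_f / n

# "Cause of death" frequency distributions from the text
dist_a = {"Firearm": 9203, "Knife": 1836, "Other weapon": 1438, "All other causes": 1209}
dist_b = {"Firearm": 4203, "Knife": 3836, "Other weapon": 3438, "All other causes": 2209}

print(round(variation_ratio(dist_a), 2))   # 0.33 -- less dispersion
print(round(variation_ratio(dist_b), 2))   # 0.69 -- more dispersion
```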

Range
The range (R) is the most basic measure of dispersion for ordinal, interval, and ratio
(quantitative) data. The range tells us how different the largest case value is from the smallest
case value.

Range: A measure of dispersion that indicates the difference between the largest and
smallest case values; symbolized using R

The range is calculated in two simple steps. We will calculate this statistic for both crime
distributions below. Each distribution contains seven observations (n = 7).


Distribution A            Distribution B
Property Crime Rates      Violent Crime Rates
1,710.4                     623.6
2,249.8                     522.4
5,053.9                     974.6
3,894.3                   1,193.3
4,398.0                     551.7
2,840.4                     741.4
5,966.6                     519.3




Step 1: Identify minimum and maximum case values
In the tables below, the cases have been ranked from the lowest to highest value. Ordering
cases makes it easy to identify the minimum and maximum case values. For Distribution A
(property crime rate), the minimum value is 1,710.4 and the maximum is 5,966.6. For
Distribution B (violent crime rate), the minimum value is 519.3 and the maximum is 1,193.3.


Distribution A                            Distribution B
Property Crime Rates                      Violent Crime Rates
1,710.4  (minimum value)                    519.3  (minimum value)
2,249.8                                     522.4
2,840.4                                     551.7
3,894.3                                     623.6
4,398.0                                     741.4
5,053.9                                     974.6
5,966.6  (maximum value)                  1,193.3  (maximum value)




Step 2: Calculate the difference (range)
The formula for calculating the range is:

R = xmaximum - xminimum,

where xmaximum = largest case value and xminimum = smallest case value.

For Distribution A, R = 5,966.6 – 1,710.4 = 4,256.2.

For Distribution B, R = 1,193.3 – 519.3 = 674.

These statistics indicate that Data Distribution A has more dispersion than Data Distribution B.
In other words, there is more variability (spread or scatter) among Distribution A case values
than among Distribution B case values. Conversely, Distribution B case values are more tightly
clustered together than Distribution A case values.

Range for Ordinal Data: The range for ordinal data is simply the total number of categories
containing cases minus 1.

In this case, we can conclude that there is more variability in (or difference between) property
crime rates than violent crime rates. We find greater similarity between violent crime rates and
less similarity between property crime rates across places.
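The two-step range calculation is equally short in code. An illustrative Python sketch (function name is our own):

```python
def value_range(values):
    """R = maximum case value - minimum case value."""
    return max(values) - min(values)

# Crime rate distributions from the text (n = 7 each)
property_rates = [1710.4, 2249.8, 5053.9, 3894.3, 4398.0, 2840.4, 5966.6]
violent_rates = [623.6, 522.4, 974.6, 1193.3, 551.7, 741.4, 519.3]

print(round(value_range(property_rates), 1))   # 4256.2
print(round(value_range(violent_rates), 1))    # 674.0
```

Note that `max` and `min` scan the whole list, so there is no need to sort the data first.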

Interquartile Range
The interquartile range (IQR) is a modified range statistic. It divides a data distribution into four
equal parts (quartiles) and examines the spread of cases across the two middle quartiles. The
interquartile range is often referred to as the midspread or middle fifty since it represents the
range of the middle 50 percent of scores in a distribution.


The interquartile range is calculated in three simple steps. We will calculate this statistic for
both age distributions. The first distribution contains eleven observations (n = 11), while the
second contains ten observations (n = 10).

Interquartile range: A measure of dispersion that indicates the difference between the 25th
percentile (1st quartile) and 75th percentile (3rd quartile) in a data distribution; symbolized
using IQR

Distribution A            Distribution B
Age at first arrest       Age at first arrest
23, 15, 19, 19, 63,       18, 25, 19, 17, 17,
22, 21, 22, 21, 17, 20    20, 21, 22, 24, 19




Step 1: Find the median value to divide the distribution in half
First, calculate the median for the entire data distribution.

Distribution A Median Calculations Distribution B Median Calculations

Ordered scores: Ordered scores:
15, 17, 19, 19, 20, 21, 21, 22, 22, 23, 63 17, 17, 18, 19, 19, 20, 21, 22, 24, 25

MP = (n + 1)/2 = (11 + 1)/2 = 12/2 = 6. MP = (n + 1)/2 = (10 + 1)/2 = 11/2 = 5.5.

Md = 21 (6th case value). Md = 19.5 (Average of 5th and 6th case values).


Then, use the median value to divide the ordered distribution into two equal halves.

Distribution A: (15, 17, 19, 19, 20)  21  (21, 22, 22, 23, 63), split around the median value of 21

Distribution B: (17, 17, 18, 19, 19) (20, 21, 22, 24, 25), split around the median value of 19.5

Step 2: Find the median value for each half of the distribution
The median value for the first half of the distribution represents the first quartile (25th
percentile) value, while the median value for the second half of the distribution represents the
third quartile (75th percentile) value.


Distribution A First Quartile Distribution B First Quartile



(15, 17, 19, 19, 20) (17, 17, 18, 19, 19)
MP = (n + 1)/2 = (5 + 1)/2 = 6/2 = 3. MP = (n + 1)/2 = (5 + 1)/2 = 6/2 = 3.
Md = 19 (3rd case value). Md = 18 (3rd case value).

Distribution A Third Quartile Distribution B Third Quartile

(21, 22, 22, 23, 63) (20, 21, 22, 24, 25)

MP = (n + 1)/2 = (5 + 1)/2 = 6/2 = 3. MP = (n + 1)/2 = (5 + 1)/2 = 6/2 = 3.
Md = 22 (3rd case value). Md = 22 (3rd case value).




Distribution A (ordered): 15, 17, 19, 19, 20, 21, 21, 22, 22, 23, 63
First quartile = 19; second quartile (median) = 21; third quartile = 22

Distribution B (ordered): 17, 17, 18, 19, 19, 20, 21, 22, 24, 25
First quartile = 18; second quartile (median) = 19.5; third quartile = 22



Step 3: Calculate the difference between the first and third quartile
The formula for calculating the interquartile range is:

IQR = Q3 – Q1,

where Q3 = 3rd quartile value and Q1 = 1st quartile value.

For Distribution A, IQR = 22 – 19 = 3.

For Distribution B, IQR = 22 – 18 = 4.

These statistics indicate that Data Distribution A has less dispersion than Data Distribution B.
The IQR shows that there is more variability in (or difference between) people’s age of first


arrest in Distribution B than Distribution A. Notice that, because this statistic is based on only
the middle 50 percent of scores, it is not influenced by outliers (e.g., age 63 in Distribution A).
Therefore, the IQR is likely to produce values that are very different from the range if outliers
are present.
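The chapter's median-of-halves method can be coded directly. This is an illustrative Python sketch (function names are our own); be aware that statistical software often interpolates quartiles differently and may return slightly different values:

```python
def median(values):
    """Median of a list: middle value, or the average of the two middle values."""
    s = sorted(values)
    n = len(s)
    mid = n // 2
    if n % 2 == 1:
        return s[mid]
    return (s[mid - 1] + s[mid]) / 2

def iqr(values):
    """IQR = Q3 - Q1, using the chapter's method: split the ordered data
    at the median (excluding it when n is odd) and take the median of
    each half as Q1 and Q3."""
    s = sorted(values)
    half = len(s) // 2
    lower, upper = s[:half], s[-half:]
    return median(upper) - median(lower)

# Age-at-first-arrest distributions from the text
ages_a = [23, 15, 19, 19, 63, 22, 21, 22, 21, 17, 20]   # n = 11
ages_b = [18, 25, 19, 17, 17, 20, 21, 22, 24, 19]       # n = 10

print(iqr(ages_a))   # 3  (Q3 = 22, Q1 = 19)
print(iqr(ages_b))   # 4  (Q3 = 22, Q1 = 18)
```

Because only the middle 50 percent of scores matter, the age-63 outlier in Distribution A has no effect on the result.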

Variance
The variance (s²) measures dispersion in a distribution by comparing each case value to the
mean value. It tells us how far, in squared units, cases fall from the mean on average.

Variance: A measure of dispersion that indicates the average squared distance between each
case and the mean; symbolized using s²

The variance is calculated in four straightforward steps. We will calculate this statistic for the
two distributions below.

Distribution A: x = {3, 5, 2, 8, 2}

Distribution B: x = {0, 2, 3, 11, 4}


Step 1: Calculate the mean
As you know, the mean value is the sum of all scores in the distribution, divided by the total
sample size.

For Distribution A, x̄ = (3 + 5 + 2 + 8 + 2)/5 = 20/5 = 4.

For Distribution B, x̄ = (0 + 2 + 3 + 11 + 4)/5 = 20/5 = 4.

Notice that although the individual scores within the distributions differ, the mean for both
distributions is identical (x̄ = 4).

Step 2: Calculate deviation scores for each case
The deviation score (di) is the difference between an individual case value and the distribution’s
mean value. The formula for the deviation score is:

di = xi – x̄,

where xi = individual case score and x̄ = distribution mean.


The mean is subtracted from every case in the distribution.


Distribution A            Distribution B
x     xi – x̄              x     xi – x̄
3     3 – 4 = -1          0     0 – 4 = -4
5     5 – 4 = 1           2     2 – 4 = -2
2     2 – 4 = -2          3     3 – 4 = -1
8     8 – 4 = 4           11    11 – 4 = 7
2     2 – 4 = -2          4     4 – 4 = 0


Step 3: Square each deviation score and sum the totals
All scores in a distribution are balanced around the mean, some above it and some below. For
this reason, the total deviation scores will sum to zero if they are added together. In order to
use the sum of these scores as a measure of dispersion, each deviation score is squared. This
removes all negative values (since squared negative values become positive values). All squared
values are then added together.

Distribution A                             Distribution B
x     xi – x̄       (xi – x̄)²               x     xi – x̄       (xi – x̄)²
3     3 – 4 = -1   (-1)² = 1               0     0 – 4 = -4   (-4)² = 16
5     5 – 4 = 1    (1)² = 1                2     2 – 4 = -2   (-2)² = 4
2     2 – 4 = -2   (-2)² = 4               3     3 – 4 = -1   (-1)² = 1
8     8 – 4 = 4    (4)² = 16               11    11 – 4 = 7   (7)² = 49
2     2 – 4 = -2   (-2)² = 4               4     4 – 4 = 0    (0)² = 0
                   Σ = 26                                     Σ = 70



Step 4: Calculate the sample variance value
The formula for calculating the variance is:

s² = Σ(xi – x̄)² / (n – 1),

where Σ(xi – x̄)² = sum of squared deviations and n = total sample size.

For Distribution A, s² = 26/(5 – 1) = 26/4 = 6.50.

For Distribution B, s² = 70/(5 – 1) = 70/4 = 17.50.


These statistics indicate that Data Distribution A has less dispersion than Data Distribution B.

Population Variance

The formula is slightly different for calculating the variance for a population instead of a sample:

σ² = Σ(xi – µ)² / N

Notice three differences:

1. Lowercase sigma (σ) squared is used to symbolize the population variance.
2. The symbol µ is used to signify use of the population mean (rather than sample mean).
3. We do not subtract 1 from the denominator. We only do this when analyzing samples.
Samples are smaller, which artificially reduces variability among cases. Subtracting 1 from the
sample size gives the numerator more weight and helps to correct this problem. This
adjustment is not necessary when analyzing population data.
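The N versus n – 1 distinction is built into Python's standard library, which can serve as a quick illustrative check (treating the five Distribution A scores first as a sample, then as a complete population):

```python
import statistics

scores = [3, 5, 2, 8, 2]   # Distribution A from the text

# Sample variance divides the sum of squared deviations (26) by n - 1;
# population variance divides the same sum by N.
print(statistics.variance(scores))    # 6.5  (26 / 4)
print(statistics.pvariance(scores))   # 5.2  (26 / 5)
```

As expected, dividing by the full N rather than n – 1 yields a slightly smaller value.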


Standard Deviation
The variance is a robust measure of dispersion since it includes every case value in its
calculation. However, squaring each case deviation score makes the variance statistic difficult
to interpret. Taking the square root of the variance value easily solves this problem. The
square root of the variance is the standard deviation (s).

Standard deviation: A measure of dispersion that indicates the typical deviation of cases
from the mean; symbolized using s

The formula for calculating the standard deviation is:

s = √( Σ(xi – x̄)² / (n – 1) )   or   s = √s².
Using the same distributions used to calculate the variance:

For Distribution A, s = √6.5 = 2.55.

For Distribution B, s = √17.5 = 4.18.

These statistics indicate that Distribution A has less dispersion than Distribution B. In
Distribution A, the typical case deviates from the mean (x̄ = 4) by a score of 2.55. In Distribution
B, the typical case deviates from the mean (x̄ = 4) by a score of 4.18.


Although both distributions contain the same number of cases (n = 5) and have the same mean,
the standard deviation values tell us that cases cluster more closely around the mean in
Distribution A than Distribution B. Always remember that higher standard deviation values
indicate more dispersion (variability, scatter, or spread) among case values in a distribution.
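The four variance steps and the final square root can be mirrored directly in code. An illustrative Python sketch (function names are our own):

```python
def sample_variance(values):
    """Follow the chapter's four steps: mean, deviation scores,
    squared deviations, then divide their sum by (n - 1)."""
    mean = sum(values) / len(values)              # Step 1
    deviations = [x - mean for x in values]       # Step 2
    sum_sq = sum(d ** 2 for d in deviations)      # Step 3
    return sum_sq / (len(values) - 1)             # Step 4

def sample_stdev(values):
    """Standard deviation: the square root of the variance."""
    return sample_variance(values) ** 0.5

dist_a = [3, 5, 2, 8, 2]
dist_b = [0, 2, 3, 11, 4]

print(sample_variance(dist_a), round(sample_stdev(dist_a), 2))   # 6.5 2.55
print(sample_variance(dist_b), round(sample_stdev(dist_b), 2))   # 17.5 4.18
```

Both results match the hand calculations above, and the larger values for Distribution B again signal its greater dispersion.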

Population Standard Deviation

The formula is slightly different for calculating a population, rather than sample, standard
deviation:

σ = √( Σ(xi – µ)² / N )   or   σ = √σ²

Notice three differences:

1. Lowercase sigma (σ) is used to symbolize the population standard deviation.
2. The symbol µ is used to signify use of the population mean (rather than sample mean).
3. We do not subtract 1 from the denominator since the entire population size is used (N).

Calculating measures of dispersion

The table below summarizes the steps involved in calculating each measure of dispersion.

Calculating Measures of Dispersion for Samples

Variation Ratio (VR)
  Steps: 1. Identify the mode
         2. Calculate the proportion of cases in the modal category
         3. Calculate the variation ratio value
  Equation: VR = 1 – p(mode)

Range (R)
  Steps: 1. Identify minimum and maximum case values
         2. Calculate the difference (range)
  Equation: R = xmaximum – xminimum

Interquartile Range (IQR)
  Steps: 1. Find the median value to divide the distribution in half
         2. Find the median value for each half of the distribution
         3. Calculate the difference between the first and third quartile
  Equation: IQR = Q3 – Q1

Variance (s²)
  Steps: 1. Calculate the mean
         2. Calculate deviation scores for each case
         3. Square each deviation score and sum the totals
         4. Calculate the sample variance value
  Equation: s² = Σ(xi – x̄)² / (n – 1)

Standard Deviation (s)
  Steps: 1. Take the square root of the variance
  Equation: s = √s²

Practice Applications for Learning Objective 2 (LO2):


Calculate the requested measure of dispersion from the following data distributions:

Q1. What is the value of the variation ratio for this frequency distribution of the type of
criminal sentence?

Type of Sentence     f
Fine               950
Probation          700
Jail               650
Prison             400
              n = 2,700

A. 0.65
B. 0.78
C. 0.71
D. 0.60


Q2. What is the value of the range for the following number of pretrial motions in 10 trials:
{3, 1, 1, 1, 5, 2, 15, 2, 4, 5}?
A. 2
B. 3
C. 14
D. 15

Q3. What is the value of the interquartile range for the following ages of 8 convicted embezzlers:
{22, 26, 35, 44, 51, 63, 67, 68}?
A. 34.5
B. 35
C. 36
D. 37


Q4. What is the value of the sample variance for the following number of traffic tickets among 5
private security officers: {3, 1, 2, 4, 5}?
A. 1.0
B. 2.0
C. 2.5
D. 3.0


Q5. What is the value of the sample standard deviation for the following number of traffic tickets
among 5 private security officers: {3, 1, 2, 4, 5}?
A. 1.41
B. 1.58
C. 2.00
D. 2.50

Correct Answers: Q1(A), Q2(C), Q3(A), Q4(C), and Q5(B)


LO3: Select the most appropriate measure of dispersion

You now know how to calculate five measures of dispersion. However, you must know which
measure – the variation ratio, range, interquartile range, variance, or standard deviation – will
provide the most accurate summary description of dispersion for the variables in a
criminological study. To select the appropriate measure, you must know the strengths and
limitations of each statistic.

The Variation Ratio

1. Strengths
The variation ratio can be used to describe any type of variable. It is the only measure of
dispersion that can be used to describe nominal data.

2. Limitations
The variation ratio simply reports the proportion of cases that do not fall within the modal
category. Therefore, it can only fully describe the spread of cases when analyzing variables with
only two categories. The variation ratio cannot tell us precisely how cases are distributed
among two or more attributes outside the modal category.

The Range

1. Strengths
The range is very easy to calculate. The smallest score is simply subtracted from the highest
score in the distribution.

The range is easy to interpret. It is the difference between the highest and lowest score.

This statistic can be calculated for all quantitative variables, including ordinal, interval, and ratio
data. It is the most robust measure of dispersion for ordinal variables.

2. Limitations
The range is based on only two scores in the distribution. Therefore, it is less useful for
describing larger datasets with many cases spread across a diverse scale.

Since it is based on the highest and lowest scores, the range will include any extreme case
values. Outliers will bias the range as an accurate measure of dispersion.

While it recognizes the upper and lower end of the distribution, it does not tell us how the rest
of the scores are spread across variable categories or scores. As such, it does not describe the
shape of the distribution (i.e., where or how cases are clustered along the scale).




The Interquartile Range

1. Strengths
Unlike the range, the interquartile range is not biased by outliers. This is because it is based on
only the middle 50 percent of scores in a distribution. Outliers that might fall within the first or
fourth quartile are automatically excluded from the calculation.

2. Limitations
It is based on only two scores in the distribution and is less useful for describing larger datasets
with many cases spread across a diverse scale.

While the interquartile range recognizes the range of the middle 50 percent of cases, it does
not acknowledge the full range of case values. It also does not tell us how the scores are spread
across variable categories or scores. As such, it does not describe the shape of the distribution
(i.e., where or how cases are clustered along the scale).

The Variance

1. Strengths
The variance is calculated by first subtracting the mean from each score in a distribution. So,
every case is used in its computation. Use of all scores, rather than only the modal category or
highest and lowest scores, makes the variance a more robust and informative measure of
dispersion than the variation ratio or range (including the interquartile range).

2. Limitations
Since deviation scores are squared when calculating the variance, this statistic is somewhat
difficult to interpret.

Each case score influences the value of the variance. Therefore, it is biased by outliers.

Since raw numerical information for each case is required to calculate the variance, it can only
be computed for variables measured on an interval or ratio scale. The variance cannot be
calculated for a nominal or ordinal variable.

The Standard Deviation

1. Strengths
The standard deviation is the square root of the variance. So, like the variance, every case is used in
its computation, making it a more robust and informative measure of dispersion than the
variation ratio or range (including the interquartile range).


The standard deviation is easier to interpret than the variance. This is because it directly
represents how much the typical case deviates from the mean (rather than the squared
difference).

This statistic is the most popular and widely recognized measure of dispersion.

2. Limitations
Each case score influences the value of the standard deviation. Therefore, it can be biased by
outliers.

Since raw numerical information for each case is required to calculate the standard deviation, it
can only be computed for variables measured on an interval or ratio scale. The standard
deviation cannot be calculated for a nominal or ordinal variable.

Variation Ratio (VR)
Strengths:
• Only measure of dispersion appropriate for nominal data
Limitations:
• Does not provide a complete description of spread when dealing with more than two categories

Range (R)
Strengths:
• No sophisticated calculations required
• Easy to interpret
• Appropriate for all quantitative data
Limitations:
• Based on only the highest and lowest scores
• May be biased by outliers
• Does not describe the shape of the distribution

Interquartile Range (IQR)
Strengths:
• Is not influenced by outliers
Limitations:
• Based on only two scores
• Does not recognize full range of cases
• Does not describe the shape of the distribution

Variance (s²)
Strengths:
• More informative than the variation ratio or range (including IQR)
Limitations:
• Difficult to interpret
• Can be heavily biased by extreme outliers
• Only appropriate for interval and ratio data

Standard Deviation (s)
Strengths:
• Based on all scores in a distribution
• Easier to interpret than the variance
• The most popular measure of dispersion
Limitations:
• Can be heavily biased by extreme outliers
• Only appropriate for interval and ratio data

General rules for selecting measures of dispersion



Much like selecting a measure of central tendency, when deciding which measure of dispersion
to use, consider (1) the variable’s level of measurement and (2) the nature of the data
distribution. Here are some general rules for selecting the most appropriate measure of
dispersion to describe your data.

• Use the variation ratio to describe nominal (qualitative) variables.

• Use the variation ratio and range to describe ordinal variables. Use of the interquartile
range is not necessary, since ordinal categories can be collapsed to absorb extreme
cases (i.e., outliers).

• Use the range, standard deviation, and variance to describe interval and ratio level
variables that have a relatively non-skewed or symmetrical distribution.

• Use the range, interquartile range, standard deviation, and variance to describe
quantitative variables with minor levels of skewness.

Practice Applications for Learning Objective 3 (LO3):


Identify the appropriate measure(s) of dispersion for the following frequency distributions:

Q1. Which measure(s) of dispersion is appropriate for this frequency distribution of trial verdicts?

Trial Verdict     f
Guilty            700
Not Guilty        300
                  n = 1,000

A. Variation ratio
B. Variation ratio and range
C. Interquartile range, variance, and standard deviation


Q2. Which measure(s) of dispersion is appropriate for this frequency distribution of the ages of arrested shoplifters?

Age of Arrested Shoplifters     f
< 18 years old                  28,049
18-24                           15,542
25-29                           9,438
30-49                           4,697
50 or older                     1,158
                                n = 58,884

A. Variation ratio
B. Variation ratio and range
C. Range, variance and standard deviation



Q3. Which measure(s) of dispersion is appropriate for this frequency distribution of the number of search warrants?

Number of Search Warrants     f
0                             64
1                             76
2                             102
3                             99
4                             79
5                             59
                              n = 479

A. Variation ratio
B. Variation ratio and range
C. Range, variance and standard deviation


Q4. Which measure of dispersion is the most appropriate for this frequency distribution of the years of service as a police detective?

Years of Service as Police Detective     f
0                                        20
5                                        200
6                                        250
7                                        500
8                                        400
9                                        450
25                                       10
                                         n = 1,830

A. Variation ratio
B. Range
C. Interquartile range

Correct Answers: Q1(A), Q2(B), Q3(C), and Q4(C)



LO4: Assess visual depictions to compare variability within and across


distributions

Measures of dispersion describe variability among cases. This variability, in turn, determines the
shape of a distribution.

Kurtosis

Data distributions may be positively or negatively skewed, and you learned how to assess levels
of skewness by examining graphical depictions and different measures of central tendency
values. Distributions also vary in terms of kurtosis, that is, a distribution's degree of
peakedness. Here you will learn about the various forms of kurtosis in the distribution of
quantitative variables and how levels of dispersion influence each one.

Kurtosis: A distribution's degree of peakedness




All symmetrical, bell-shaped distributions have some level of kurtosis. Kurtosis levels are
determined by the degree to which cases cluster around the mean. Distributions with less
dispersion have more kurtosis, while distributions with more dispersion have less kurtosis.

There are three general levels of kurtosis. Mesokurtic distributions are symmetrical
distributions that have a moderate level of peakedness. These types of distributions are called
normal distributions. You will learn more about characteristics of normal distributions in the
next chapter. However, for now, it is important to be able to recognize a normally distributed
(non-skewed, mesokurtic) distribution.

Leptokurtic distributions are symmetrical distributions with high levels of peakedness. The
peakedness forms as a result of cases clustering tightly around the mean (i.e., there is greater
similarity between case values). These distributions have sharper peaks than normal
distributions and are tall and thin. Measures of dispersion produce smaller values when
analyzing leptokurtic distributions.

Platykurtic distributions are symmetrical distributions with low levels of peakedness. Cases in
platykurtic distributions are only loosely clustered around the mean, with more cases scattered
farther away along the scale (i.e., there is more variability between case values). These
distributions have duller peaks than normal distributions and are wide and flat. Measures of
dispersion produce larger values when analyzing platykurtic distributions.


Degrees of Kurtosis

• Mesokurtic: moderately kurtic; the shape of a normal distribution
• Leptokurtic: highly kurtic; tall and thin
• Platykurtic: slightly kurtic; flat and wide


Statistics can quantify the degree of kurtosis (e.g., the moment coefficient of kurtosis).
However, it is just as useful to be able to visually recognize the three general levels of kurtosis

based on the shape of distributions or by comparing measures of dispersion values. Remember
that smaller dispersion values indicate higher kurtosis levels (i.e., leptokurtic distributions) and
larger dispersion values indicate lower kurtosis levels (i.e., platykurtic distributions).
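This rule of thumb can be checked with two invented distributions that share the same mean but differ in spread. The data and labels below are our illustration, not formal kurtosis statistics:

```python
import statistics

# Tightly clustered around the mean of 10 -> leptokurtic shape,
# small dispersion values
peaked = [9, 10, 10, 10, 10, 10, 11]

# Loosely scattered around the same mean of 10 -> platykurtic shape,
# large dispersion values
flat = [2, 5, 8, 10, 12, 15, 18]

print(statistics.pstdev(peaked))  # about 0.53
print(statistics.pstdev(flat))    # about 5.15
```

The same mean, but a standard deviation nearly ten times larger: the second distribution would be drawn wide and flat, the first tall and thin.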

The Box Plot

Box plots provide a visual representation of case dispersion for quantitative variables. These
graphical displays show three summary descriptive statistics of central tendency and
dispersion: the median, interquartile range, and range (the largest and smallest values in a
distribution). These graphs are sometimes called "box-and-whiskers diagrams" since they look
like boxes with extended whiskers.

Box plot: A visual display of data dispersion that shows the median value, range, and
interquartile range of the distribution of scores

Use the following information to interpret a box plot:

How to Read A Box Plot

• The “box” in the plot displays the middle 50% of cases (the interquartile range).
• The line inside the box represents the median (the 50th percentile).
• The lower boundary of the box indicates the 25th percentile (the first quartile or the
“lower hinge”).
• The upper boundary of the box indicates the 75th percentile (the third quartile or the
“upper hinge”).
• The vertical lines – “Ts” – above and below the box are the “whiskers” that extend
outward toward the largest and smallest values in the distribution.



Examine the two box plots below. They represent the distribution of probation risk scores. The
graph on the left shows the distribution for females, while the graph on the right shows the
distribution for males.







You can quickly draw four conclusions by looking at these box plots.
1. The average (median) female risk score is higher than the average male risk score (about
60 for females and 50 for males).
2. Both the interquartile range and range of risk scores are smaller for females than males.
This is shown by the narrower width of the “box” for females’ risk scores and the
smaller combined width of their “box and whiskers”. These visual patterns indicate that
there is less variability among female scores than male scores.
3. Both distributions have a negative skew, with a few cases pulling their distributions
more toward the lower end of the scale. This pattern is indicated by the longer
“whiskers” at the lower end than the upper end of the scale for both genders.
4. The female distribution has more kurtosis (cases are more tightly clustered around the
median) than the male distribution (cases are more spread out across the range of
scores).

As you can see, box plots are useful for comparing central tendency, skewness, dispersion, and
kurtosis.

Box plots will also help you to identify outliers and extreme outliers. Here’s how these extreme
scores are indicated in box plots:

• Outliers are cases with values that fall between 1.5 and 3 box-lengths away from the
upper or lower edge of the box. They are displayed in box plots using the symbol “o”.
• Extreme outliers are cases with values that fall more than 3 box-lengths away from the
upper or lower edge of the box. They are displayed in box plots using the symbol “*”.
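The two rules can be written as a small check, where one "box-length" equals the interquartile range. The function, the quartile method, and the salary figures below are illustrative assumptions, not from the text:

```python
import statistics

def classify(score, scores):
    """Label a score with the box-plot outlier symbols: 'o' for values 1.5 to 3
    box-lengths (IQRs) beyond the box edges, '*' for values more than 3
    box-lengths beyond, and '' for values inside the whisker region."""
    s = sorted(scores)
    half = len(s) // 2
    q1 = statistics.median(s[:half])     # lower hinge (25th percentile)
    q3 = statistics.median(s[-half:])    # upper hinge (75th percentile)
    iqr = q3 - q1                        # one "box-length"
    distance = max(q1 - score, score - q3, 0)
    if distance > 3 * iqr:
        return "*"                       # extreme outlier
    if distance > 1.5 * iqr:
        return "o"                       # outlier
    return ""

# Hypothetical annual salaries (in $1,000s) for eight private attorneys
salaries = [60, 70, 75, 80, 85, 90, 95, 300]
print(classify(300, salaries))   # '*' -- an extreme outlier
print(classify(95, salaries))    # ''  -- inside the whiskers
```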

Examine the following box plots depicting the average salaries for private and public attorneys.
You should notice that there are no extreme salaries (i.e., outliers) among the public defenders.
However, there are several private attorneys with annual salaries that are outliers (symbolized
as “o”). Some of these outliers are substantially below their median salary of $80,000 and one
outlier reflects a private attorney’s salary that is far above the typical salary for this type of


attorney. Comparing the two boxplots reveals that the median income is higher for private
attorneys and they also have more variability in their annual salaries than is true for public
defenders.

Practice Applications for Learning Objective 4 (LO4):


Answer the following questions about identifying different types of kurtosis and patterns of
dispersion and central tendency in box plots:

Q1. Which of the following standard deviations would represent the highest level of kurtosis in a data
distribution?
A. 5.2
B. 7.3
C. 10.6
D. 14.8


Q2. Which of the following standard deviations would represent the most leptokurtic distribution?
A. 2.7
B. 5.9
C. 8.2
D. 17.8


Q3. Which of the following standard deviations would represent the most platykurtic distribution?
A. 2.7
B. 5.9
C. 8.2
D. 17.8




Q4. Based on these boxplots, what type of court has
the longest average court processing time?

A. Traffic court
B. District court
C. Appellate court





Q5. Based on these boxplots, what type of court has
the largest interquartile range in processing
time?

A. Traffic court
B. District court
C. Appellate court





Q6. Based on these boxplots, what type of court has
the most extreme outliers in court processing
time?

A. Traffic court
B. District court
C. Appellate court



Correct Answers: Q1(A), Q2(A), Q3(D), Q4(C), Q5(B), and Q6(C)


Review


Use these reflection questions and directed activities to help understand and apply the material
presented in this chapter.

What?

Can you define the critical concepts covered in this chapter? Describe each of these concepts in
your own words and give at least two examples of each.

• Measures of dispersion
• Variation ratio
• Range
• Interquartile range
• Variance
• Standard deviation
• Kurtosis
• Box plots



How?

Answer the following questions and complete the associated activities.

• How can you quickly compare levels of dispersion across datasets?
o Describe the characteristics you should look for.

• How do you calculate each measure of dispersion?
o Describe the procedure or write out the formula associated with each.

• How do you know which measure of dispersion to use?
o Describe the factors you should consider when selecting a particular measure.

• How do the shapes of distributions differ based on varying levels of dispersion?
o Describe the three levels of kurtosis and how they are affected by differences in
dispersion.



When?

Answer the following questions.

• When should we compare levels of dispersion?
• When should we calculate measures of central tendency?
• When should we use one measure of dispersion over another?
• When should we use box plots?



Why?

Answer the following question.

• Why does knowing this information about measures of dispersion and variability among
cases help you increase your skills in recognizing the proper application and misuses of
statistics in criminological research?



Analyzing Criminological Data Chapter 7

CHAPTER 7:
Using Sampling Distributions to Make Statistical Inferences

This chapter focuses on how criminologists are able to make statistical claims about population
characteristics from sample data. You will learn about the concept of a sampling distribution
and its critical importance for making sound statistical inferences. Understanding the concepts
and learning objectives in this chapter are necessary skills for conducting and evaluating
criminological research. Subsequent chapters on estimating population values and hypothesis
testing will build directly upon the general coverage of the concepts in this chapter.

Concepts

- Statistical inference
- Sampling distributions
- Binomial distribution
- Standard normal (z) distribution
- Student's t-distribution
- Chi-square distribution
- F-distribution
- Standardized scores
- Z-scores
- Standard error (SE)
- Degrees of freedom (df)

Learning Objectives (LO)

LO1: Identify "best practices" for making statistical inference
LO2: Develop areas of rare and common outcomes for different types of sampling distributions
LO3: Create standardized scores and derive probabilities associated with sampling distributions
LO4: Demonstrate the impact of the standard error and degrees of freedom on the shape of sampling distributions


Introduction

As shown in previous chapters, criminologists "crunch" numbers (like means and standard
deviations) to describe and summarize data from samples. These descriptive statistics provide
the foundation for answering criminal justice research questions.

In this chapter, you will expand your basic knowledge of sampling methods and descriptive
statistics by learning about statistical inference and sampling distributions. These abstract
concepts are easy to define, and you will quickly see how important they are when answering
research questions with statistics. Mastering the learning objectives in this chapter will allow
you to correctly apply these concepts in all of the future chapters of this book and in evaluating
statistical claims about criminal justice practices.



This chapter will help you to define and apply these concepts. We begin by learning about
statistical inference.

LO1: Explain the general principles of statistical inference

Because population parameters are typically unknown, we draw samples and use statistics to
estimate these values. This process is called statistical inference. We make statistical inferences
whenever we use descriptive sample statistics to draw conclusions about populations. These
inferences allow us to test hypotheses with sample data.

Statistical inference: Using sample data to make claims about populations from which the
sample was drawn and test hypotheses

For example, let's say that you wanted to know the average age of police officers in Chicago.
You could draw a random sample of 200 officers from Chicago and calculate the mean age of
this sample. If the sample statistic were 35 (years of age), you would use statistical inference to
assume that the population parameter is also approximately 35.

However, statistical inferences are subject to the GIGO problem. If probability sampling is not
used, or if there are errors in your sampling method, your sample statistics may not accurately
represent true population values. For example, talking to a convenience sample of cops who
pull you over for speeding in Chicago will NOT produce a valid estimate of the average officer’s
age.

To determine whether claims made about populations are accurate, you must understand the
"best practices" for making statistical inferences. Here are five ways to improve the accuracy of
inferences made from sample data and avoid the GIGO problem.

1. Use valid and reliable measures of concepts.

2. Use current and complete lists of populations when selecting samples. This will reduce
sampling bias (i.e., bias resulting from exclusions of particular population groups). For
example, a sample of 2012 homeowners used to represent all residents is biased
because it excludes homeowners who more recently purchased homes, renters,
temporary residents, and the homeless.

3. Use probability samples (rather than non-probability samples) to minimize sampling
bias. You will learn that this will also allow you to estimate the degree of sampling error
(i.e., discrepancies between sample statistics and their corresponding population values
due to chance alone).

4. Use large samples (rather than small samples) to reduce sampling error.



5. Include as many selected sample elements in the sample as possible (i.e., strive to
obtain a 100% coverage or response rate). This will reduce sampling bias that may result
from low response rates.

In summary, the "gold standard" for making statistical inferences from sample data is
to draw:

“A large, random (probability-based) sample from a complete population
listing, and obtain a 100% response rate while using valid and reliable
measures of the concepts contained within the research questions.”

Your statistical inferences will be accurate and meaningful if your research design
meets these standards.

Practice Applications for Learning Objective 1 (LO1):


Identify the “best practices” in sampling design and measurement that would provide the most
accurate statistical inferences for each of the following research questions:

Q1. Which of the following samples would provide the most accurate statistical inferences about
"drug addiction in Nevada"?

A. Taking a random sample of 100 coffee drinkers in Nevada
B. Taking a random sample of 100 heroin addicts in Nevada
C. Taking a random sample of 100 soda drinkers in Nevada
D. Taking a random sample of 100 milk drinkers in Nevada


Q2. Which of the following samples would provide the most accurate statistical inferences about
"handgun ownership in U.S. households"?

A. Asking a sample of 500 U.S. residents about their handgun ownership
B. Asking a random sample of 500 U.S. residents about their handgun ownership
C. Asking a random sample of 500 hunters about their handgun ownership
D. Asking a random sample of 500 National Rifle Association (NRA) members about their
gun ownership




Q3. Which of the following samples would provide the most accurate statistical inferences about
"crime prevention in Florida"?

A. Asking a random sample of 100 Florida residents about their ownership of burglary alarms
B. Asking a random sample of 150 Florida residents about their ownership of burglary alarms
C. Asking a random sample of 200 Florida residents about their ownership of burglary alarms
D. Asking a large sample of 500 Florida residents about their ownership of burglary alarms



Q4. Which of the following samples would provide the most accurate statistical inferences about
"recidivism among parolees from U.S. prisons"?

A. The number of rearrests for new felony offenses among a random sample of 400 parolees
from U.S. prisons.
B. The number of rearrests for new felony offenses among a large sample of 500 parolees
from U.S. prisons.
C. The number of rearrests for technical violations among a random sample of 4,000
parolees from prisons in California.

Correct Answers: Q1(B), Q2(B), Q3(C), and Q4(A)

LO2: Develop areas of rare and common outcomes for different types of
sampling distributions
It is important to know that even the best research designs may be seriously affected by
sampling error. In fact, probability samples may produce population estimates that are flat-out
wrong on the basis of chance alone. If this happens, our statistical inferences will be inaccurate
even if we use “best practices” to avoid GIGO.

Fortunately, statistical theory offers a way to estimate the likelihood that this biased outcome
will occur. It also helps us to evaluate the impact of this potential bias on our statistical
inferences. This theory is based on sampling distributions and the way in which they help us to
identify rare (unexpected) and common (expected) sample outcomes. A sampling distribution is
a hypothetical distribution of all possible outcomes of a sample statistic (e.g., the mean of a
variable).

Sampling distribution: A hypothetical distribution that represents all possible outcomes of a
sample statistic based on an infinite number of samples
To illustrate the idea of a sampling distribution, imagine that you have a population of 100
juvenile mental health therapists and you want to know their average caseload per day. You
randomly select 15 therapists from this population (i.e., select a probability sample) and


calculate the mean caseload (x̄ = 8). This value represents one possible sample outcome from
your population. If you randomly selected another set of 15 people, your sample statistic might
be slightly different, simply because different people are included in your sample (x̄ = 12).

Now, imagine taking different random samples from this population, calculating the mean and
plotting your statistic along a scale (see the top graph with n = 20 cases for 10 samples). If you
were to do this an infinite number of times, this would give you a sampling distribution. Since
we can never actually do this, these sampling distributions are hypothetical distributions.
Notice that taking more and more samples from a population begins to produce a distribution
that looks like a symmetrical, bell-shaped curve (see the bottom graph with n=20 for 4,000
samples).





[Figure: Distribution of sample means for 10 samples of n = 20 cases each.]

[Figure: Distribution of sample means for 4,000 samples of n = 20 cases each; the shape approximates a symmetrical, bell-shaped curve.]
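The repeated-sampling thought experiment can be approximated with a short simulation. Everything here (the invented population of caseload counts, the variable names) is an illustrative sketch:

```python
import random
import statistics

random.seed(1)  # fixed seed so the sketch is reproducible

# An invented population of 100 daily caseload counts
population = [random.randint(1, 18) for _ in range(100)]

# Draw 4,000 random samples of n = 20 cases and record each sample mean
sample_means = [statistics.mean(random.sample(population, 20))
                for _ in range(4000)]

# The pile of sample means centers on the true population mean, and it is
# far less spread out than the raw population scores
print(statistics.mean(population), statistics.mean(sample_means))
print(statistics.pstdev(population), statistics.pstdev(sample_means))
```

Plotting a histogram of `sample_means` fills in the bell shape shown in the figure: the more samples you draw, the smoother the curve becomes.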




The shapes of sampling distributions tell us the relative probability (likelihood) of obtaining
each statistical outcome. Values that fall towards the tails of sampling distributions are rare
(unexpected), while values that fall in the larger middle areas of the distribution are more


common (expected). Both rare and common outcomes are easily recognized by visually
inspecting graphic displays of sampling distributions.








[Figure: A bell-shaped sampling distribution with the large middle area shaded as common outcomes and the two tails shaded as rare outcomes.]


You will see that we compare our sample statistics to the distribution of scores within sampling
distributions to make statistical inferences about population values. To learn how to make
proper statistical inferences using sampling distributions, you must first learn about different
types of sampling distributions.

Types of Sampling Distributions

All sampling distributions provide a basis for making statistical inferences by identifying rare
and common outcomes. However, several different types of sampling distributions are used in
criminological research. The basic properties of five commonly used sampling distributions are
summarized below.

1. Binomial Distribution

The binomial distribution is a probability sampling distribution used when dealing with a single
dichotomous variable. A dichotomous variable is a variable with just two possible outcomes
(e.g., no/yes, male/female, convicted/not convicted).

Binomial distribution: A sampling distribution that represents the probability of obtaining
particular outcomes for a dichotomous variable within a given number of trials or cases
The particular shape of this distribution (e.g., its skew, peak, and spread) depends on two
factors:

1. the assumed “success” probability for a given outcome (i.e., the likelihood that one
attribute will appear rather than the other), and

2. the number of sample cases.


Each time a case is selected for a sample, it is called a trial. Within binomial distributions, we
are interested in predicting the probability of observing one attribute over the other. When the
attribute of interest is observed, the trial is called a “success.” When the other attribute is
observed, it is called a “failure.”

For example, let’s say that you want to know if drug treatment programs are effective. You can
create a binomial distribution by selecting a random sample of drug treatment cases and
finding the number of “successful” cases within your sample. The graph below shows a
binomial distribution for samples that include 10 drug cases (n = 10). The x-axis (horizontal axis)
shows the number of successes that occur within each sample (e.g., 0 out of 10, 1 out of 10).
The y-axis (vertical axis) shows the probability of obtaining a sample with that outcome.


[Figure: Binomial distribution for prob(success) = .8 and N = 10. The x-axis shows the number of drug treatment successes among 10 cases; the y-axis shows the probability of each outcome.]


Examine the largest bar. You can see that the probability of having 8 successful cases and 2
failures is .30. This means that there is a 30% likelihood that any random sample you select
from the population of all drug treatment cases will include 8 successes and 2 failures. You can
also see that this outcome (i.e., 80% success, 20% failure) is the most common (i.e., has the
highest probability of occurring), while other outcomes are less common. For example, the
probability of selecting a sample that includes 6 successful drug treatment cases (e.g., 60%
success, 40% failure) is only about 9% (probability = .09).

Based on this binomial distribution, we can easily recognize rare and common sample
outcomes, i.e., the numbers of successes that are more or less likely to be found among 10 drug
cases. Notice how the following conclusions can be derived from this sampling distribution:

• Fewer than 5 successes in 10 cases is a rare outcome. Such an outcome occurs only about
1% of the time. Adding together the probabilities associated with outcomes that contain
0, 1, 2, 3, or 4 successes produces a value less than .01 (1%).

• Samples are most likely to contain between 7 and 9 successes in samples of 10 cases.
The graph clearly shows that these are the most common outcomes. Samples will have
these outcomes about 77% of the time (probability of [7, 8, or 9] = .20 + .30 + .27 = .77).



• As stated previously, 8 successes in 10 cases is the single most common outcome. It
occurs about 30% of the time (probability of 8 successes = .30).
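Each of these conclusions can be verified directly from the binomial formula. The following minimal Python sketch (not part of the chapter's required tools) computes the probability of r successes in n trials and reproduces the three results above:

```python
from math import comb

def binom_pmf(r, n, p):
    """Probability of exactly r successes in n independent trials."""
    return comb(n, r) * p**r * (1 - p) ** (n - r)

n, p = 10, 0.8  # 10 drug-treatment cases, assumed 80% success probability

# Single most common outcome: 8 successes out of 10 (~.30)
print(round(binom_pmf(8, n, p), 3))

# Rare region: 4 or fewer successes (< 1% combined)
print(round(sum(binom_pmf(r, n, p) for r in range(5)), 4))

# Common region: 7, 8, or 9 successes (~77%)
print(round(sum(binom_pmf(r, n, p) for r in (7, 8, 9)), 2))
```

Changing `p` lets you see how the whole distribution shifts when the assumed success probability changes.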

So, if you selected a national random sample of 10 drug cases, how many successful cases
would you expect to find? The answer is “8 successes” because this is the single most common
outcome within this binomial probability distribution. If you were examining cases at a national
level, this distribution suggests that the national probability of success for drug treatment is
80%.

Given that the national success probability is assumed to be 80%, what would it mean if
your sample had only 4 successes among 10 randomly selected cases? There are two possible
explanations for this "rare" outcome: (1) it was due to sampling error (i.e., the binomial
distribution indicates that you have a less than 1% chance of getting 4 or fewer successes by
chance alone and this is one of those rare times that you got such a number) or (2) the national
claim of an 80% success rate is wrong and the real population value is closer to your sample
value of 4 (i.e., 40% success rate). As will be described in later chapters, how you decide which
of these two explanations is correct represents the basic process used to test hypotheses.

Binomial probabilities can be computed for samples with any number of cases. However,
criminologists often replace the binomial distribution with another sampling distribution when
dealing with larger samples (n > 50). This alternative is usually the standard normal (z)
probability distribution.

2. Standard normal (z) distribution

The standard normal (z) distribution (also called the “normal curve” or “bell-shaped curve”) is
a sampling distribution commonly used to make statistical inferences for all types of
quantitative variables. It represents all possible outcomes of a sample statistic when applied to
large samples.
Standard normal (z) distribution: A hypothetical sampling distribution of probabilities that represents all possible outcomes of a sample statistic for large samples

This sampling distribution has three useful properties for identifying rare and common statistic outcomes: (1) the value of the mode, median, and mean are the same, (2) the distribution is symmetrical around the mean, and (3) there is a fixed percentage of cases that fall within defined areas of the distribution. The graph below shows these basic properties of the normal curve.






The Standard Normal (z) Distribution



It is easy to interpret this graph of the normal curve if you know four facts:

• First, the mean is at the center of a normal curve. The middle vertical line represents the
mean. The numbers along the x-axis are referred to as “standardized z-scores.” You can
see that the mean is given a standardized z-value of 0.

• Second, the other numbers along the x-axis indicate the number of standard deviations
that fall above and below the mean. For example, +1.00 indicates one standard
deviation above the mean, while -1.00 indicates one standard deviation below the
mean.

• Third, the space between any two vertical lines in this graph represents a specific
proportion of cases within the distribution. You will learn about these specific
proportions later. For now, you should recognize the areas closest to the mean
(between -1.00 and +1.00) are the largest, while the areas beyond two standard
deviations from the mean (beyond -2.00 or +2.00) are smaller.

• Fourth, the entire graph covers all cases in a distribution. Half (50%) of the cases in a
normal distribution fall above the mean and half (50%) fall below the mean.

Like all other sampling distributions, the normal curve is a probability distribution that helps us
to identify rare and common outcomes for the purpose of making statistical inferences. Simply
looking at the normal distribution allows us to draw the following conclusions about samples
that come from populations with a normal distribution of scores:

• Samples should rarely produce statistical values that are greater than or less than two
standard deviations from the mean.

• Samples should commonly produce statistical values that fall near the mean (e.g., within
one standard deviation above and below the mean).

• Nearly all samples should produce statistical values that fall within two standard
deviations of the mean.
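These fixed areas under the normal curve can be computed from the error function in Python's standard library. A minimal sketch of the three conclusions above:

```python
from math import erf, sqrt

def area_within(z):
    """Proportion of a standard normal distribution within ±z of the mean."""
    return erf(z / sqrt(2))

# Common outcomes: about 68% of values fall within one standard deviation
print(round(area_within(1.0), 4))

# Nearly all values fall within two standard deviations (~95%)
print(round(area_within(2.0), 4))

# Rare outcomes: values beyond ±2 standard deviations (~5%)
print(round(1 - area_within(2.0), 4))
```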


3. Student’s t-distribution

The Student’s t-distribution (also called the t-distribution) is a sampling distribution that is very
similar to the normal curve. However, it should be used when the population standard
deviation is unknown or when dealing with small samples (n < 50). Sample size influences the
level of kurtosis in distributions. Small samples change the shape of our sampling distributions from mesokurtic to platykurtic.

Student's t-distribution: A hypothetical sampling distribution of probabilities that represents the distribution of all possible outcomes of a sample statistic; it is similar to the normal curve except that its shape is affected by sample size

Although the t-distribution is symmetric and bell-shaped, it is flatter with longer tails. This means that it is more likely to produce statistical values that fall further from the
mean (e.g., beyond 1 and 2 standard deviations). Therefore, we cannot rely on the normal (z)
distribution to make statistical inferences about populations based on small samples.

The Student’s t-distribution is useful because it accounts for changes in distribution shapes
based on sample size. This allows us to make more accurate statistical inferences about
whether a sample statistic is a common or rare value. The actual shape of the t-distribution
depends on a concept called degrees of freedom (df). Degrees of freedom represent the number of independent observations (i.e., cases) in our sample.

Degrees of freedom (df): The number of independent observations or scores that are "free" to vary when estimating population values from a sample

For the t-distribution, the degrees of freedom for a single sample are defined as n – 1, where n refers to the sample size. The df are reduced by one since
we are using a sample rather than population data.

Although t-distributions are a bit flatter than the normal curve, this difference diminishes as the
sample size increases. In fact, with large samples (n > 50), the t-distribution and the normal
curve are virtually identical. The graph below shows differences in the shape of the normal
curve and two different t-distributions based on sample size. For all distributions, you should
notice that common outcomes are found near the center of the distributions and rare
outcomes are found in their tails.
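Both points can be checked numerically with the standard density formulas for the two curves (the sketch below uses the textbook t-density formula with the gamma function; the chosen test point x = 2.0 and df values are illustrative):

```python
from math import gamma, sqrt, pi, exp

def t_pdf(x, df):
    """Density of Student's t-distribution with df degrees of freedom."""
    c = gamma((df + 1) / 2) / (sqrt(df * pi) * gamma(df / 2))
    return c * (1 + x * x / df) ** (-(df + 1) / 2)

def z_pdf(x):
    """Density of the standard normal (z) distribution."""
    return exp(-x * x / 2) / sqrt(2 * pi)

# With a small sample (df = 4), more probability sits in the tails...
print(t_pdf(2.0, 4) > z_pdf(2.0))   # True

# ...but as df grows large, the t-distribution converges to the normal curve.
print(abs(t_pdf(2.0, 200) - z_pdf(2.0)) < 0.001)   # True
```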

The Standard Normal (z) Distribution and t-distribution



4. Chi-square distribution (!2)

The chi-square distribution is a hypothetical sampling distribution used to test hypotheses
about the relationship between two qualitative variables (e.g., the relationship between gun
ownership (no, yes) and risks of home burglary victimization (victim, non-victim)). Similar to
other sampling distributions, the chi-square distribution represents the probability of all
possible statistical outcomes.
Chi-square distribution (χ²): A hypothetical sampling distribution of probabilities that is used to test hypotheses about the relationship between two qualitative variables

The shape of the chi-square distribution, like the t-distribution, is dependent upon its degrees of freedom (df). However, degrees of freedom for chi-square are not based on sample size. Instead, df for chi-square
distributions are generally based on the number of variable attributes (you will learn to
estimate this later). The graph below shows the different shapes of chi-square distributions for
4 and 10 degrees of freedom (df).

Chi-Square Distributions (χ²)




Notice that chi-square distributions have a positive skew. Values closer to 0 are common
outcomes, while rare outcomes are the values in the far-right tail of the distribution. When
using a chi-square distribution to test hypotheses and make statistical inferences, the areas
designating common and rare outcomes take on special meaning. For example, as will be
illustrated in later chapters, if there is no relationship between two variables, the sample
statistic should produce a chi-square value close to 0. However, a large chi-square value would
indicate a rare outcome, leading you to seriously question the accuracy of the claim that there
is no relationship between these two variables.

The graph below provides a visual representation of how the chi-square distribution is used to
test hypotheses. If there is no relationship between the variables, we should get a sample chi-


square value that falls somewhere in the graph’s non-shaded area. However, we would reject
this hypothesis if the chi-square value falls anywhere in the graph’s shaded area. The shaded
area represents rare sample outcomes, and would lead you to question whether the no-
relationship assumption is true.


Using Chi-Square Distribution to Test Hypotheses


χ 2 values
Shaded area = reject the hypothesis because these are rare outcomes if it is true
Non-shaded area = outcomes consistent with hypothesis
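A small worked example may help here. Using the standard chi-square formula, χ² = Σ[(o − e)²/e], the sketch below computes a chi-square value from a 2×2 table of gun ownership by burglary victimization (the counts are hypothetical, chosen only to illustrate the calculation):

```python
# Hypothetical 2x2 table: gun ownership (rows) by burglary victimization (cols)
observed = [[20, 30],   # no gun: 20 victims, 30 non-victims
            [10, 40]]   # gun:    10 victims, 40 non-victims

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
n = sum(row_totals)

# Expected count in each cell if the two variables are unrelated
expected = [[r * c / n for c in col_totals] for r in row_totals]

# Chi-square statistic: sum of (observed - expected)^2 / expected
chi_sq = sum((o - e) ** 2 / e
             for obs_row, exp_row in zip(observed, expected)
             for o, e in zip(obs_row, exp_row))
print(round(chi_sq, 2))
```

A value near 0 would be a common outcome under the no-relationship hypothesis; whether a larger value (here about 4.76) falls in the shaded "rare" region depends on the df cutoffs you will learn in later chapters.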




5. The F-Distribution

The last major type of sampling distribution used in criminological research is the F-
distribution. This probability distribution, like the chi-square distribution, is used to make
statistical inferences about relationships between two variables.

F-distribution: A sampling distribution of probabilities that is used to test hypotheses about the nature of variation in group differences on a quantitative variable

As you know, all variables "vary" (or else you would have a constant), and this is referred to as variation. When comparing relationships between two variables, you will want to know how much of one variable's variation can be explained by the other variable. The F-distribution is used when we want to apply statistics to explain how much variation in a quantitative variable can be explained by differences in another variable (e.g., how much of the variation in the number of "drug free" days among substance abusers is explained by different types of drug treatment). In particular, the F-distribution allows us to compare the average amounts of explained and unexplained variation.

The F-distribution has many of the same properties of other sampling distributions. It
represents a probability distribution of all possible outcomes. It allows us to make statistical
inferences. Like the chi-square distribution, the F-distribution is also positively skewed.
Common outcomes have an f-value close to 0 and rarer outcomes have values that fall toward
the far-right tail of the distribution. The shape of the F-distribution also varies by its degrees of
freedom (df), but degrees of freedom depend on both the number of groups being compared


and the sample size. The term "f-ratios" is often used instead of "f-values" to define the
numerical scores under this sampling distribution because these scores actually represent the
ratio of the average variation between and within each group in the analysis.

The graph below shows an example of the F-distribution for 19 cases. This particular F-
distribution is based on a comparison of four groups. Degrees of freedom are calculated based
on the number of groups (df1 = # groups(4) - 1 = 3) and sample size (df2 = n(19) - # groups(4) =
15). Notice the location of rare outcomes (the red shaded area) in the far-right tail of the F-
distribution. In this example, getting an f-ratio of 3.3 or larger from a random sample would be
a very rare outcome if the true ratio of population values was 0 (meaning that there is no
difference between group values).

The F-Distribution and Areas Representing Rare (Red) and Common (Gray) Outcomes
[n's= 4 groups, 19 cases]


Red area = rare outcomes if population value is true
Non-shaded area = common outcomes if population value is true
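An f-ratio with this df structure (4 groups, 19 cases) can be computed from raw data. In the sketch below, the "drug free days" scores are hypothetical, invented only to show how the ratio of average between-group to within-group variation is formed:

```python
# Hypothetical "drug-free days" scores for cases in four treatment groups
groups = [[12, 14, 13, 15, 14],
          [10, 9, 11, 10, 12],
          [8, 7, 9, 8],
          [15, 16, 14, 17, 15]]

k = len(groups)                     # number of groups (4)
n = sum(len(g) for g in groups)     # total cases (19)
grand_mean = sum(sum(g) for g in groups) / n

# Variation between group means vs. variation within each group
ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)

df1, df2 = k - 1, n - k             # 3 and 15, as in the example above
f_ratio = (ss_between / df1) / (ss_within / df2)
print(df1, df2, round(f_ratio, 1))
```

Because these invented group means differ sharply, the resulting f-ratio lands far beyond the 3.3 cutoff, i.e., deep in the rare (red) region of this F-distribution.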

Practice Applications for Learning Objective 2 (LO2):


Identify “rare” and “common” outcomes under the following sampling distributions:

Q1. The shaded area represents what outcomes in the
following sampling distribution?

A. Common outcomes at or below the mean
B. Rare outcomes below the mean
C. Common outcomes around the mean
D. Rare outcomes above the mean



Q2. The shaded area represents what outcomes in the
following sampling distribution?

A. Common outcomes at or above the mean
B. Rare outcomes below the mean
C. Common outcomes at or below the mean
D. Rare outcomes above the mean


Q3. The shaded area represents what outcomes in the
following sampling distribution?

A. Common outcomes at or above the mean
B. Rare outcomes below the mean
C. Common outcomes at or below the mean
D. Rare outcomes above the mean

Q4. The shaded area represents what outcomes in the
following F-distribution?

A. Common outcomes if the population value is
true.
B. Rare outcomes if the population value is true
C. The majority of outcomes in a distribution
Correct Answers: Q1(C), Q2(A), Q3(B), and Q4(B)

LO3: Create standardized scores and derive probabilities associated with sampling distributions.

Every sampling distribution is based on a set of standardized scores. Standard scores are
useful. They make statistical values easier to interpret and allow us to directly compare these
values across variables and samples.

Sampling distribution: A hypothetical distribution that represents all possible outcomes of a sample statistic based on an infinite number of samples

Standardized scores: Scores that are adjusted to a common metric so that they can be easily interpreted and compared across variables and samples

Standardized scores are specific to each sampling distribution, meaning that each distribution (e.g., z-distribution, t-distribution) has its own set of scores. Sets of standardized scores are typically presented in tables along with the probability of obtaining any particular score or range of scores. We use distribution-specific tables of standard scores and their associated probabilities to make statistical inferences and determine whether particular outcomes are "rare" or "common".


The table below provides a list of sampling distributions, the name of the associated
standardized scores, and the related conversion formulas (i.e., how we convert sample statistic
scores into standardized scores). We will first focus on creating standardized z-scores under a
standard normal distribution to illustrate the conversion process. Then, we will explain how
probabilities are derived from standardized scores.

Sampling Distributions, Standard Scores, and Conversion Formulas

Sampling Distribution            | Name of Standard Scores | Conversion Formula
Binomial distribution            | Binomial values         | P(r) = nCr × p^r × q^(n−r)
Standard normal (z) distribution | z-scores                | z = (x1 − x̄) / s
Student's t-distribution         | t-scores                | t = (x1 − x̄) / s
Chi-square distribution          | χ²-values               | χ² = Σ [(o − e)² / e]
F-distribution                   | f-ratios                | F = MSBetween Groups / MSWithin Groups



1. Converting Raw Scores to Z-Scores

Z-scores are the standardized scores most commonly used in statistical inference. They standardize individual scores by converting them to "normal deviates." Specifically, these scores tell us how many standard deviations an individual score is from the mean of all scores.

Z-scores: Standardized scores that represent the number of standard deviations an individual score is from the sample mean

Any individual score can be standardized to a z-score by using the following formula:

z = (x1 − x̄) / s, where x1 = individual raw score, x̄ = mean score, and s = standard deviation.

Let's practice converting raw scores (x1) to z-scores. This requires very basic math calculations.

• Assume that the average LA homicide detective solves 40 cases a year and the standard
deviation for solving homicides among detectives is 10. You are an LA detective and you
solve 60 homicides a year. So, x1 = 60, x̄ = 40, and s = 10. What is your homicide arrest


rate when expressed as a z-score?



Answer: z = (x1 − x̄)/s = (60 − 40)/10 = +2.00. This z-score indicates that your
arrest rate is 2.00 standard deviations above the mean since positive z-scores
indicate values greater than the mean. You are a great detective - your rate of
catching bad guys is better than average.


[Figure: number line of sample scores marked in standard deviations (−3.00 to +3.00), with your score at +2.00.]

• Assume that LA prosecutors get an average sentence of 4 years for convicted sex
offenders with a standard deviation of 1 year. You are an LA prosecutor and you just got
a conviction and sentence of 5 years for a sex offender. So, x1 = 5, x̄ = 4, and s = 1. How
does your sex offender's sentence compare to other prosecutor’s sentences when
converted to a z-score?

Answer: z = (x1 − x̄)/s = (5 − 4)/1 = +1.00. This z-score indicates that the sentence
for this sex offender is 1.00 standard deviation above the mean. You are also a
good prosecutor - your sex offender is serving more time than most convicted by
other prosecutors.

However, your performance as a homicide detective was better than your
performance as a DA. Why? Because a z = +2.00 for your detective work is more
above average (i.e., the mean) than a z = +1.00 for your DA work. We can make
these direct comparisons across variables with different numerical scales (i.e.,
number of arrests vs. years in prison) because we have converted both of them
to standardized scores.



[Figure: number line of sample scores marked in standard deviations (−3.00 to +3.00), with your score at +1.00.]


• Now, one last question: What if your convicted sex offender gets only 1 year when the
x̄ = 4, and s = 1?

Answer: When you convert 1 year into a z-score, you get a value of z = −3.00. A
score of 3 standard deviations from the mean is a very rare outcome.
Unfortunately, this z-score is a large negative value. This large negative z-score
tells us that you are far below average, leading some of your critics to claim
that you are "too soft on crime."


[Figure: number line of sample scores marked in standard deviations (−3.00 to +3.00), with your score at −3.00.]

The process of converting sample scores to other standardized values (e.g., t-scores, χ²-values,
f-ratios) is similar to the transformations you just conducted to calculate z-scores. Although the
conversion formulas for each sampling distribution are different (as shown in the previous
table), the same basic principles apply to all sampling distributions.
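The three z-score conversions above can be sketched in a few lines of Python, using the chapter's own illustrative values:

```python
def z_score(x, mean, sd):
    """Number of standard deviations a raw score falls from the mean."""
    return (x - mean) / sd

# LA detective: 60 homicides solved vs. a mean of 40, s = 10
print(z_score(60, 40, 10))   # 2.0 -- two standard deviations above the mean

# LA prosecutor: 5-year sentence vs. a mean of 4 years, s = 1
print(z_score(5, 4, 1))      # 1.0 -- one standard deviation above the mean

# The lenient 1-year sentence from the last example
print(z_score(1, 4, 1))      # -3.0 -- a very rare outcome
```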

2. Converting standard scores to probabilities and percentile ranks

After converting raw sample scores to standardized scores, the next step is to convert
standardized scores into probability values. You can evaluate claims about populations and test
hypotheses by determining the probabilities of particular outcomes (i.e., find whether this a
rare or common outcome). We will illustrate this process by using z-scores and the standard
normal curve, but the same logic applies to deriving probabilities in all types of sampling
distributions.

Raw scores can be converted to z-scores for any quantitative variable. When a variable's spread
of scores or its underlying sampling distribution resembles the standard normal curve, these
standard scores can be easily converted into probabilities and percentile rankings. However, in
order to make these conversions, you need to (1) use a “normal curve table” that links z-scores
to probabilities and (2) know how to interpret this table. Fortunately, all statistics books will
have a “Table of the Areas Under the Normal Curve". The table below represents an
abbreviated version of this normal curve table. A more complete "Normal Curve Table" is
contained in Appendix B of this book. How to derive probabilities and percentile rankings from
a normal curve table is explained below.


The Area Under a Normal Curve for Particular Z-Scores


Column A Column B Column C


z-Score (+ or -) Mean to z Area > z
.00 .0000 .5000
.50 .1915 .3085
1.00 .3413 .1587
1.28 .3997 .1003
1.64 .4495 .0505
1.96 .4750 .0250
2.00 .4772 .0228
2.33 .4901 .0099
3.00 .4987 .0013

Each column of the table conveys information for determining the probability of obtaining
particular z-scores. The highlighted regions in the graphs at the top of columns B and C provide
visual images of the areas of a normal curve that are covered within these columns. For
example, column B is based on the area between the mean and particular standard deviation
scores (i.e., z-scores) from this mean, while column C refers to the area that falls beyond
particular standard deviation scores (z-scores) above or below the mean. Specific interpretations
of values within each column are described below.

• Column A (z): This column includes a list of z-scores. More complete “Normal Curve
Tables” contain far more z-values in this column, often involving successive numbers
from .00 to 3.00 in increments of .01 (i.e., .00, .01, .02, … 2.98, 2.99, 3.00). Because the
normal curve is symmetrical, the values in column A represent both positive and
negative z-scores. For example, your actual z-score may be a positive value (e.g., z = +
1.00) or a negative value (e.g., z = - 1.00), but in either case you would look for the value
of 1.00 in Column A. This is because a negative z-score of (e.g., -1.96) is associated with
the same probability as a positive z-score of the same value (e.g., +1.96). So, just ignore
the sign when you are looking for a particular z-score in a normal curve table.

• Column B (Mean to z): This column displays the proportion of the area under a normal
curve that falls between the mean and a particular z-score. For example, the value of
.3413 in column B indicates that about 34% (.3413 x 100 = 34.13%) of the area under a
normal distribution falls between the mean and 1.00 standard deviation (z-score) from
the mean (i.e., 34% of cases fall within this area under the curve). In other words, there
is a 34% probability that a single score in a sample with a normal distribution will fall
between the mean and +1.00 standard deviation above the mean. Because the normal
curve is symmetrical (i.e. each side is a mirror image of the other), there is also a 34%
probability that a score will fall between the mean and -1.00 standard deviation below


the mean.

If you want to know the percent of cases that fall between the mean and a
particular standard deviation score (z-score), do the following: (1) find the z-
score of interest in column A, (2) read across to column B, and (3) convert this
proportion or probability into a percent (by multiplying the value in column B by
100).

• Column C (Area > z): This column indicates the proportion of cases in a normal
distribution that have more extreme scores than the z-score of interest. More extreme
scores fall further away from the mean on the same side of the distribution (i.e., either
higher than a positive z-score or lower than a negative z-score). For example, the value
of .0250 in column C indicates that only 2.5% (.025 x 100 = 2.5%) of cases within a
normally distributed variable have z-scores that are more extreme than ±1.96. The 1.96
z-score value is obtained by (1) finding the value of .025 in column C and (2) reading
across that row to find the corresponding z-value in column A. Again, because the
normal curve is symmetrical, 2.5% of the cases will be more extreme than 1.96 z-scores
above the mean (+1.96) and 2.5% of the cases will be more extreme than 1.96 z-scores
below the mean (-1.96)
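Both columns of the normal curve table can be reproduced from the error function in Python's standard library; a minimal sketch:

```python
from math import erf, sqrt

def mean_to_z(z):
    """Column B: proportion of the normal curve between the mean and z."""
    return erf(abs(z) / sqrt(2)) / 2

def area_beyond(z):
    """Column C: proportion of the curve more extreme than z."""
    return 0.5 - mean_to_z(z)

print(round(mean_to_z(1.00), 4))    # 0.3413
print(round(area_beyond(1.96), 4))  # 0.025
print(round(area_beyond(2.00), 4))  # 0.0228
```

Note that `abs(z)` mirrors the table's convention: a negative z-score of the same size gives the same probability as its positive counterpart.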

It is also possible to use the normal curve table to generate percentile ranks (e.g., what z-score
represents the “top 1%”, “top 5%, “bottom 2%, or “bottom 10%”?). To answer these questions
about percentile ranks, we use the proportions in column C to find the appropriate z-score in
column A. These examples illustrate how to establish percentile ranks from the normal curve
table:

• If you wanted to know what z-score represents the top 2.5% of cases, you would: (1)
convert 2.5% into a proportion (.025), (2) find the value closest to .025 in column C, (3)
read across that row to find the z-score of 1.96 in column A, and (4) place a positive sign
in front of this z-score of +1.96 (because the “top” 2.5% represents z-scores above the
mean).

You should notice that a score in the top 2.5% (z = +1.96) is higher than 97.5% of all
other scores in a normal distribution. This should make sense since subtracting 2.5%
from 100% (the total area under a normal curve) gives you 97.5% (100% - 2.5% = 97.5%).
By similar logic, scoring among the top 16% of the LSAT exam test takers (area = .1587; z
= + 1.00) is equivalent to saying that you outperformed about 84% of the other test
takers.

• To find the z-score representing the bottom 2.5%, follow the same logic: (1) convert
2.5% into a proportion (.025), (2) find the value closest to .025 in column C, (3) read
across that row to find the z-value of 1.96 in column A, and (4) place a negative sign in
front of this z-score of -1.96 (because the "bottom" 2.5% represents z-scores below the
mean). If your z-score on an exam is -1.96 (i.e., you are in the bottom 2.5%), you need

serious help. Nearly everyone else in the class (97.5% of students) performed better
than you.

Finally, the last important application of z-scores (and, by extension, other standardized scores)
involves finding the scores that represent specific areas of a normal distribution. We are most
often interested in finding the middle or most common segments of a distribution.

Let’s say that we wanted to identify the z-scores that represent the "middle 68% of cases" in a
normal distribution. The graph below shows the location of these cases.

Middle 68% of Cases in Normal Curve



Use these steps to find the z-scores that represent the middle 68% of cases in a normal
distribution.

1. Divide 68% by 2 (68%/2 = 34).

2. Convert this value into a proportion (34/100 = .34).

3. Use a Normal Curve Table and find the value closest to .34 in column B.

4. Read over to column A to find the z-score (1.00) that corresponds with this value of
.34. A z-score of 1.00 tells you that 34% of the cases in normal curve are between
the mean and +1.00 standard deviation above the mean.

5. Because the normal curve is symmetrical, 34% of the cases will also fall between the
mean and -1.00 standard deviation (z-score) below the mean.

6. Add these percentages together (34% + 34% = 68%) to reach the conclusion that
68% of the scores on a normally distributed variable will fall within 1 standard
deviation of the mean (i.e., the range of ± 1.00 z-score covers 68% of the area of a
normal distribution).

To find the z-scores that capture the "middle range" of any percent value, you simply replace
the value of 68% in Step 1 with your new value (e.g., 80%) and follow the remaining steps. Try
it. For the "middle 80%", you should get z-scores equal to ± 1.28 (i.e., the middle 80% of scores
within a normal distribution will range from -1.28 to +1.28 z-scores from the mean). If you use


the same steps to calculate the z-scores associated with the "middle 95%", you should find z = ±
1.96.
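The same middle-percent lookups can be sketched without a printed table by running a bisection search on the area within ±z (the helper below is a hypothetical illustration, not a required tool):

```python
from math import erf, sqrt

def middle_z(percent):
    """Find the ±z range enclosing the middle `percent` of a normal curve."""
    target = percent / 100          # e.g., 68% -> .68
    lo, hi = 0.0, 5.0
    for _ in range(60):             # bisection on the area within ±z
        mid = (lo + hi) / 2
        if erf(mid / sqrt(2)) < target:
            lo = mid
        else:
            hi = mid
    return round((lo + hi) / 2, 2)

print(middle_z(68))   # 0.99 -- the table's rounded value of ±1.00
print(middle_z(80))   # 1.28
print(middle_z(95))   # 1.96
```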

So, why do we want to know which z-scores represent the "middle 68%, "middle 95%", or any
other percent value? This information will allow you to make sound statistical inferences or
estimates about unknown population values. This is a necessary and important step in all
hypothesis tests.

Practice Applications for Learning Objective 3 (LO3):


Answer the following questions about converting raw scores to z-scores, probabilities, and
percentile ranks:

Q1. If scores on a law enforcement entrance exam are normally distributed with a mean of 70 (x̄ = 70)
and standard deviation of 10 (s=10), what standardized score (z-score) represents a person with
an exam score of 90 (x = 90)?
A. z = 0
B. z = +1.00
C. z = +1.50
D. z = +2.00
E. z = -2.00

Q2. If scores on a law enforcement entrance exam are normally distributed with a mean of 70 (x̄ = 70)
and standard deviation of 10 (s = 10), what percentage of the exam takers will get a score
between 70 and 90?
A. 2.28%
B. 15.87%
C. 34.13%
D. 47.72%
E. 97.72%


Q3. If scores on a law enforcement entrance exam are normally distributed with a mean of 70 (x̄ = 70)
and standard deviation of 10 (s=10), what percentage of the exam takers will get a score of 90 or
higher (x=90)?
A. 2.28%
B. 15.87%
C. 34.13%
D. 47.72%
E. 97.72%



Q4. What z-score represents the "top 5%" of the area of
a normal distribution (the "red shaded area" in the
adjoining graph)?
A. z = +1.00
B. z = +1.28
C. z = +1.64
D. z = -1.64

E. z = +1.96

Q5. What z-score represents the bottom 10% of the scores on a normally distributed variable?
A. z = -1.00
B. z = -1.28
C. z = +1.28
D. z = -1.64
E. z = -1.96

Q6. What z-scores represent the middle 80% of the scores on a normally distributed variable?
A. z = ± 1.00
B. z = ± 1.28
C. z = ± 1.64
D. z = ± 1.96
E. z = ± 2.33
Correct Answers: Q1(D), Q2(D), Q3(A), Q4(C), Q5(B), and Q6(B)

LO4. Demonstrate the impact of the standard error and degrees of freedom on
the shape of sampling distributions
So far, we have discussed different types of sampling distributions, how to identify rare and
common outcomes under them, how to convert raw scores into standardized scores, and how
to derive probabilities and percentile ranks from these standardized scores.

To complete your understanding of sampling distributions and statistical inference, you need to
know how two particular factors affect the shape (i.e., dispersion) of values in a sampling
distribution. These two factors are (1) the standard error of population estimates and (2) the
degrees of freedom associated with sampling distributions.

1. Standard error of the estimate

You already know that all sampling distributions used by criminologists and statisticians share
some common properties. For example, all of them have areas that represent common
outcomes (indicated by the highest probabilities or peak in the distribution) and they all have a
specific dispersion or spread of scores around that peak area.



In a frequency distribution, the spread of scores around the mean is measured by the standard
deviation or variance. In sampling distributions, the standard deviation of all possible sample
means is called the standard error of the estimate (and, in this case, the standard error of the
means). The standard error can be estimated for various kinds of sample statistics (e.g., median, range, mean).

Standard error of the estimate: The standard deviation of a sampling distribution and a measure of sampling error

However, in all applications, the standard error is the estimated standard deviation of a sampling distribution and a measure of sampling error. It measures the discrepancy between population values and sample estimates of them.

The standard error of a sampling distribution and the standard deviation for any quantitative
variable are similar in three respects.

1. The wider the dispersion of scores, the larger the numerical value for both estimates of
the standard error and standard deviation.

2. Both are standardized measures because they are divided by their respective sample
sizes. The formula for the standard error of the population mean, like the standard
deviation formula, includes the sample size in the denominator.

SEμ = σ / √n, where σ = the population standard deviation and n = sample size.

3. Both measures are influenced by sample size. In general, larger samples usually produce
smaller standard errors because (1) larger samples provide more consistent estimates of
population means and, as a result, (2) the spread or dispersion of these sample
estimates is more concentrated (i.e., tightly clustered) within these sampling
distributions.

The following graphs illustrate the effect of sample size on the standard error of the estimate.
Notice the greater concentration of scores (i.e., the lower standard error) when the sample size
for the sampling distribution of means increases from 5 cases to 20 cases.

[Two histograms, "Distribution of Sample Means (n = 5 cases in 600 samples)" and "Distribution of Sample Means (n = 20 cases in 600 samples)," plotted on the same 0 to 20 scale: the sample means based on n = 20 cluster much more tightly around the center than those based on n = 5.]

The different kurtosis levels of these two sampling distributions clearly demonstrate why
criminologists prefer larger samples. If large random samples are repeatedly taken from a
population, statistical theory tells us that the sampling distribution of the means will exhibit a
tighter pattern around the unknown population mean. In other words, increasing the sample
size decreases sampling error (because the standard error is a measure of sampling error)
and narrows the range of estimates for the true population value.
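This effect is easy to verify with a short simulation, sketched below using only Python's standard library. The population (scores uniform between 0 and 20), the number of samples, and the function name are illustrative choices, not part of the text's examples:

```python
import random
import statistics

def simulated_standard_error(sample_size, n_samples=600, seed=42):
    """Standard deviation of the means of repeated random samples drawn
    from a hypothetical population of scores uniform between 0 and 20."""
    rng = random.Random(seed)
    means = [
        statistics.mean(rng.uniform(0, 20) for _ in range(sample_size))
        for _ in range(n_samples)
    ]
    return statistics.stdev(means)

se_small = simulated_standard_error(5)   # n = 5 cases per sample
se_large = simulated_standard_error(20)  # n = 20 cases per sample
print(se_small, se_large)  # the larger samples yield the smaller standard error
```

As in the graphs described above, quadrupling the sample size roughly halves the standard error.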

2. Degrees of Freedom

The shape of most sampling distributions (except the normal curve) is also affected by their
degrees of freedom (i.e., the number of scores that are "free" to vary when estimating
population values from sample data). We previously illustrated the effect of degrees of
freedom on the shape of the t- and chi-square distributions.

Another way to demonstrate the influence of degrees of freedom on statistical inferences is to
compare the standardized scores associated with particular probabilities across sampling
distributions that vary in degrees of freedom. For example, let’s look at the standardized scores
in two different sampling distributions that represent the “top 5%" of cases in each of them.
These “top 5%" scores for different degrees of freedom are summarized in the table below.

Standardized Values Representing the Top 5% of Cases for
Different Degrees of Freedom for the t- and f-Distributions.
Degrees of Freedom (df) t-score f-ratio*
2 > 2.920 > 18.5
4 > 2.132 > 7.71
6 > 1.943 > 5.99
8 > 1.860 > 5.32
10 > 1.812 > 4.96
20 > 1.725 > 4.35
30 > 1.697 > 4.17
40 > 1.684 > 4.08
50 > 1.676 > 4.03
*The f-ratios in this table are based on 1 and n-k degrees of freedom.



For both the t- and f-distribution, the standardized values representing the top 5% of cases
decrease as the number of degrees of freedom increase. For example, the top 5% of cases in
the t-distribution fall above a t-score of 2.920 with 2 df, above 2.132 with 4 df, above 1.943
with 6 df, and so forth. A similar reduction occurs in the f-ratios as the degrees of
freedom increase. This pattern indicates that both types of sampling
distributions become more compressed, or tightly clustered, as the degrees of freedom
increase. We previously discussed a comparable pattern when we examined the effects of
sample size on the standard error. This similarity is explained by the fact that the degrees of
freedom for both the t- and f-distributions are a function of sample size.
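The shrinking cutoffs in this table can be approximated by simulation. The sketch below is a hypothetical illustration (the function name, repetition count, and seed are my choices, not the text's): it builds t-scores from repeated samples of a standard normal population and reads off the value cutting off the top 5%.

```python
import random
import statistics

def simulated_t_cutoff(df, reps=20_000, seed=1):
    """Monte Carlo estimate of the t-score cutting off the top 5% of the
    t-distribution with the given degrees of freedom (df = n - 1)."""
    rng = random.Random(seed)
    n = df + 1
    t_scores = []
    for _ in range(reps):
        sample = [rng.gauss(0, 1) for _ in range(n)]
        mean = statistics.mean(sample)
        sd = statistics.stdev(sample)
        t_scores.append(mean / (sd / n ** 0.5))  # the t-statistic
    t_scores.sort()
    return t_scores[int(0.95 * reps)]  # empirical 95th percentile

# Cutoffs shrink as the degrees of freedom grow, matching the table above.
print(simulated_t_cutoff(2), simulated_t_cutoff(30))
```

With this many repetitions, the df = 2 estimate lands near the tabled 2.920 and the df = 30 estimate near 1.697.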

Practice Applications for Learning Objective 4 (LO4):


Answer the following questions about the standard error, degrees of freedom (df) and the shape
of sampling distribution:

Q1. How is the standard error of sampling distributions influenced by sample size?

A. As the sample size increases, the size of the standard error also increases.
B. As the sample size increases, the size of the standard error decreases.
C. The size of the standard error will be identical for both large and small samples.


Q2. How is the shape of the t- and F-distributions affected by the degrees of freedom (df)?

A. As the degrees of freedom increase, these sampling distributions become more dispersed
and flatter.
B. As the degrees of freedom increase, these sampling distributions become more compressed
and tightly clustered.
C. The shape of these distributions is identical regardless of their degrees of freedom.


Q3. Which of these sampling distributions is based on the larger sample size?

A. The red sampling distribution
B. The blue sampling distribution



Q4. Which of these sampling distributions has the larger standard error?

A. The red sampling distribution
B. The blue sampling distribution

Q5. As sampling distributions, the standard deviation of the sample means is larger in which of these distributions?

A. The red sampling distribution
B. The blue sampling distribution
Correct Answers: Q1(B), Q2(B), Q3(B), Q4(A), and Q5(A)

Review

Use these reflection questions and directed activities to help understand and apply the material
presented in this chapter.

What?

Can you define the critical concepts covered in this chapter? Describe each of these concepts in
your own words and give at least two examples of each.

• Statistical Inference
• Sampling distributions
• Binomial distribution
• Standard normal (z) distribution
• Student’s t-distribution

• Chi-square distribution
• F-distribution
• Standardized scores
• Z-scores
• Standard error (SE)
• Degrees of freedom (df)


How?

Answer the following questions and complete the associated activities.

• How can sampling distributions be used for making sound statistical inferences?

o Describe the characteristics of sampling distributions that help achieve this goal.

• How do the standard normal (z) distribution and the student's t-distribution differ?

o Describe the basic differences between these two basic sampling distributions.

• How do you convert raw scores to z-scores and z-scores to percentile ranks based on the
standard normal distribution?

o Perform these conversions to percentile ranks using the following data: a mean
of 5, a standard deviation of 2, and raw score of 10 for a normally distributed
variable.
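One way to check your answer to this conversion exercise is a quick sketch with Python's standard-library NormalDist class (the variable names here are mine):

```python
from statistics import NormalDist

# Review exercise data from the text: mean 5, standard deviation 2, raw score 10.
mean, sd, raw = 5, 2, 10

z = (raw - mean) / sd                    # z = (10 - 5) / 2 = 2.5
percentile = NormalDist().cdf(z) * 100   # percentile rank under the normal curve
print(z, round(percentile, 1))
```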



When?

Answer the following questions.

• When should criminologists use the principles of statistical inference?

• When is the binomial distribution used in criminological research?

• When is the chi-square distribution used in criminological research?

• When is the f-distribution used in criminological research?



Why?

Answer the following questions.

• Why do you want large random samples in criminological research?

• Why does knowing this information about statistical inference and sampling
distributions help you understand the proper application and misuses of statistics in
criminological studies?






Analyzing Criminological Data Chapter 8

CHAPTER 8:
Estimating Population Values from Sample Data

This chapter focuses on how confidence intervals are applied to sample data to provide
accurate estimates of unknown population values. Understanding the concepts and learning
objectives in this chapter will improve your skills in conducting and evaluating criminological
research.

Concepts:

- Sample statistics
- Population parameters
- Point estimate
- Confidence intervals
- Confidence level
- Margin of error

Learning Objectives (LO):

LO1: Explain how sample statistics are used to estimate population parameters

LO2: Calculate and interpret confidence intervals from large random samples

LO3: Calculate and interpret confidence intervals from small random samples

LO4: Demonstrate how the width of a confidence interval is influenced by the confidence level, the sample size, and the standard error of the estimate

Introduction
As described in the previous chapters, criminologists use sample data to make statistical
inferences about unknown population values. You should now know the “best practice”
methods for selecting samples and measuring concepts (e.g., select large random samples, have
high response rates, use valid/reliable measures) that are essential for making sound statistical
inferences. This knowledge is critical. Your ability to make sound statistical inferences from
sample data is the basic foundation for answering many criminal justice research questions.

In this chapter, we extend your basic knowledge of statistical inference and sampling
distributions. You will learn how sample data is used to make two types of statistical inferences.
The first involves using sample data to estimate the true population value. The second involves
using sample data to assess how confident we are that this estimate is accurate. These two
types of statistical inference form the basis for every inference we make about population
values from sample data.


We begin by reviewing the principles of statistical inference and explaining how sample values
are used to estimate population values.

LO1: Explain how sample statistics are used to estimate population parameters

You know that criminologists make statistical inferences whenever they use descriptive
statistics from sample data to make claims or test hypotheses about the characteristics of a
larger population. Because population values are typically unknown, we rely on the principles of
statistical inference to estimate them.

Statisticians use more concise terminology to describe the relationship between sample and
population values. In particular, they define statistical inference as “the process of using sample
statistics to estimate or evaluate population parameters”.

1. Sample Statistics and Population Parameters

Sample statistics are descriptive measures of a variable that are computed from a sample of
observations. They represent a variable’s central tendency (e.g., mean, median), dispersion
(e.g., standard deviation, variance), or other characteristic (e.g., proportion/percentage of cases
in a particular category). Latin characters are used to signify sample statistics in symbolic form (e.g., x̄ = sample mean; s = sample standard deviation; se = sample standard error).

Sample statistics: Measures of a variable's characteristics that are derived from a sample of observations, symbolized by Latin characters

Population parameters are descriptive measures of a variable that are based on observations of
an entire population. Like sample statistics, they include a variable’s central tendency,
dispersion, or other characteristics. However, Greek characters are used to signify population parameters (e.g., μ = population mean; σ = population standard deviation; σμ = population standard error of the mean).

Population parameters: Measures of a variable's characteristics that are derived by observing an entire population, symbolized by Greek characters

Sample data can sometimes be confused for population data. For example, many census data
analysts (e.g., users of the decennial census of American households, the national census of jail
and prison inmates) assume that the descriptive statistics in these reports represent actual
population parameters. However, these statistical measures are better classified as sample
statistics. Census data do not include all U.S. residents or inmates (e.g., the U.S. census
undercounts the homeless and migrants; not all correctional institutions respond to this
national survey). This issue highlights two important points: (1) most criminological research
involves computing sample statistics to represent population parameters and (2) there is a
need for statistical inference even in data that is assumed to represent entire populations.



Let’s use this new language of sample statistics and population parameters to describe the role
of sampling distributions in statistical inference.

2. Sampling Distributions and Estimating Population Parameters

You know that sampling distributions are used to make statistical inferences. As probability
distributions, sampling distributions help us to identify rare (unexpected) and common
(expected) outcomes of any descriptive statistic (e.g., mean, proportion, median). In particular,
they represent the distribution of all possible outcomes of a sample statistic when an infinite
number of random samples are taken from a population.

Now, let’s look closer at two of these sampling distribution properties:

1. Sampling distributions are based on an infinite number of random samples from a
population.

2. Sampling distributions are theoretical or hypothetical distributions of all possible
outcomes from this infinite number of random samples—they are “theoretical or
hypothetical” because we could never select an infinite number of anything!

These two properties should cause you some confusion since they suggest that sampling
distributions are not real. Although they are based on an infinite number of random samples,
criminologists rarely take more than one sample in a research study.

So, if these theoretical sampling distributions derive from an infinite number of samples (and
we can’t take an infinite number of anything), how can they help us make sound inferences
from only one random sample? In other words, what’s the big deal about sampling distributions
and why are they considered the core foundation for using sample statistics to estimate
population parameters?

The statistician’s answer to this “big deal” question about sampling distributions is the key to
understanding everything about statistical inferences. Here are the three basic principles that
explain how and why sampling distributions are important for making statistical inferences
from one random sample:

1. If you take an infinite number of large random samples from a population, the sampling
distribution of these sample statistics will “converge” on the true population parameter.
In other words, the true population value will fall within the center of the area
representing “common” sample outcomes (i.e., it will be the modal value). Based on
what statisticians call the “Central Limit Theorem”, the sampling distribution of sample
means will be normally distributed around the true population value if the sample sizes
are large (n > 50).


For example, if you flipped 100 coins (n = 100) an infinite number of times, the sampling
distribution of the sample means will converge on the true population mean of 50 heads
(i.e., 50% of the time you flip a fair coin you should get a head). According to the Central
Limit Theorem, this sampling distribution of the sample means will also be normally
distributed around the mean of 50.

2. Based on the properties of the normal curve, we also know that a fixed percentage of
these sample means will fall within given standard deviations from the true population
mean. In this previous example, these properties of the normal curve indicate that 68%
of these sample means will be within ± 1 standard deviation of this mean of 50, while
95% will be within ± 1.96 and 99% will be within ± 2.58 standard deviations of it.

3. Because we know the properties of sampling distributions, we can make probability-
based inferences about the likelihood of obtaining a particular outcome associated with
just one random sample of the population.

In particular, if we take one large random sample from a population, the single most
likely outcome is that our sample statistic (e.g., x̄) will have the same numerical value as
our unknown population parameter (e.g., μ). This is the case because the most probable
outcome in a sampling distribution is its mean value. Similarly, we have a 68% chance
that our sample estimate is within ± 1 standard deviation from this true population
parameter (and a 95% chance that it is within ± 1.96 and a 99% chance that it will be
within ± 2.58 standard deviations from this true population parameter).
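The coin-flipping example in principle 1 can be simulated directly. This is an illustrative sketch: 10,000 samples stand in for the "infinite" number, and the seed is an arbitrary choice.

```python
import random
import statistics

# 10,000 samples of 100 fair-coin flips each.
rng = random.Random(7)
sample_means = [
    sum(rng.random() < 0.5 for _ in range(100)) for _ in range(10_000)
]

center = statistics.mean(sample_means)  # converges on the true mean of 50 heads
se = statistics.stdev(sample_means)     # near the theoretical sqrt(100 * .5 * .5) = 5
print(center, se)
```

The distribution of these simulated sample means is centered very close to the true population value of 50, as the Central Limit Theorem predicts.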

So, here’s a short answer to the question about why sampling distributions are a “big deal” for
making statistical inferences about unknown population parameters:

“Knowing the properties of sampling distributions enables criminologists to use one
random sample to derive (1) a single best estimate of the population parameter (i.e., a
“point estimate”) and (2) a range of estimated values that have a given probability of
including this unknown population parameter (i.e., a “confidence interval”).”

Next, you will learn how to construct and interpret these point estimates and confidence
intervals from sample data.








Practice Applications for Learning Objective 1 (LO1):


Answer the following questions about sample statistics and estimating population parameters:

Q1. What is the proper symbol for the sample mean?
A. x̄
B. μ
C. s
D. σ


Q2. What is the proper symbol for the population mean?
A. x̄
B. μ
C. s
D. σ


Q3. A Gallup poll finds that 65% of senior citizens are afraid to walk in their neighborhood at night.
Are these results based on data from a sample or population?
A. Sample data
B. Population data
C. Can’t tell from the statement whether it is based on sample or population data


Q4. If you take a large random sample of 500 people and compute the sample mean for a particular
variable (x̄ = 50), your single best guess of the population mean (μ) is
A. 50
B. 500
C. .10 (50/500)
D. 10 (500/50)


Q5. What is the best guess of the numerical
value of the population mean based on this
sampling distribution of sample means?
A. 3
B. 4
C. 5
D. 6
E. 7



Q6. What is the best guess of the numerical
value of the population mean based on this
sampling distribution of sample means?

A. 6
B. 8
C. 10
D. 12
E. 14

Correct Answers: Q1(A), Q2(B), Q3(A), Q4(A), Q5(C), and Q6(C)

LO2: Calculate and interpret confidence intervals from large random samples

When criminologists use sample data to estimate the numerical values of unknown population
parameters, they are applying the concepts of point estimation and confidence intervals.
Definitions of these concepts and how they are calculated and interpreted in large random
samples are described below.

1. Point Estimates and Confidence Intervals

Within the context of statistical inference, a point estimate is a sample statistic that represents the single best estimate of the unknown population parameter. In most criminological research, this point estimate is usually a variable's mean (x̄) in the sample or the proportion/percent (p) of cases that fall within a particular category of the variable.

Point estimate: A sample statistic that represents the single best estimate of the unknown population parameter

Examples of point estimates in criminological research include the following:

• The average convicted sex offender in a sample has 6.2 prior arrests (x̄ = 6.2)
  → The point estimate of the population mean is 6.2 (μ = 6.2).

• The average homicide detective in a sample solves 12 homicides per year (x̄ = 12)
  → The point estimate of the population mean is 12 (μ = 12).

• The success rate in a sample of drug courts is 75% (psample = 75%)
  → The point estimate of the population proportion is 75% (Ppopulation = 75%).

• The proportion of death sentences in a sample of murderers is 20% (psample = 20%)
  → The point estimate of the population proportion is 20% (Ppopulation = 20%).


A confidence interval represents a range of values that have a particular probability of capturing the true population parameter. These confidence intervals are centered around the point estimate derived from our sample statistics. Confidence intervals are always based on a particular confidence level (e.g., 68%, 95%, or 99%) that represents the probability that this interval contains the unknown population parameter.

Confidence interval: A range of values with a particular probability of capturing the true population parameter, symbolized CI

Examples of statements that we can make based on confidence intervals include the following:

• We are 68% confident that the average convicted sex offender has between 5 and 7
prior arrests.

• We are 80% confident that the average homicide detective solves between 8 and 16
murders per year.

• We are 95% confident that between 65% and 85% of drug addicts get successful
treatment.

• We are 99% confident that between 5% and 35% of convicted murderers are given a
death sentence.

To calculate a confidence interval, you need particular information about the sample and
sampling distribution. This required information includes:

• The random selection of a sample from the population.

• The value of particular sample statistics (e.g., the sample size, the variable’s mean,
standard deviation, proportion of cases in a particular category), and

• The correct sampling distribution (e.g., we use a normal curve when n > 50; we use the
t-distribution when n < 50).

We must also establish a confidence level and calculate estimates of the standard error and
margin of error. These concepts (i.e., confidence levels, standard error, and margin of error) and
their interrelationships are described below.

2. Confidence Levels, Standard Errors, and Margin of Error

Confidence level: The probability that a confidence interval will capture the true population parameter; symbolized CL

A confidence level represents the probability that a confidence interval will capture the true population parameter. It tells you how confident you should be that the true population value falls within a confidence interval. For example, if you say, "I am 95% confident that the average offender sentenced to prison will serve between 4 and 8 years," you are using a 95% confidence level
(and the confidence interval is 4 to 8). Researchers determine which confidence level to use.
Probability values that are commonly used for confidence levels include 68%, 80%, 90%, 95%,
and 99%. A researcher may use a lower confidence level (68%) or higher confidence level (99%)
for estimating a population parameter, depending on their research question and how sure
they must be that their interval captures the true population value.

Confidence levels are directly tied to sampling distributions. In fact, confidence levels represent
the probability of the various “middle ranges” of a sampling distribution. For example, you
learned in the previous chapter that the “middle 68%” of the normal curve is represented by
the z-scores of ± 1.00. This middle value is actually a confidence level (i.e., we are 68%
confident that the true population parameter falls within ± 1.00 z-scores from the sample
mean). Similarly, the “middle 95%” is the 95% confidence level and is represented by z-scores
of ± 1.96.

When computing confidence intervals, the selected confidence level is converted into
standardized scores. For large samples (n > 50), confidence levels are converted into z-scores.
For small samples (n < 50), they are converted into t-scores. The table below shows the
conversion of confidence levels into z-scores for the most commonly used confidence levels.

Converting Confidence Levels into Standardized Scores in Large Samples

Confidence Level (CL) Location in Sampling Distribution Standardized Scores


68 % CL “Middle 68% of Cases” z = ± 1.00
80 % CL “Middle 80% of Cases” z = ± 1.28
90 % CL “Middle 90% of Cases” z = ± 1.64
95 % CL “Middle 95% of Cases” z = ± 1.96
98 % CL “Middle 98% of Cases” z = ± 2.33
99 % CL “Middle 99% of Cases” z = ± 2.58
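The conversions in this table can be reproduced with Python's standard-library NormalDist class (a sketch; the function name is mine). One caveat: the exact middle-68% z-score is about 0.99, which the table rounds to the conventional ±1.00.

```python
from statistics import NormalDist

def z_for_confidence_level(cl):
    """Return the z-score bounding the middle `cl` share of the standard
    normal curve (e.g., cl = 0.95 -> about 1.96)."""
    return NormalDist().inv_cdf((1 + cl) / 2)

for cl in (0.68, 0.80, 0.90, 0.95, 0.98, 0.99):
    print(f"{cl:.0%} CL -> z = ±{z_for_confidence_level(cl):.2f}")
```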


As described in Chapter 7, the standard error is the standard deviation of the sampling
distribution. It is a measure of sampling error and is estimated in a sample by taking the sample
standard deviation and dividing it by the square root of the sample size (se = s/√n). Because
sampling error reduces our ability to make sound statistical inferences from random samples,
most criminologists try to minimize this error by increasing their sample size. Notice in the
formula for the sample standard error that increasing the sample size (n) decreases the
estimated value of the standard error.


Margin of error: The expected amount of error in a confidence interval; symbolized ME

The final basic concept in developing confidence intervals is the margin of error. The margin of error (ME) is the expected error in a confidence interval. It is the product of (1) the standardized score for the selected confidence level and (2) the standard error. In most polls, the margin of error is the "+ or –" term that is presented in a footnote below the sample estimate of the population parameter. In a large random sample, the margin of error is estimated using the following formula:

ME = ZCL (se), where ZCL represents the z-scores that correspond to the confidence
level and se is the standard error of the sample estimate.

As an example of how to compute the margin of error, assume you have a large sample, a
standard error of 10 (se=10), and you have selected a 95% confidence level (CL). Follow these
steps:
1. Use the table above to convert the 95% CL to z-scores of ± 1.96 ("middle 95%" = z of
± 1.96)

2. Plug in the value for se (se=10) to derive the following answer for the margin of error:
ME = ZCL (se) = 1.96 (10) = 19.6.

The value of 19.6 helps us to find the “lower” and “upper” limits of our confidence interval. We
subtract the margin of error from the mean to obtain the lower limit, and we add it to the mean
to find the upper limit. For example, let’s say that we are estimating a population parameter for
age. If our sample mean is 40 and the margin of error is 19.6 (based on the 95% CL used above),
we can be 95% confident that our true population value (the true average age) falls between
20.4 and 59.6 years of age. You will learn to compute these confidence intervals next.
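The age example above can be written out in a few lines (a sketch using the text's numbers):

```python
# Worked example from the text: se = 10, 95% confidence level (z = ±1.96),
# and a sample mean age of 40.
z_cl = 1.96
se = 10
sample_mean = 40

margin_of_error = z_cl * se              # 1.96 * 10 = 19.6
lower = sample_mean - margin_of_error    # 40 - 19.6 = 20.4
upper = sample_mean + margin_of_error    # 40 + 19.6 = 59.6
print(f"95% CI: {lower:.1f} to {upper:.1f}")
```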

3. Computing Formulas for Confidence Intervals for Means and Proportions in Large Samples

Now that you have a general understanding of the key statistical inference concepts, we are
ready to apply the basic formulas for using confidence intervals in large samples to estimate
population parameters. The table below summarizes the computing formulas for confidence
intervals for large sample means and proportions. The formulas in this table also show the
relationship between the population parameters (in Greek characters) and the sample statistics
(in Latin characters) that are used to estimate them.

Formulas for Confidence Intervals of Means and Proportions (Large Samples)

1. Confidence Interval for Means:          Computing Formula:

   Population Data                         μ ± ZCL σμ

   Sample Data                             x̄ ± ZCL se

2. Confidence Interval for Proportions:    Computing Formula:

   Population Data                         Ppopulation ± ZCL σp

   Sample Data                             psample ± ZCL sep




Because most criminological research uses sample statistics to estimate unknown population
parameters, we almost always use the formulas for sample data for confidence intervals. For
example, to calculate a confidence interval for a population mean, we use the sample estimate
of this mean (x̄) and the sample standard error (se) for developing this interval.

The next section explains how to compute and interpret confidence intervals in large samples.

4. Examples of Computing Confidence Intervals for Means and Proportions in Large Samples

Once you understand the major concepts and principles in estimating population parameters, it
is easy to construct and interpret confidence intervals. Here are the general steps to follow
when you have a large random sample:

1. Take a large random sample from the population (n > 50).

2. Compute basic sample statistics [e.g., a variable's mean (x̄) or proportion in a particular
category (psample), standard deviation (s), and standard error (se = s/√n)].

3. Use the sample mean (x̄) or sample proportion (psample) as the single best point estimate
of the unknown population parameter (e.g., using x̄ to estimate μ; using psample to
estimate Ppopulation).

4. Select a confidence level (e.g., 68%, 95%, 99%) and convert this percentage into
standardized z-scores (e.g., 95% confidence level is equivalent to z-scores of ± 1.96).

5. Develop a confidence interval around this sample mean (x̄) or sample proportion
(psample) by plugging values into one of the following formulas:

• x̄ ± ZCL se (for sample means)
• psample ± ZCL sep (for sample proportions)

6. Interpret the results by describing the (1) numerical value of the point estimate, (2)
confidence level, and (3) “lower limit” and “upper limit” of the confidence interval. We
would describe our results in the following way: “Based on this random sample, our
single best estimate is that the average criminal penalty for convicted murderers in the
U.S. is 6 years and we are 95% confident that the true value lies somewhere between 4
and 8 years”.

A. Example of Confidence Intervals for the Population Mean using a Large Random Sample
Here’s a criminological example of how to calculate and interpret confidence intervals for a
population mean when you have a large random sample.

Criminologists have long been interested in the study of criminal careers. The number of
previous arrests of convicted felons is often used as a valid measure of “criminal
careers.” Given that the average number of previous arrests for convicted U.S. felons is
unknown, you decide to (1) take a large random sample of convicted felons and (2)
develop a confidence interval that provides an estimate of the mean number of previous
arrests among U.S. felons.

Here are the steps in the computation and interpretation of this estimated confidence
interval of the population mean from a large random sample:

1. Draw a large random sample of convicted felons in the U.S. (n= 400).

2. Compute sample statistics for the average number of previous arrests. You calculate
the sample's mean (x̄ = 10.5), its standard deviation (s = 4.0), and the estimate of
sampling error (se = s/√n = 4/√400 = .2).

3. Notice that the sample mean of 10.5 is our single best estimate of the unknown
population mean (i.e., x̄ → μ).

4. Select a confidence level (CL) and convert it into z-scores. In this case, we select a 95%
confidence level that converts into z-scores of ± 1.96 (i.e., 95% CL —> z = ± 1.96).

5. Compute the margin of error (ME) → ME = ± ZCL se = ± 1.96(.2) = ± .39

6. Create the confidence interval (CI) by adding and subtracting the margin of error (ME)
from the estimate of the population mean (x̄): CI = x̄ ± ME
= 10.5 - .39 = 10.11 (lower limit)
= 10.5 + .39 = 10.89 (upper limit)

7. Reach conclusions about the unknown population parameter: Based on this random
sample, the single best estimate of the average number of previous arrests among
convicted U.S. felons is 10.5. We are 95% confident that the actual population value
falls somewhere between 10.1 and 10.9 previous arrests. Thus, the typical felon in
the U.S. has an extensive criminal career.
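The seven steps above can be sketched in a few lines of Python (a minimal illustration using the example's numbers; the variable names are ours, not from the text):

```python
import math

# Large-sample CI for a mean, using the previous-arrests example above:
# n = 400, sample mean = 10.5, s = 4.0, 95% CL -> z = 1.96.
n = 400
mean, s = 10.5, 4.0

se = s / math.sqrt(n)            # standard error: 4 / 20 = 0.2
z_95 = 1.96                      # z-scores for the 95% confidence level
me = z_95 * se                   # margin of error: 1.96 * 0.2 = 0.392
lower, upper = mean - me, mean + me

print(f"95% CI: {lower:.2f} to {upper:.2f}")  # 95% CI: 10.11 to 10.89
```

The text rounds the margin of error to ± .39 before adding and subtracting, which is why its interval (10.11 to 10.89) matches the unrounded computation here to two decimals.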


B. Example of Confidence Interval for the Population Proportion (large random sample)

Here’s a criminological example of how to calculate and interpret a confidence interval for a
population proportion when you have a large random sample.

Gun control is one of the most controversial issues within the criminal justice system.
Questions have long been raised about the prevalence of handgun ownership in U.S.
households. However, the true proportion of American households that own a handgun
is currently unknown. As a criminologist, you decide to (1) take a large random sample
of U.S. households and (2) develop a confidence interval that provides an estimate of
the percent of American households that own a handgun.

Here are the steps in the computation and interpretation of this estimated confidence
interval of the population proportion from a large random sample:

1. Draw a large random sample of U.S. residents (n= 100).

2. Compute sample statistics for the proportion of these residents who have a handgun
in their home (psample = .40), the standard deviation of this proportion (s = .5), and the
estimate of its sampling error (se = s/√n = .5/√100 = .05).

3. Notice that the sample proportion of .40 is our single best estimate of the unknown
population proportion (i.e., psample—> Ppopulation).

4. Select a confidence level (CL) and convert it into z-scores. In this example, we select a
90% confidence level that converts into z-scores of ± 1.64 (i.e., 90% CL —> z= ± 1.64).

5. Compute the margin of error (ME) -- > ME = ± ZCL se = ± 1.64(.05) = ± .08.

6. Create the confidence interval (CI) by adding and subtracting the margin of error (ME)
from the estimate of the population proportion (psample): CI = psample ± ME

= .40 - .08 = .32 (lower range)
= .40 + .08 = .48 (upper range)

7. Reach conclusions about the unknown population parameter: Based on this random
sample, the single best estimate of the proportion of U.S. households that own a
handgun is .40 (40%). We are 90% confident that the actual population proportion
falls somewhere between .32 and .48. These sample data suggest that a sizable
minority of U.S. households own a handgun.
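The same sketch works for a proportion (again a minimal illustration with the example's numbers; variable names are ours):

```python
import math

# Large-sample CI for a proportion, using the handgun-ownership example:
# n = 100, sample proportion = .40, s = .5, 90% CL -> z = 1.64.
n = 100
p_sample = 0.40
s = 0.5

se = s / math.sqrt(n)            # standard error: .5 / 10 = .05
z_90 = 1.64                      # z-scores for the 90% confidence level
me = z_90 * se                   # margin of error: 1.64 * .05 = .082 (~ .08)
lower, upper = p_sample - me, p_sample + me

print(f"90% CI: {lower:.2f} to {upper:.2f}")  # 90% CI: 0.32 to 0.48
```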

For any criminal justice research question that involves estimating population parameters, just
follow these basic steps for developing confidence intervals for means and proportions. You
should practice applying the major concepts of statistical inference and constructing and
interpreting confidence intervals for means and proportions for large random samples.

Practice Applications for Learning Objective 2 (LO2):


Answer the following questions about point estimates and confidence intervals for large random
samples:

Q1. In a large random sample of homicide detectives (n=200), the average detective investigated 100
homicides over a 10-year period (x̄ = 100). What is the point estimate of the population mean in
this example?
A. μ = 10 years
B. Ppopulation= 100%
C. μ = 100 homicide investigations
D. μ = 200 detectives

Q2. If you take a large random sample of 500 people and compute the sample mean for a particular
variable (x̄ = 50), your single best guess of the population mean (μ) is
A. 50
B. 500
C. .10 (50/500)
D. 10 (500/50)


Q3. For a large random sample, the 99% confidence level (CL) is equivalent to what area of the
standard normal (z) sampling distribution?
A. The "middle 80%" of the normal curve
B. The "middle 98%" of the normal curve
C. The "middle 99%" of the normal curve
D. The "middle 95%" of the normal curve
E. The "middle 68%" of the normal curve


Q4. Assume you have a large random sample of shoplifters and want to estimate the average age of
shoplifters in the U.S. population. You have the following sample statistics: x̄ = 25 years old,
se = 5. What is the 95% confidence interval for the average age of shoplifters in the U.S.?
A. 12.1 to 37.9 years old
B. 18.6 to 31.4 years old
C. 15.2 to 34.8 years old
D. 16.8 to 33.2 years old

Q5. Assume you have a large random sample of felony offenders and want to estimate the average
number of charges per defendant in the U.S. population. You have the following sample
statistics: x̄ = 4 charges, se = .5. What is the 99% confidence interval for the estimated average
number of charges per defendant in the U.S. population?
A. 3.18 to 4.82 charges
B. 2.84 to 5.16 charges
C. 2.71 to 5.29 charges
D. 3.5 to 4.5 charges
E. 3.02 to 4.98 charges


Q6. Assume you have a large random sample of murders and wanted to estimate the proportion of
murders in the U.S. that are committed with handguns. You have the following sample
statistics: psample= .60 (60% were murders with handguns), sep = .05. What is the 98%
confidence interval for the estimated proportion of murders in the U.S. that are committed
with handguns?

A. .48 to .72 committed with handguns.
B. .54 to .66 committed with handguns
C. .55 to .65 committed with handguns.
D. .50 to .70 committed with handguns.
E. .52 to .68 committed with handguns

Correct Answers: Q1(C), Q2(A), Q3(C), Q4(C), Q5(C), and Q6(A)

LO3: Calculate and interpret confidence intervals from small random samples
The process for developing confidence intervals is essentially the same for large and small samples.
Regardless of the sample size, the construction of confidence intervals requires that you have a
random sample, compute sample statistics, convert a selected confidence level into
standardized scores, estimate the margin of error, and develop a range of scores around the
point estimate of the unknown population parameter.

As shown in the table below, the formulas for confidence intervals of means and proportions in
small samples are similar to their large-sample counterparts. The only difference is that the t-
distribution, rather than the standard normal (z) distribution, is used to establish the "middle
values" that represent the confidence level. We use the t-distribution and convert the
confidence level (e.g., 95% confidence) into t-scores when a research study is based on small
random samples (n < 50).





Formulas for Confidence Intervals of Means and Proportions (Small Samples)

1. Confidence Interval for Means: Computing Formula:

Population Data µ ± tCL sµ

Sample Data x̄ ± tCL se

2. Confidence Interval for Proportions: Computing Formula:

Population Data PPopulation ± tCL sp

Sample Data PSample ± tCL sep




The properties of the t-distribution were discussed in Chapter 7. Recall that the shape of this
sampling distribution depends on its degrees of freedom (i.e., n - 1, where n is the sample
size). The practical difference between the t- and z-distributions in developing confidence
intervals is that identifying the t-scores that represent a particular confidence level (e.g.,
what t-scores capture the "middle 90%"?) is slightly more difficult, because the answer
depends on the degrees of freedom. How to convert any confidence level to its equivalent
t-scores is described below.

The table below provides an abbreviated list of t-scores that correspond to selective
probabilities of extreme scores (α) based on particular degrees of freedom. Alpha, symbolized
as α, represents the likelihood of obtaining a sample value that falls outside of a particular
confidence interval. For example, if we are dealing with a sample with 10 degrees of freedom,
an alpha of .10 indicates that there is only a 10% chance of obtaining a t-score beyond
± 1.812 (i.e., below -1.812 or above +1.812).
Most summary tables of the t-distribution in statistics books are far more detailed and only
report alpha levels (i.e., they do not include directly the values for confidence levels: 90%, 95%,
99%). However, this table is sufficient to illustrate how we derive particular t-scores to
represent particular confidence levels. Here are the basic conversion steps.

• Convert the value of alpha (e.g., α = .10) in any t-distribution table into a confidence
level (CL) by applying the following formula: CL = 100 - α (100). For example, if α =.05,
the confidence level is 95% (CL95% = 100 - .05 (100) = 95%). The confidence levels
associated with each α value are in parentheses in the table below. To convert a
confidence level into an alpha value, use the following conversion formula: α = (100 -
CL)/ 100.

• Compute the number of degrees of freedom (df) based on the sample size (df = n - 1).


• Find the degrees of freedom (df) in your sample in the rows of a t-table and then read
across the columns until you find the confidence level you have selected. The
intersection of these rows and columns gives you the t-score that corresponds to your
confidence level.

t-scores for selective values of α, confidence levels (CL), and degrees of freedom (df)

df        α = .10 (CL = 90%)    α = .05 (CL = 95%)    α = .01 (CL = 99%)
10        1.812                 2.228                 3.169
15        1.753                 2.131                 2.947
25        1.708                 2.060                 2.787
40        1.684                 2.021                 2.704
50        1.676                 2.009                 2.678
100       1.660                 1.984                 2.626
200       1.657                 1.978                 2.613



Work through these examples to master the logic of converting confidence levels into t-scores.

• If df = 10, what t-scores represent the 90% confidence level (i.e., the "middle 90% of
scores" in a t-distribution)?

Answer: t = ± 1.812. How to get this answer --> (1) convert confidence level of 90% into
an alpha by using the formula α = (100 - CL)/ 100 = (100-90)/100 = 10/100 = .10, (2) find
the row in the t-table for 10 df, and (3) then read over until you find the column marked
" α = .10" to find the proper t-score of ± 1.812.

• If df = 40, what t-scores represent the 95% confidence level (i.e., the "middle 95% of
scores” in a t-distribution)?

Answer: t = ± 2.021. Find this answer on your own using the same steps presented in
the previous example.

• If you have a sample size of 26, a 99% confidence level corresponds to what range of
values on a t-distribution?

Answer: t = ± 2.787. These values are found by (1) converting the sample size to degrees
of freedom (df = n-1 = 26-1 = 25), (2) converting the 99% confidence level to an alpha value
(CL99% --> α = .01), and (3) finding the t-value contained in the intersection of these
rows and columns in the t-table (this t-value is ± 2.787).

• If you have a sample size of 41, a 90% confidence level corresponds to what range of
values on a t-distribution? Answer: t = ± 1.684.
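The conversion steps can also be mirrored in a small Python helper. The lookup table below simply restates the abbreviated t-table above (the function name and table layout are ours):

```python
# t-scores from the abbreviated table above, keyed by (alpha, df).
T_TABLE = {
    (0.10, 10): 1.812, (0.05, 10): 2.228, (0.01, 10): 3.169,
    (0.10, 25): 1.708, (0.05, 25): 2.060, (0.01, 25): 2.787,
    (0.10, 40): 1.684, (0.05, 40): 2.021, (0.01, 40): 2.704,
}

def t_score(cl_percent, n):
    """Convert a confidence level and sample size into the +/- t-score."""
    alpha = round((100 - cl_percent) / 100, 2)   # e.g. 90% CL -> alpha = .10
    df = n - 1                                   # degrees of freedom
    return T_TABLE[(alpha, df)]

print(t_score(90, 11))   # df = 10, alpha = .10 -> 1.812
print(t_score(95, 41))   # df = 40, alpha = .05 -> 2.021
print(t_score(99, 26))   # df = 25, alpha = .01 -> 2.787
```

For degrees of freedom not in this abbreviated table, a statistical library (e.g., SciPy's `scipy.stats.t.ppf`) can compute the exact critical value.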

Now that you can convert confidence levels into t-scores, you will be able to compute and
interpret confidence intervals for small sample means and proportions. Specific applications of
these confidence intervals are described below.

A. Example of Confidence Interval for the Population Mean (for small random sample)

Here’s a criminological example of how to calculate and interpret a confidence interval for a
population mean when you have a small random sample:

Based on national UCR data, the FBI claims that the average dollar loss per burglary
offense was $2,119. You are curious about whether this average dollar loss is a
reasonable estimate of the true population mean (µ). So, you decide to evaluate the
FBI’s claim by (1) taking a small national random sample of burglary cases and (2)
developing a confidence interval that provides an estimate of the mean dollar loss in
U.S. burglaries.

Here are the steps in the computation and interpretation of this estimated confidence
interval of the population mean from a small random sample:

1. Draw a small random sample of U.S. burglary cases (n= 41). Notice that it will be hard
to draw a random sample of all U.S. burglaries (because a complete list of burglaries
is difficult to get from the FBI), but let's assume that we are able to select a random
sample from this population.

2. Compute sample statistics for the average dollar loss in these burglaries, including
the mean (x̄ = $2,000), the standard deviation (s = $500), and an estimate of standard
error (se = s/√n = 500/√41 = 78.1).

3. Notice that the sample mean of $2,000 is our single best estimate of the unknown
population mean (i.e., x̄ —> μ).

4. Select a confidence level (CL) and convert it into t-scores. In this example, we select a
95% confidence level that converts into the t-value of ± 2.021 at 40 degrees of
freedom (i.e., α = .05 —> t 40 df = ± 2.021).



5. Compute the margin of error (ME) -- > ME = ± tCL se = ± 2.021(78.1) = ± 157.8.

6. Create the confidence interval (CI) by adding and subtracting the margin of error
(ME) from the estimate of the population mean (x̄):

CI = x̄ ± ME
= $2,000 – 157.8 = $1,842.2 (lower range)
= $2,000 + 157.8 = $2,157.8 (upper range)

7. Reach conclusions about the unknown population parameter: Based on this random
sample, the single best estimate of the average dollar loss from U.S. burglaries is
$2,000. We are 95% confident that the actual population mean falls somewhere
between $1,842 and $2,158. Because this confidence interval contains the FBI’s
estimate of $2,119, our random sample supports the FBI’s claim about the average
loss in U.S. burglaries.
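As a quick check, the small-sample computation follows the same template as before, with a t-score in place of z (a minimal sketch with the example's numbers; variable names are ours):

```python
import math

# Small-sample CI for a mean, using the burglary-loss example above:
# n = 41 (df = 40), sample mean = $2,000, s = $500, 95% CL -> t = 2.021.
n = 41
mean, s = 2000.0, 500.0

se = s / math.sqrt(n)            # standard error: 500 / sqrt(41) ~ 78.1
t_95 = 2.021                     # t-score at 40 df for the 95% CL
me = t_95 * se                   # margin of error: ~ 157.8
lower, upper = mean - me, mean + me

# The FBI's claimed $2,119 falls inside the interval, supporting its claim.
print(f"95% CI: ${lower:,.0f} to ${upper:,.0f}")  # 95% CI: $1,842 to $2,158
```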




B. Example of Confidence Interval for the Population Proportion (small random sample)

Here’s a criminal justice example of how to calculate and interpret a confidence interval for a
population proportion when you have a small random sample:

Financial expenditures in the criminal justice system have skyrocketed over the last two
decades. A major source of these expenditures at the adjudication stage involves the
costs of using public defenders for indigent defendants. To estimate the prevalence of
using public defenders in felony case processing, you designed a study that included the
following features: (1) you selected a small random sample of felony cases processed in
U.S. state courts and (2) you developed a confidence interval from the sample statistics
to provide an estimate of the percent of U.S. felony cases that involve legal
representation by public defenders.

Here are the steps in the computation and interpretation of an estimated confidence
interval of this population proportion from a small random sample:

1. Draw a small random sample of felony cases in state court (n= 26).

2. Compute sample statistics for the proportion of felony cases with a public defender
(psample = .60), the standard deviation of this proportion (s = .5), and the estimate of
its sampling error (i.e., standard error = se = s/√n = .5/√26 = .10).



3. Notice that the sample proportion of .60 is our single best estimate of the unknown
population proportion (i.e., psample—> Ppopulation).

4. Select a confidence level (CL) and convert it into t-scores. In this example, we select a
90% confidence level that converts to the t-value of ± 1.708 at 25 degrees of freedom
(i.e., α = .10 —> t25 df = ± 1.708).

5. Compute the margin of error (ME) -- > ME = ± tCL se = ± 1.708(.10) = ± .17.

6. Create the confidence interval (CI) by adding and subtracting the margin of error (ME)
from the estimate of the population proportion (psample):
CI = psample ± tCL se = psample ± ME
= .60 - .17 = .43 (lower range)
= .60 + .17 = .77 (upper range)

7. Reach conclusions about the unknown population parameter: Based on this random
sample, the single best estimate of the proportion of U.S. felony cases that involve a
public defender is .60 (60%). We are 90% confident that the actual population
proportion falls somewhere between .43 and .77. These sample data suggest that the
true percentage of U.S. felony cases involving public defenders ranges anywhere from
a large minority (43%) to over three-fourths (77%) of these cases. The range of values
that may reflect the true population proportion is large in this example because we
have a relatively small sample (n=26). Small samples create a relatively large amount
of sampling error.
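The public-defender example can be sketched the same way (a minimal illustration using the example's numbers; variable names are ours):

```python
import math

# Small-sample CI for a proportion, using the public-defender example:
# n = 26 (df = 25), sample proportion = .60, s = .5, 90% CL -> t = 1.708.
n = 26
p_sample = 0.60
s = 0.5

se = s / math.sqrt(n)    # standard error: .5 / sqrt(26) ~ .098 (text rounds to .10)
t_90 = 1.708             # t-score at 25 df for the 90% CL
me = t_90 * se           # margin of error: ~ .17
lower, upper = p_sample - me, p_sample + me

print(f"90% CI: {lower:.2f} to {upper:.2f}")  # 90% CI: 0.43 to 0.77
```

Because the text rounds the standard error to .10 before computing the margin of error, its bounds (.43 and .77) match this computation to two decimals.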


For any criminal justice research question that involves estimating population parameters for
small samples, just follow these basic steps for developing confidence intervals for means and
proportions. You should practice applying the major concepts of statistical inference and
constructing/interpreting confidence intervals for means and proportions for small random
samples.









Practice Applications for Learning Objective 3 (LO3):


Answer the following questions about point estimates and confidence intervals for small random
samples:

Q1. In a small random sample of crime scene analysts (n=35), the average crime analyst had 8 years
of experience in forensic science (x̄ = 8). What is the point estimate of the population mean in
this example?
A. μ = 8 years
B. Ppopulation= 100%
C. μ = 35 investigations
D. μ = 35 crime scene analysts

Q2. If you take a small random sample of 16 people and compute the sample mean for a particular
variable (x̄ = 320), your single best guess of the population mean (μ) is
A. 16
B. 320
C. .05 (16/320)
D. 20 (320/16)


Q3. For a small random sample (n<50), the 80% confidence level (CL) is equivalent to what area of
the student's t sampling distribution?
A. The "middle 80%" of the t-distribution
B. The "middle 90%" of the t-distribution
C. The "middle 95%" of the t-distribution
D. The "middle 98%" of the t-distribution
E. The "middle 99%" of the t-distribution


Q4. Assume you have a small random sample of 11 drug dealers and want to estimate the average
profit per week of all drug dealers in the U.S. population. You have the following sample
statistics: x̄ = $500 per week and se = $50. What is the 95% confidence interval for the average
profit for U.S. drug dealers?
A. $400 to $600 per week
B. $389 to $611 per week
C. $370 to $630 per week
D. $347 to $653 per week

Q5. Assume you have a small random sample of 26 embezzlers and want to estimate the average
age of all embezzlers in the U.S. population. You have the following sample statistics: x̄ = 45
years, se = 5, n = 26. What is the 99% confidence interval for the estimated average age of
embezzlers in the U.S. population?
A. 40 to 50 years old
B. 31.1 to 58.9 years old
C. 27.4 to 60.6 years old
D. 26.3 to 61.7 years old
E. 25.2 to 65.4 years old


Q6. Assume you have a small random sample of 16 chronic juvenile violent offenders and want to
estimate the proportion of chronic juvenile violent offenders in the entire U.S. who are treated
as adults (i.e., waived to adult court or "certified as adults"). You have the following sample
statistics: n=16, psample= .38, sep = .12. What is the 95% confidence interval for the proportion of
chronic juvenile violent offenders who are prosecuted in adult courts in the U.S.?

A. .08 to .72 certified as adults
B. .12 to .64 certified as adults
C. .18 to .58 certified as adults
D. .24 to .52 certified as adults
E. .30 to .46 certified as adults

Correct Answers: Q1(A), Q2(B), Q3(A), Q4(B), Q5(B), and Q6(B)

LO4: Demonstrate how the width of a confidence interval is influenced by the confidence
level, the sample size, and the standard error of the estimate
Confidence intervals are the primary statistical technique used to estimate the value of
unknown population parameters. When making these statistical inferences, however, it is
important to recognize that the width of any confidence interval (i.e., the range of its lowest to
highest estimated value) is affected by several factors. The impact of the confidence level, the
sample size, and the standard error on the width of confidence intervals is summarized below.

Examine the following table. It shows the z- and t-scores associated with particular confidence
levels. Notice that the values of the standardized z- and t-scores increase as the confidence
level increases.





The relationship between confidence levels and width of confidence intervals

Confidence Level (CL)    z-scores    t-scores (df = 10)    t-scores (df = 40)
80%                      ± 1.28      ± 1.372               ± 1.303
90%                      ± 1.64      ± 1.812               ± 1.684
95%                      ± 1.96      ± 2.228               ± 2.021
99%                      ± 2.58      ± 3.169               ± 2.704



For both the normal curve and t-distributions, the width of the confidence interval increases as
the value of the confidence level increases. For example, an 80% confidence level for a large
sample represents z-scores of ± 1.28, but the z-scores more than double to ± 2.58 for a 99%
confidence level. Because the standard scores associated with a confidence level are added to
and subtracted from the point estimate to create confidence intervals (i.e., x̄ ± ZCL se), the
higher the confidence level, the larger the standard scores and the wider the resulting
confidence interval.

The effect of sample size on confidence interval width is tied to its impact on the level of
sampling error. In all confidence intervals, the standard error (se) is estimated using the
following formula: se = s/√n. So, larger sample sizes will produce smaller estimates of the
standard error. Similar to what is done with the confidence level, the value of the standard
error is also added to and subtracted from the point estimate, leading to narrower confidence
intervals in larger samples.

The following conclusions can be reached about the interrelationships between confidence
levels, sample size, the standard error, and the width of confidence intervals:

• Criminological studies that use large confidence levels (e.g., 95% rather than 80% CL)
will generally have wider confidence intervals.

• Criminological studies that are based on larger sample sizes will generally have narrower
confidence intervals.

• Criminological studies with smaller standard errors will have narrower confidence
intervals.
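These three conclusions follow directly from the width formula, width = 2 × (standard score) × se, as a quick sketch shows (the function name is ours):

```python
import math

# Width of a confidence interval: 2 * (standard score) * (s / sqrt(n)).
def ci_width(z, s, n):
    return 2 * z * (s / math.sqrt(n))

# Higher confidence level (larger z) -> wider interval (same s and n):
assert ci_width(2.58, 10, 100) > ci_width(1.28, 10, 100)

# Larger sample size -> narrower interval (same z and s):
assert ci_width(1.96, 10, 400) < ci_width(1.96, 10, 100)

# Smaller standard deviation (hence smaller se) -> narrower interval:
assert ci_width(1.96, 5, 100) < ci_width(1.96, 10, 100)
```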

Remember that criminologists use larger random samples rather than smaller ones to reduce
sampling error and produce narrower confidence intervals for capturing the value of the
unknown population parameters.

Practice Applications for Learning Objective 4 (LO4):


Answer the following questions about the factors that influence the width of confidence
intervals:

Q1. Which of the following confidence levels will have a wider confidence interval?

A. 68% confidence level
B. 80% confidence level
C. 90% confidence level
D. 95% confidence level
E. 98% confidence level


Q2. Which of the following standard deviations (s) will result in a wider confidence interval?

A. s = .5
B. s = 1.0
C. s = 1.5
D. s = 2.0
E. s = 3.0

Q3. Which of the following standard errors (se) will result in a wider confidence interval?

A. se = 5
B. se = 10
C. se = 15
D. se = 20
E. se = 30


Q4. Which of the following margins of error (ME) will result in a wider confidence interval?

A. ME = 5
B. ME = 10
C. ME = 15
D. ME = 20
E. ME = 30


Q5. Which of the following sample sizes (n) will result in a wider confidence interval?

A. n = 25
B. n = 50
C. n = 100
D. n = 500
E. n = 1,000

Correct Answers: Q1(E), Q2(E), Q3(E), Q4(E), and Q5(A)





Review

Use these reflection questions and directed activities to help understand and apply the material
presented in this chapter.

What?
Can you define the critical concepts covered in this chapter? Describe each of these concepts in
your own words and give at least two examples of each.
- Sample statistics
- Population parameters
- Point estimate
- Confidence intervals
- Confidence level
- Margin of error


How?

Answer the following questions and complete the associated activities:

• How are sample statistics used to estimate population parameters? Describe how
probability-based inferences are made.

• How are confidence intervals (CI) calculated from large samples? Describe how these CIs
are interpreted.

• How are confidence intervals (CI) calculated from small samples? Describe how these
CIs are interpreted.

• How is the width of a confidence interval influenced by the confidence level, the sample
size, and the standard error of the estimate? Describe the effect of each of these
influences.




When?

Answer the following questions.

• When should we use sample statistics instead of population parameters?

• When should we calculate and interpret confidence intervals for large samples?

• When should we calculate and interpret confidence intervals for small samples?

• When should we expect the width of a confidence interval to be larger? Smaller?



Why?

Answer the following questions.

• Why do you want to develop confidence intervals in criminological research?

• Why does knowing this information about confidence intervals and sampling
distributions help you understand the proper application and misuses of statistics in
criminological studies?





Evaluating Criminological Data Chapter 9

CHAPTER 9:
Testing Claims and Other Hypotheses

This chapter focuses on the methods used by criminologists to test claims and other
hypotheses. Understanding the concepts and learning objectives in this chapter will improve
your analytical skills for developing hypotheses and evaluating them in criminological research
practices.


Concepts:
- Hypothesis testing
- Types of hypothesis tests
- Null hypotheses (Ho)
- Alternative hypotheses (Ha)
- One-tail and two-tail tests
- Decision rules
- Type I and Type II errors
- Zone of rejection
- Significance level (alpha)
- Critical value(s) of test statistics
- Statistical vs. substantive significance

Learning Objectives (LO):
LO1: Recognize and establish various types of null and alternative hypotheses
LO2: Explain how decision rules in hypothesis testing are established and what factors influence them
LO3: Apply and interpret hypothesis test results about population means and proportions in large samples
LO4: Apply and interpret hypothesis test results about population means and proportions in small samples

Introduction

In the two previous chapters, we’ve described how criminologists use sample data and
knowledge about sampling distributions to make statistical inferences about unknown
population parameters. You learned how to construct and interpret confidence intervals to
estimate population means and proportions for both large and small random samples.

In this chapter, we expand your knowledge of statistical inference to show how sample data are
also used to test claims about population parameters. These claims are called hypotheses and
they derive from criminological research questions. There are various forms of hypothesis
testing (e.g., they may be based on one or two samples, means or proportions, and large or
small samples). However, just like confidence intervals, once you understand the basic
concepts, the process of analyzing and interpreting sample data to test hypothesized claims is
very straightforward.

This chapter focuses on defining and applying the concepts used in testing hypotheses about
population parameters. We begin with a short overview of the logic of hypothesis testing,
before discussing different types of hypotheses and the formal tests used to evaluate claims
about them.

LO1: Recognize and establish various types of null and alternative hypotheses

Hypothesis testing involves the use of sample data to evaluate claims about population
parameters. Claims or predictions about population parameters are called hypotheses.

Hypothesis testing: A statistical approach for evaluating claims about population parameters

The formal process underlying hypothesis testing is simple and involves the following steps:
1. Someone makes a claim about a population value.

2. You make a counter claim based on theory, past research, or sound logical reasoning.

3. Decision rules are established to determine what sample outcomes would support each
claim.

4. Sample data are collected and sample statistics are converted to standardized scores.

5. Standardized scores are used to compare sample statistics and the claimed population
value.

6. Based on the decision rules, we identify the claim that is best supported by the sample
data.

This chapter provides a full description of each step in this process. For now, it is helpful to
think about hypothesis testing as “testing claims under rules.”

Let’s start by examining the two major types of claims underlying most scientific research.

The Null Hypothesis (Ho) and Alternative Hypothesis (Ha)

Statisticians identify two specific types of hypotheses to evaluate claims. These hypotheses are
called the null hypothesis and the alternative hypothesis.

Null hypothesis (Ho): A claim about the assumed value of a population parameter or
differences in the assumed population parameters across groups

The null hypothesis (symbolized Ho) is a claim about an assumed population parameter or
differences in assumed population parameters across groups.

Claims that represent null hypotheses can be about a mean of a variable in the population. For
example, you might hypothesize that the mean financial loss in U.S. street muggings is $825
(symbolized by Ho: μ = $825). Claims can also be about a proportion. For example, you might
hypothesize that the proportion of Americans who support capital punishment is .75
(symbolized by Ho: Ppop = .75). In both cases, we draw a random sample of cases and compute
sample statistics to evaluate the accuracy of these claims. These examples represent a one-
sample test of a population mean and a population proportion.

Claims can also be about group differences in population means (Ho: μ1 - μ2 = 0 where μ1 is the
population mean for group 1 and μ2 is the population mean for group 2) or population
proportions (Ho: P1 - P2 = 0). A claim that there is no difference in average sentence length based
on gender is an example of a null hypothesis about group differences in population means
(Ho: μM – μF = 0). To test this null hypothesis, we use sample statistics to compare the average
sentence for males to the average sentence for females. A claim that there is no difference in
willingness to convict in death penalty cases based on gender is a null hypothesis about group
differences in population proportions (Ho: PM – PF = 0). To test this null hypothesis, we use
sample statistics to compare the proportion of males who convict to the proportion of females
who convict in these cases. These examples represent a two-sample test of differences in
population means and population proportions.

There are four distinct types of hypothesis tests about population mean(s) and population
proportion(s):

1. One-Sample Test of a Population Mean (e.g., Ho: μ = 6.5 years)

2. One-Sample Test of a Population Proportion (e.g., Ho: Ppop = .60)

3. Two-Sample Test of Differences in Population Means (e.g., Ho: μ1 - μ2 = 0)

4. Two-Sample Test of Differences in Population Proportions (e.g., Ho: P1 - P2 = 0)

Since we evaluate claims about differences (e.g., differences in the estimated population
parameters of two groups), the term “null” hypothesis makes sense because it implies that
there are “no differences.” In other words, any differences are “null” (i.e., zero), which is
symbolized as Ho: μ1 - μ2 = 0 for “null” differences in group means and Ho: P1 - P2 = 0 for “null”
differences between group proportions.

It is important to note that when we test hypotheses, we always test null hypotheses. Our
assumption when testing hypotheses about group differences is always that there are "no
differences" across groups. The table below illustrates how various types of claims about
population parameters are translated into symbolic representations of different types of null
hypotheses (Ho).


Copyright@ 2017 162


Evaluating Criminological Data Chapter 9



Converting Verbal Statements into Symbolic Forms of Null Hypotheses (Ho)

Null Hypothesis Statement                                                     Symbolic Ho Form

One Population Mean:
• The true population average starting salary for police officers             Ho: μpo = $45,000
  is $45,000.
• The true population average length of a criminal trial is 3.1 days long.    Ho: μdays = 3.1

One Population Proportion:
• The true population proportion of drug addicts who fail drug rehab          Ho: Pfail = .75
  is .75.
• The true population percent of U.S. residents who own guns is 45%.          Ho: Pgun = 45%

Two Population Means:
• There is no difference in length of prison sentence based on gender.        Ho: μmale - μfem = 0
• There is no difference between homicide rates in Southern and               Ho: μsouth – μns = 0
  Non-Southern states.

Two Population Proportions:
• There is no difference in recidivism rates between white-collar             Ho: PR-wc - PR-v = 0
  offenders and violent offenders.
• There is no difference in risks of pretrial detention between Black         Ho: Ppd-B – Ppd-W = 0
  and White defendants.


The alternative hypothesis (symbolized as Ha) is a rival claim about an assumed population
parameter or differences in the assumed population parameters across groups. The exact
nature of this alternative claim is based on previous research, theory, or sound logical
reasoning.

     Alternative hypothesis (Ha): A rival claim about an assumed population parameter
     or differences in the assumed population parameters across groups

Both null and alternative hypotheses stem directly from research questions.




Research question:       What is the impact of CCTV cameras on crime?

Null hypothesis:         There is no difference in crime between places with CCTV cameras and
                         places without CCTV cameras.

Alternative hypothesis:  Places with CCTV cameras have less crime than places without CCTV
                         cameras.


For all research questions, there are three different types of alternative hypotheses. The three
types of alternative hypothesis for the impact of CCTV cameras on crime include the following:

• Places with CCTV cameras have less crime than places without CCTV cameras.
§ Symbolized as: (Ha: µwith CCTV - µwithout CCTV < 0).

• Places with CCTV cameras have more crime than places without CCTV cameras.
§ Symbolized as: (Ha: µwith CCTV - µwithout CCTV > 0)

• Places with CCTV cameras have more or less crime than places without CCTV
cameras.
§ Symbolized as: (Ha: µwith CCTV - µwithout CCTV ≠ 0)

The first two alternative hypotheses presented above are directional in nature. This is because
they specify the direction of the relationship we expect to find, whether it is more crime (> 0) or
less crime (< 0). Directional alternative hypotheses derive from theory and past research. The
third statement is a non-directional hypothesis. In this case, we use “≠ 0” since we are claiming
that there will be a difference, but we do not specify the precise direction of this difference (i.e.,
differences in crime between places with and without CCTV cameras are expected but we can't
clearly predict which place will have more crime).

Remember, there are always three possible forms of the alternative hypothesis (Ha). The
expected population value or difference between expected population values can be greater
than (>), less than (<), or not equal to (≠) any proposed value or the hypothesized value of no
difference (0).

The table below presents the symbolic form of all possible alternative hypotheses for null
hypotheses about population means and proportions in criminological research.





Symbolic Forms of Alternative Hypotheses (Ha)


Null Hypothesis (Ho) All Possible Alternative Hypotheses (Ha)

One Population Mean

Ho: µ = (any proposed value) Ha: µ > (any proposed value)
Ha: µ < (any proposed value)
Ha: µ ≠ (any proposed value)

One Population Proportion

Ho: Pu = (any proposed value) Ha: Pu > (any proposed value)
Ha: Pu < (any proposed value)

Ha: Pu ≠ (any proposed value)
Two Population Means

Ho: µa - µb = 0 Ha: µa - µb > 0

Ha: µa - µb < 0
Ha: µa - µb ≠ 0

Two Population Proportions

Ho: Pa - Pb = 0 Ha: Pa - Pb > 0
Ha: Pa - Pb < 0

Ha: Pa - Pb ≠ 0




When the alternative hypothesis is directional (Ha: Pu > .55, or Ha: µa - µb < 0), we conduct a
directional test. Directional tests are also called one-tailed tests of the null hypothesis. This is
because only sample outcomes in one tail of the sampling distribution will be in a direction
consistent with the alternative hypothesis (see graphs below).

One-Tailed Test (Left-Tailed Test):    Ha: µa - µb < 0 or Ha: Pa - Pb < 0

One-Tailed Test (Right-Tailed Test):   Ha: µa - µb > 0 or Ha: Pa - Pb > 0





A two-tailed test is used when the alternative hypothesis does not specify a particular
direction--instead, it specifies both directions (Ha: Pu ≠ .55). We use two-tailed tests when
theory and past research do not provide enough information to determine whether the
observed values should be higher or lower. When this happens, we can only predict that the
observed values will be different.

Two-Tailed Test
Ha: µa - µb ≠ 0 or Ha: Pa - Pb ≠ 0







The selected “tails” of a sampling distribution provide the basis for establishing decision rules
that we use to determine when to reject a null hypothesis in favor of the alternative hypothesis.
How these “tails” are used to represent decision rules in hypothesis testing is our next topic.

     One- and Two-Tailed Tests: Hypothesis tests based on areas in the tails of
     sampling distributions that are consistent with the alternative hypothesis (Ha)

Practice Applications for Learning Objective 1 (LO1):


Answer the following questions about the null and alternative hypotheses:

Q1. What is the symbolic form of the following null hypothesis: The average age of convicted
embezzlers is 38 years old?

A. Ho: μAge = 38
B. Ho: PAge = 38%
C. Ho: μAge1 – μAge2 = 0
D. Ho: PAge1 – PAge2 = 0


Q2. What is the symbolic form of the following null hypothesis: Gender differences in the proportion of
residents that have been a victim of violent crime?

A. Ho: μGender = 38
B. Ho: PGender = 38%
C. Ho: μMale – μFemale = 0
D. Ho: PMale – PFemale = 0



Q3. Which of the following null hypotheses is an example of a one-sample test of a population
proportion?

A. Ho: μcharges = 2.5 charges
B. Ho: Pconvict = 74%
C. Ho: μMurder – μRobbery = 0
D. Ho: PWest – PEast = 0


Q4. Which of the following null hypotheses is an example of a two-sample test of population means?

A. Ho: μprison = 3.5 years
B. Ho: Pprison = 65%
C. Ho: μprison1 – μprison2 = 0
D. Ho: Pprison1 – Pprison2 = 0


Q5. Criminologists often compare sex offenders and non-sex offenders on their likelihood of repeat
offending (recidivism). Recidivism is measured as a nominal variable with 2 categories (Yes, No).
Assume that sex offenders are group 1 and non-sex offenders are group 2. Given that most
criminologists contend that sex offenders are more likely to recidivate than non-sex offenders,
what is the proper alternative hypothesis (Ha) for this research question in symbolic form?

A. Ha: P1 - P2 = 0, where P1 = the population proportion for group 1 and P2 = the
population proportion for group 2.
B. Ha: P1 - P2 > 0, where P1 = the population proportion for group 1 and P2 = the
population proportion for group 2.
C. Ha: P1 - P2 < 0, where P1 = the population proportion for group 1 and P2 = the
population proportion for group 2.
D. Ha: P1 - P2 ≠ 0, where P1 = the population proportion for group 1 and P2 = the
population proportion for group 2.

Q6. Which of the following is an example of a two-tailed test?


A. Ha: PDrug > 25%
B. Ha: PDrug < 25%
C. Ha: PDrug = 25%
D. Ha: PDrug ≠ 25%

Correct Answers: Q1(A), Q2(D), Q3(B), Q4(C), Q5(B), and Q6(D)


LO2: Explain how decision rules in hypothesis testing are established and what
factors influence them


Specific decision rules are objective standards used to determine which hypothesis – the null or
the alternative – is correct. These rules help us to evaluate the accuracy of each claim by
establishing the probability of obtaining specific outcomes if the null hypothesis is true.

     Decision rules: Objective standards used to evaluate the relative accuracy of the
     null and alternative hypotheses

Decision rules in hypothesis testing involve sampling distributions, zone(s) of rejection,
significance levels (α), and critical values. Each concept is described below.

1. Sampling Distributions

As described in previous chapters, sampling distributions are probability distributions of all
possible sample statistic outcomes over an infinite number of samples. Sampling distributions
allow us to define rare and common outcomes when making statistical inferences with only one
random sample. In the previous chapter on confidence intervals, you learned how to make
statistical inferences based on the properties of the normal curve. For example, based on the
normal curve, we can be 95% confident that our sample statistic will fall within ± 1.96 z-scores
from the true population value. In other words, we know that falling within about 2 standard
deviations of the mean is a common outcome simply because we know the properties of this
sampling distribution.

When conducting a hypothesis test, sampling distributions are used to identify rare and
common outcomes under the assumption that the null hypothesis is true. For example, if the
null hypothesis is true (Ho: µ = 0) and you are using the z-distribution because you have a large
sample (n > 50), your sample estimates of the hypothesized population mean will converge on
(center around) 0 and 95% of them will be within ± 1.96 z-scores of 0. Only about 5% of the
time will you get a sample mean more extreme than 1.96 standard deviations from any
hypothesized population value if the null hypothesis is true.
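The claim that roughly 95% of sample outcomes land within ± 1.96 standard errors when the null hypothesis is true can be checked with a quick simulation. This is a minimal sketch (not from the text): the population values, sample size, and trial count are illustrative assumptions.

```python
import random
import statistics

# Simulation sketch (illustrative values): draw many random samples from a
# population where Ho is true (mu = 0, sigma = 1) and count how often the
# sample mean lands within +/- 1.96 standard errors of the hypothesized mean.
random.seed(42)

n = 64                     # sample size (hypothetical)
se = 1 / n ** 0.5          # standard error of the mean when sigma = 1
trials = 10_000

within = 0
for _ in range(trials):
    sample = [random.gauss(0, 1) for _ in range(n)]
    z = statistics.mean(sample) / se      # standardize the sample mean
    if abs(z) <= 1.96:
        within += 1

print(within / trials)     # close to .95, as the normal curve predicts
```

The printed proportion hovers near .95: about 5% of sample means fall more than 1.96 standard errors from the true mean even when the null hypothesis is exactly true.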

The figure below gives a visual display of rare and common outcomes in a standard normal (z)
distribution under the assumption that the null hypothesis is true. Notice how the idea of rare
outcomes (red areas) and common outcomes (non-shaded areas) are visually displayed when
the null hypothesis is assumed to be true.




Visual image of rare (red shaded) and common (non-shaded)


outcomes if null hypothesis (Ho) is true






2. Zone of Rejection

The zone of rejection is the area (or areas) under a sampling distribution that represents rare
outcomes if the null hypothesis is true. These areas of rare probabilities are located in the
tail(s) of sampling distributions. They define the particular sample outcomes that will guide our
decision to reject or not reject the null hypothesis.

     Zone of rejection: The area (or areas) under a sampling distribution that
     represents rare outcomes if the null hypothesis (Ho) is true

The number of zones of rejection and their location in a sampling distribution depend entirely
on the nature of the alternative hypothesis. Here’s what you should know about the alternative
hypothesis and zone(s) of rejection:

• If the alternative hypothesis is non-directional (e.g., Ha: µ ≠ 0), there will be two zones of
rejection. These two areas will be located in the two tails of the sampling distribution.

• If the alternative hypothesis specifies a value greater than the null value (e.g., Ha: µ > 0),
there will be one zone of rejection. This area will be located in the tail of the sampling
distribution that represents standardized scores far greater than the value of the null
hypothesis (i.e., right tail).

• If the alternative hypothesis specifies a value less than the null value (e.g., Ha: µ < 0), there
will be one zone of rejection. This area will be located in the tail of the sampling distribution
that represents standardized scores far less than the value of the null hypothesis (i.e., left
tail).

The zone(s) of rejection for these different alternative hypotheses are visually displayed in the
figure below.



Zone(s) of rejection (red shaded areas) for alternative hypotheses (Ha)


a. Ha: µ ≠ 0



b. Ha: µ > 0



c. Ha: µ < 0





3. Significance Level (α) and Critical Value(s)

When testing hypotheses, we must select a particular probability level for defining rare
outcomes. This selected probability level is called the significance level of the test and is
symbolized as alpha (α). Conventional values for alpha in criminological research include
α = .05, α = .01, and α = .001.

     Significance level (α): The specific probability level associated with outcomes
     defined as rare under a null hypothesis

Researchers select a particular significance level (α) based on concerns about making either one
of two types of errors in hypothesis testing: a Type I or a Type II error. A Type I error is made
when we reject a null hypothesis that is actually true (i.e., conclude that there are differences
when no differences exist). A Type II error is made when we fail to reject a null hypothesis when
it is not true (i.e., conclude that no differences exist when, in fact, they do). The significance
level (α) represents the probability of making a Type I error. Decreasing the α-value of a test
(e.g., from .01 to .001) makes it more difficult to commit a Type I error, but the probability of
making a Type II error increases. There is always a chance that we will make either a Type I or
Type II error in any hypothesis test. We must pick our alpha level based on our willingness to
commit one type of error over another.

     Type I and Type II Errors: Errors in statistical inference caused by rejecting a true
     null hypothesis (Type I Error) or failing to reject a false null hypothesis (Type II
     Error)
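The link between the significance level and Type I errors can be demonstrated by simulation: if the null hypothesis really is true, a two-tailed test at α = .05 rejects it about 5% of the time, and that rejection rate is the Type I error probability. A minimal sketch (the sample size and trial count are illustrative assumptions, not from the text):

```python
import random
import statistics

# Simulation sketch: when Ho really is true, a two-tailed test at alpha = .05
# should reject Ho about 5% of the time. That rejection rate is the Type I
# error probability.
random.seed(1)

n, trials = 100, 5_000
critical_z = 1.96                # two-tailed critical value for alpha = .05

rejections = 0
for _ in range(trials):
    sample = [random.gauss(0, 1) for _ in range(n)]      # Ho is true: mu = 0
    z = statistics.mean(sample) / (1 / n ** 0.5)         # observed z-score
    if abs(z) > critical_z:
        rejections += 1                                  # a Type I error

print(rejections / trials)       # close to .05
```

Lowering α (say, to .01) shrinks this rejection rate, which is exactly why a smaller α makes Type I errors less likely while making Type II errors more likely.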


Researchers select an alpha level based on a number of factors (e.g., the purpose of the
research, conventional practices in previous research, and number of cases included in their
sample). As you will learn, smaller samples make it more difficult to reject the null hypothesis,
so researchers typically use larger alpha levels (α=.05) with smaller samples and smaller alpha
values (α=.001) with larger samples. However, an alpha level of .05 is a common standard for
hypothesis testing in criminal justice research.

The table below summarizes the potential outcomes of a researcher’s decisions when testing
hypotheses.

Hypothesis Testing Decisions and Errors of Inference

Researcher’s Decision    Actual Reality about Ho    Consequence of Researcher’s Decision

Accept Ho                Ho is actually true        Correct Decision
Reject Ho                Ho is actually false       Correct Decision
Reject Ho                Ho is actually true        Type I Error (α)
Accept Ho                Ho is actually false       Type II Error (β)


Critical values are standardized scores that mark the beginning point of the zone(s) of
rejection. These critical values are tied to the probability (α) of obtaining particular sample
outcomes if the null hypothesis is true.

     Critical value(s): The standardized scores that mark the boundary of the zone(s)
     of rejection in testing a null hypothesis

When the alternative hypothesis is non-directional (e.g., Ha: µ ≠ 0), there are two critical
values, one in each “tail” of the sampling distribution. There is one critical value for directional
alternative hypotheses (e.g., Ha: µ > 0; Ha: µ < 0). The particular numerical score of these
critical value(s) depends on (1) the sampling distribution used (e.g., z- or t-distribution), (2) the
alternative hypothesis (e.g., 2 critical values are needed for a non-directional Ha), and (3) the
significance level selected for the test (α).
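How the critical value(s) follow from the significance level and the direction of Ha can be sketched in a few lines. The helper function is our own illustration, not from the text; it uses Python's standard-library normal distribution, where `NormalDist().inv_cdf(p)` returns the z-score below which a proportion p of the standard normal curve falls.

```python
from statistics import NormalDist

# Sketch of how critical values follow from alpha and the direction of Ha.
z_dist = NormalDist()

def critical_values(alpha, direction):
    """direction: '>' (right-tailed), '<' (left-tailed), '!=' (two-tailed)."""
    if direction == ">":
        return (z_dist.inv_cdf(1 - alpha),)       # one cutoff, right tail
    if direction == "<":
        return (z_dist.inv_cdf(alpha),)           # one cutoff, left tail
    half = alpha / 2                              # split alpha across two tails
    return (z_dist.inv_cdf(half), z_dist.inv_cdf(1 - half))

print(critical_values(.05, "!="))   # approximately (-1.96, +1.96)
print(critical_values(.05, ">"))    # approximately (+1.64,)
print(critical_values(.01, "<"))    # approximately (-2.33,)
```

Note how a two-tailed test at α = .05 splits .025 into each tail (giving ± 1.96), while a one-tailed test puts the full .05 in one tail (giving ± 1.64).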

4. Examples of Applying Decision Rules in Hypothesis Testing

Assume that the FBI claims that the U.S. homicide rate has not changed over the last 10 years
(i.e., Ho: μyear10 – μyear0 = 0). You question this claim so you test it by counting the number of
homicides over time in a large random sample of U.S. cities. You set the significance level for
your test of the FBI’s null hypothesis at .05 (α = .05). With this information, we can now
demonstrate how decision rules define the particular areas of rare and common outcomes if
the null hypothesis is actually true.


The following three tables describe the decision rules for testing this null hypothesis under
different alternative hypotheses. Notice how each major concept (i.e., sampling distribution,
zone(s) of rejection, significance level, and critical values) is used to construct the decision rules
for testing the null hypothesis (Ho).

Decision rules for testing hypothesis when:
Ho: μyear10 – μyear0 = 0; Ha: μyear10 – μyear0 ≠ 0; large sample;
α =.05; (red shaded areas= zones for rejecting Ho)


Explanations:

1. You took a large random sample → Thus, the standard normal (z)
curve is the sampling distribution used for testing the null
hypothesis.

2. You have a non-directional alternative hypothesis
(Ha: μyear10 – μyear0 ≠ 0) → Thus, you have 2 zones of rejection (red
shaded areas in the picture above).

3. The significance level is set at .05 (α = .05) → Thus, the combined
areas in the two zones of rejection must sum to .05. When
dealing with two zones of rejection, we divide the alpha level in
half to find the area for each tail (i.e., .025 + .025 = .05).

4. The critical values are z = ± 1.96 because these standardized
scores represent areas that contain a proportion of .025 cases
falling furthest from the mean (see column C in a z-score table).
Values more extreme than these two points in a normal
distribution are considered rare outcomes (i.e., there is only a
2.5% probability that the score will fall in the upper critical region,
and a 2.5% probability that the sample statistic will fall in the
lower critical region if the null hypothesis is true).

5. Conclusion → any observed sample statistic that converts to a z-
score more extreme than ± 1.96 will lead you to reject the null
hypothesis. The red areas in the picture above represent the
sample outcomes that would lead to the rejection of the null
hypothesis in favor of the alternative hypothesis.


Decision rules for testing hypothesis when:



Ho: μyear10 – μyear0 = 0; Ha: μyear10 – μyear0 > 0; large sample;

α =.05; (red shaded area= zone for rejecting Ho)




Explanations:

1. You took a large random sample → Thus, the standard normal
(z) curve is the sampling distribution for testing the null
hypothesis.

2. You have a directional alternative hypothesis
(Ha: μyear10 – μyear0 > 0) → Thus, you have 1 zone of rejection
(red area) that falls above the value of the null hypothesis.

3. The significance level is set at .05 (α = .05) → Thus, the area in
the zone of rejection is equal to .05 (red shaded area in the
picture above).

4. The critical value is z = + 1.64 because only 5% (α = .05) of the
cases in a normal distribution fall above this standardized score
(see column C in a z-score table). Values larger than this point
in a normal
distribution are considered rare outcomes (i.e., there is only a
5% probability that the sample statistic will fall in this upper
critical region if the null hypothesis is true).

5. Conclusion → any observed sample statistic that converts to a
z-score greater than + 1.64 will lead you to reject the null
hypothesis. The red area in the picture above represents the
sample outcomes that would lead to the rejection of the null
hypothesis in favor of the alternative hypothesis.


Decision rules for testing hypothesis when:


Ho: μyear10 – μyear0 = 0; Ha: μyear10 – μyear0 < 0; large sample;
α = .05; (red shaded area= zone for rejecting Ho)



Explanations:

1. You took a large random sample → Thus, the standard normal
(z) curve is the sampling distribution for testing the null
hypothesis.

2. You have a directional alternative hypothesis
(Ha: μyear10 – μyear0 < 0) → Thus, you have 1 zone of rejection
(red area) that falls below the value of the null hypothesis.

3. The significance level is set at .05 (α = .05) → Thus, the area in
the zone of rejection is equal to .05 (red shaded area in the
picture above).

4. The critical value is z = - 1.64 because only 5% (α = .05) of the
cases in a normal distribution fall below this standardized score
(see column C in a z-score table). Values less than this point
in a normal
distribution are considered rare outcomes (i.e., there is only a
5% probability that the sample statistic will fall in this lower
critical region if the null hypothesis is true).

5. Conclusion → any observed sample statistic that converts to a
z-score less than - 1.64 will lead you to reject the null
hypothesis. The red area in the picture above represents the
sample outcomes that would lead to the rejection of the null
hypothesis in favor of the alternative hypothesis.



If you understand how these concepts are used in developing decision rules for testing
hypotheses, you are ready to conduct hypothesis tests about population means and population
proportions.


Practice Applications for Learning Objective 2 (LO2):


Answer the following questions about sampling distributions, zone(s) of rejection, significance
levels (α), and the critical values of hypothesis testing:

Q1. Under this alternative hypothesis (Ha: P > 0), there are how many zone(s) of rejection?
A. 1 rejection zone in the “left tail” of the distribution below the value of Ho
B. 1 rejection zone in the “right tail” of the distribution above the value of Ho
C. 2 rejection zones in the “two tails” of the distribution


Q2. Under this alternative hypothesis (Ha: μ1 – μ2 ≠ 0), there are how many zone(s) of rejection?
A. 1 rejection zone in the “left tail” of the distribution below the value of Ho
B. 1 rejection zone in the “right tail” of the distribution above the value of Ho
C. 2 rejection zones in the “two tails” of the distribution


Q3. Assume the following: Ho: μ =150; Ha: μ ≠ 150; large random samples, α = .05. What critical
value(s) would lead you to reject the null hypothesis in favor of the alternative hypothesis?

A. z = + 1.64
B. z = - 1.64
C. z = +1.96
D. z = ± 1.64
E. z = ± 1.96

Q4. Assume the following: Ho: μ1 – μ2 = 0; Ha: μ1 – μ2 > 0; large random samples, α = .05. What
critical value(s) would lead you to reject the null hypothesis in favor of the alternative
hypothesis?

A. z = + 1.64
B. z = - 1.64
C. z = +1.96
D. z = ± 1.64
E. z = ± 1.96



Q5. Assume the following: Ho: Pm – PF = 0; Ha: Pm – PF > 0; large random samples, α = .01. What
critical value(s) would lead you to reject the null hypothesis in favor of the alternative
hypothesis?

A. z = + 1.96
B. z = - 1.96
C. z = + 2.33
D. z = - 2.33
E. z = ± 2.33

Q6. Assume the following: Ho: P1 – P2 = 0; Ha: P1 – P2 < 0; large random samples, α = .01. What critical
value(s) would lead you to reject the null hypothesis in favor of the alternative hypothesis?

A. z = - 1.64
B. z = - 1.96
C. z = - 2.33
D. z = +2.33
E. z = ±2.33

Correct Answers: Q1(B), Q2(C), Q3(E), Q4(A), Q5(C), and Q6(C)

LO3: Apply and interpret hypothesis test results about population means and
proportions in large samples

Understanding the major concepts and principles underlying hypothesis development and
testing makes it easy to apply and interpret the hypothesis test results. Here are the general
steps in hypothesis testing:

1. Develop a null (Ho) and an alternative hypothesis (Ha)

Ex.: Ho: Pr = .60 (recidivism rate is 60%)
Ha: Pr > .60 (you expect a higher recidivism rate than claimed by the Ho)


2. Identify the critical value(s) of standardized scores that would lead to the rejection of
the null hypothesis (Ho). Remember that critical value(s) are based on the sampling
distribution (large sample = z-scores), the nature of the alternative hypothesis (non-
directional [±] or directional [>, <]), and the significance level (α) selected for the test.



Ex.: large sample, Ha: Pr > .60, α = .05 → critical value = z = +1.64.
Thus, we will reject the null hypothesis if the observed sample
statistic converts to a z-score greater than +1.64.


3. Draw a random sample and compute the necessary sample statistics.

Ex.: sample size (n), mean (x̄), proportion (p), standard deviation (s),
and the standard error (se).


4. Convert the sample statistic into the appropriate standardized score.

Ex.: zobs = (x̄ – μ) / se; tobs = (x̄ – μ) / se



5. Compare the observed value of the test statistic and the expected value under the null
hypothesis and decision rule.

Ex.:
1. Observed z-value = + 2.4 < ----- > Expected z-value = + 1.64
2. Observed t-value = + 1.5 < ------> Expected t-value = ± 2.021

6. Reach a decision about whether the sample data leads to the rejection or non-rejection of
the null hypothesis (Ho).

Ex.:
1. Based on our sample data, we reject the null hypothesis
because our observed z-value of 2.4 is greater than the
expected z-value of 1.64. A z-value of 2.4 is a rare outcome if
the null hypothesis is true. Thus, we reject the hypothesized
claim of no difference.

2. Based on our sample data, we cannot reject the null hypothesis
because our observed t-value of 1.5 does not exceed the
expected critical value of t = ± 2.021. This observed t-value is a
fairly common outcome if the null hypothesis is true. Thus, we
fail to reject the hypothesized claim of no difference.
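The six steps above can be sketched as a single function. This is an illustrative right-tailed, one-sample z-test of a population proportion for a large sample, not code from the text; the sample figures are hypothetical, and (matching the chapter's worked examples) the standard error is estimated from the sample proportion.

```python
# Sketch of the six general steps as one function: a right-tailed, one-sample
# z-test of a population proportion with a large sample.

def one_sample_proportion_ztest(p_sample, p_null, n, critical_z=1.64):
    """Test Ho: P = p_null against Ha: P > p_null at alpha = .05."""
    s = (p_sample * (1 - p_sample)) ** 0.5    # step 3: sample standard deviation
    se = s / n ** 0.5                         # estimated standard error
    z_obs = (p_sample - p_null) / se          # step 4: standardized score
    reject = z_obs > critical_z               # steps 5-6: compare and decide
    return z_obs, reject

# Hypothetical data: 68% recidivism in a sample of 150 vs. a claimed 60%.
z_obs, reject = one_sample_proportion_ztest(p_sample=.68, p_null=.60, n=150)
print(round(z_obs, 2), reject)                # z is about 2.1, so reject Ho
```

Because the observed z-score exceeds the critical value of +1.64, the hypothetical sample would lead us to reject Ho: P = .60 in favor of Ha: P > .60.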



Examples of Hypothesis Testing with Large Random Sample(s)



As mentioned earlier in this chapter, there are four different types of null hypotheses about
population means and proportions that are tested in criminological research:

1. One-sample tests of a population mean (e.g., Ho: μ = 100).
2. One-sample tests of a population proportion (e.g., Ho: Ppop = .60)
3. Two-sample tests of population means (e.g., Ho: μ1 – μ2 = 0)
4. Two-sample tests of population proportions (e.g., Ho: P1 – P2 = 0)

The logic concerning statistical inference is similar across each type of hypothesis test. To
illustrate this common logic, examples of each type of hypothesis test with large random
samples are provided below.


Example #1: One-Sample Tests of a Population Mean (Large Sample)

The FBI’s Uniform Crime Reports claim that the average financial loss in a U.S. street mugging is
$800. You don’t have a good theoretical reason for believing this claim is either too high or too
low. Instead, you just doubt the accuracy of this claim.

Due to your curiosity about the FBI’s claim, you decide to test this null hypothesis with a large
random sample of street muggings. The process you followed to conduct a large, one-sample
test of this population mean is summarized below.

1. Establish Ho and Ha.

Ho: μmug = 800
Ha: μmug ≠ 800 (you have no basis for believing this dollar loss is higher or lower than
$800—you just don’t expect it to be $800).

2. Develop critical values that define the zone of rejection based on the directionality of the Ha
and the alpha (α) level selected for this test.

Example: Ha: μmug ≠ 800 and α = .05 —> the critical value(s) = z ± 1.96 for a two-tailed test.
Thus, we will reject Ho if the obtained z-score is more extreme than z = ± 1.96 because this
outcome would occur only about .05 (5%) of the time if the null hypothesis is true (i.e., if
Ho: μmug = $800 is actually true). In other words, z-scores more extreme than +1.96 or -1.96
are rare outcomes if $800 is the true population parameter.


3. Draw a large random sample of mugging incidents (n > 50), compute the mean dollar loss (x̄)
and standard deviation (s) of the dollar loss for this sample, and estimate the standard error
(se) of the mean.

Note: Notice that the sample mean listed below (x̄ = $825) is not very different from the
claimed population mean (µ = $800), providing some immediate visual evidence that the null
hypothesis may be reasonable. A formal test will determine if these differences between the
sample and population means are statistically significant.


n = 100, x̄ = $825, s = $200,
se = s / √n = 200 / √100 = 200 / 10 = 20

4. Convert the sample mean into standardized (z) scores by using the following conversion
formula:

z = (x̄ – μmug) / se = (825 – 800) / 20 = + 1.25


5. Compare the obtained z-score (+1.25) with the expected z-score (± 1.96). Use the decision
rule for rejecting Ho based on the direction of the Ha and alpha level (see graph below).



6. Reach a conclusion about Ho and Ha:

Conclusion: Because the obtained z-score of +1.25 does not fall in the zone of rejection
(i.e., the shaded area in the graph above), you cannot reject the null hypothesis. While
the sample mean of $825 is slightly higher than the hypothesized value of $800, these
numbers are not statistically different once you take sampling error into account. Thus,
we do not reject the Ho on the basis of our sample. Our results are consistent with the
FBI’s claim that the average monetary loss from street muggings is $800.
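The arithmetic in Example #1 can be replayed in a few lines of code (a sketch using the figures from the worked example):

```python
# Replaying Example #1's arithmetic: Ho: mu = 800, Ha: mu != 800, alpha = .05,
# two-tailed critical values of +/- 1.96, with n = 100, x-bar = 825, s = 200.

n, x_bar, s = 100, 825, 200
mu_null = 800

se = s / n ** 0.5                  # 200 / 10 = 20
z_obs = (x_bar - mu_null) / se     # (825 - 800) / 20 = 1.25

reject = abs(z_obs) > 1.96
print(z_obs, reject)               # 1.25 False -> fail to reject Ho
```

The observed z-score of 1.25 never reaches the ± 1.96 cutoff, so the code reaches the same decision as the text: do not reject Ho.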







Example #2: One-Sample Test of a Population Proportion (Large Sample)



Suppose you are told that a drug treatment center is claiming an 80% success rate (i.e., that a
large proportion [.80] of their clients become “drug free” in 60 days). You have doubts about
this claim, especially because you have strong reasons to believe that these types of treatment
centers may artificially inflate their success rates to look good and get more business. So, the
null hypothesis is Ho: Pu = .80 (80% success rate) and your alternative hypothesis is Ha: Pu < .80
(i.e., the actual success rate is lower than they claim).

Here is how you would test this null hypothesis:

1. Establish Ho and Ha.

Ho: Pu = .80 (the alleged success rate is 80%)
Ha: Pu < .80 (the drug treatment center is exaggerating its success)

2. Develop critical values that define the zone of rejection based on the directionality of the Ha
and the alpha (α) level selected for this test.

Example: Ha: Pu < .80 and α = .01 —> the critical value of z = - 2.33 (it is a negative z-
score because you expect the success rate to be less than 80%). Thus, we will reject Ho if
the obtained z-score is more extreme than z = -2.33 because these outcomes would
occur less than 1% (.01) of the time if the null hypothesis is true (i.e., if Ho: Pu = 80% is
actually true). In other words, an obtained z-score more extreme than -2.33 is a rare
outcome if 80% is the true population parameter.

3. Draw a large random sample of patients (n > 50), compute the sample percent of patients
who became “drug free” (Ps), the standard deviation in this sample (s), and an estimate of
the standard error (se) for this proportion.

Note: Notice that the sample proportion (Ps = .60) listed below is substantially lower than the
population claim (Pu = .80), providing some immediate visual evidence that the null
hypothesis may not be correct. A formal test will determine if these differences between the
sample and population proportion are statistically significant.

n = 64, Ps = .60 (60%)

s = √[Ps (1 − Ps)] = √[(.60)(.40)] = .49

se = s / √n = .49 / √64 = .06
4. Convert the sample proportion into standardized (z) scores by using the following conversion
formula:
z = (Ps - Pu) / se = (.60 - .80) /.06 = - 3.33.


5. Compare the obtained z-score (-3.33) with the expected z-score (- 2.33). Use the decision rule
for rejecting Ho based on the direction of the Ha and alpha level (see graph below).




6. Reach a conclusion about Ho and Ha:

Conclusion: Because the obtained z-score of -3.33 exceeds the critical value of -2.33, we
can reject the null hypothesis in favor of the alternative hypothesis. In other words, our
sample proportion, when converted to a z-score, falls in the zone of rejection (i.e., the
shaded area in the graph above). Based on this random sample, our results indicate that
the success rate of the drug center is far lower than the claimed rate of 80%. We would
get a success rate of 60% when the true value is 80% far less than 1 time out of 1000
(.001). This is a rare outcome if the Ho is true. Therefore, we have strong statistical
evidence to reject this claim. On the basis of this sample, our best guess is that the true
success rate is closer to 60% than 80%.
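Arithmetic like this is easy to check with a short script. The sketch below (the function name is my own, not from the text) applies the chapter's formulas for a one-sample proportion test to the Example #2 numbers. Carrying full precision gives z ≈ -3.27 rather than the hand-rounded -3.33, but the decision to reject Ho at the -2.33 critical value is the same.

```python
from math import sqrt

def one_sample_prop_z(p_sample, p_null, n):
    """z = (Ps - Pu) / se, with s = sqrt(Ps(1 - Ps)) and se = s / sqrt(n)."""
    s = sqrt(p_sample * (1 - p_sample))   # sample standard deviation
    se = s / sqrt(n)                      # estimated standard error
    return (p_sample - p_null) / se

z = one_sample_prop_z(0.60, 0.80, 64)     # Example #2: n = 64, Ps = .60, Ho: Pu = .80
reject = z < -2.33                        # one-tailed decision rule at alpha = .01
print(round(z, 2), reject)                # -3.27 True
```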


Example #3: Two-Sample Tests of Differences in Population Means (Large Samples)

Are there gender differences in the length of prison sentences given to male and female
robbery offenders? Based on most criminological research and theories about differential
treatment, males are expected to get longer prison sentences than females. This theory and
research implies that the most reasonable alternative hypothesis (Ha) is that Ha: µM – µF > 0.

Here is how we would evaluate the null and alternative hypotheses about gender differences in
the length of prison sentences:


1. Establish Ho and Ha.

Ho: µm - µf = 0 (no gender differences in sentence length)
Ha: µm - µf > 0 (males > females based on past research)


2. Develop critical values that define the zone of rejection based on the directionality of the Ha
and the α level selected for this test.

Example: Ha: µM- µF > 0 and α = .05 —> the critical value of z = +1.64. Thus, we will reject
Ho if the obtained z-score is more extreme than z = +1.64 since these outcomes would
occur less than 5% (.05) of the time if the null hypothesis is true (i.e., if Ho: µM - µF = 0 is
actually true). In other words, an obtained z-score more extreme than +1.64 is a rare
outcome if there is no true difference between the average length of male and female
sentences.

3. Draw two large random samples of robbery offenders (one male and one female), compute the
   mean sentence length for males (x̄M) and females (x̄F), their sample standard deviations (sM,
   sF), sample variances (sM2, sF2), and estimate the standard error (se) for the differences in
   these means.

Note: Notice that the difference in sample means listed below (40.1 vs. 39.2 months) is not
very large, suggesting some visual support for the null hypothesis of no gender differences.
However, a formal test of these differences is necessary to evaluate the accuracy of this visual
observation.

nM = 1,500 nF = 2,000
x̄M = 40.1 months x̄F = 39.2 months
sM = 8.4 months sF = 12.1 months
sM2 = 70.6 sF2 = 146.4

se = √[(sM2 / nM) + (sF2 / nF)] = √.12 = .35


4. Given large sample sizes (nm = 1,500; nf = 2,000), we can transform the differences in sample
means into standardized (z) scores by using the following conversion formula:

z = (x̄M - x̄F) / se = (40.1 - 39.2) / .35 = + 2.57.

5. Compare the obtained z-score (+2.57) with the expected z-score (+1.64). Use the decision
rule for rejecting Ho based on the direction of the Ha and alpha level (see graph below).


6. Reach conclusion about Ho and Ha:



Conclusions: Because the obtained z-score of +2.57 exceeds the critical value of +1.64,
we can reject the null hypothesis in favor of the alternative hypothesis. In other words,
our differences in sample means, when converted to a z-score, fall in the zone of
rejection (i.e., the shaded area in the graph above). Based on these two random
samples, our results indicate that we should reject the null hypothesis that there are no
gender differences in the length of prison sentences given to robbery offenders. Instead,
our results indicate that males receive significantly longer sentences than females. The
obtained z-score of +2.57 has a probability of far less than 1% if the null hypothesis of
no gender differences was true, providing strong support for rejecting this claim.

Comments about this example

1. To find the probability of z = +2.57, look in a Normal Curve Table for this z-value and
then read over to Column C to find the probability of a value more extreme than +2.57.
This probability is .0049 (i.e., you would get a z-score of > 2.57 less than 5 times out of
1,000).

2. The difference between the male mean (40.1 months) and the female mean (39.2
months) is substantively small but statistically significant. These statistically significant
differences are due to the large sample sizes (nm=1,500, nf=2,000) used in this study.
Larger samples make it easier to reject the null hypothesis. Based on the magnitude of
these gender differences, however, most criminologists would interpret them as
substantively trivial despite their “statistical significance” based on the formal
hypothesis test.
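This example's z-statistic can be replayed in a few lines. This is an illustrative sketch (names are mine), using the chapter's standard-error formula for a difference of means; full precision gives z ≈ +2.60 versus the hand-rounded +2.57, and either way the result exceeds the +1.64 critical value.

```python
from math import sqrt

def two_sample_mean_z(mean1, var1, n1, mean2, var2, n2):
    """z = (x1bar - x2bar) / se, with se = sqrt(var1/n1 + var2/n2)."""
    se = sqrt(var1 / n1 + var2 / n2)
    return (mean1 - mean2) / se

# Example #3: male vs. female robbery sentence lengths (months)
z = two_sample_mean_z(40.1, 70.6, 1500, 39.2, 146.4, 2000)
print(round(z, 2))          # ~2.6 (the chapter's rounded se of .35 gives +2.57)
print(z > 1.64)             # True: reject Ho (one-tailed, alpha = .05)
```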


Example #4: Two-Sample Tests of Differences in Population Proportions (Large Samples)

Are there racial differences in the support for capital punishment for 1st degree murders? Both
theory and past research are inconclusive about whether you would expect Black Americans to
be more or less supportive of the death penalty (DP) than White Americans, suggesting that the
most reasonable alternative hypothesis is that Ha: Pb - Pw ≠ 0.

Here is how we would evaluate the null and alternative hypotheses about racial differences in
the support for capital punishment.

1. Establish Ho and Ha.

Ho: Pb – Pw = 0 (no racial differences in support for the death penalty)
Ha: Pb – Pw ≠ 0 (Blacks ≠ Whites in support for the death penalty)


2. Develop critical values that define the zone of rejection based on the directionality of the Ha
and the α level selected for this test.

Example: Ha: Pb – Pw ≠ 0 and α = .05 —> the critical value of z = ± 1.96. Thus, we will
reject Ho if the obtained z-score is more extreme than ± 1.96 because these outcomes
would occur less than 5% (.05) of the time if the null hypothesis is true (i.e., if Ho: Pb – Pw
= 0 is actually true). In other words, an obtained z-score more extreme than ±1.96 is a
rare outcome if there are no actual racial differences.

3. Draw two large random samples of Black and White adults, compute the sample proportions
of those who support capital punishment for Black (Pb) and White (Pw) adults, the proportion
who oppose capital punishment for Black (Qb) and White (Qw) adults, and the estimate of the
standard error (se) for the differences in these proportions.

Note: Notice that the difference in sample proportions listed below (.541 vs. .749) is quite
large, suggesting some immediate visual evidence that the null hypothesis of no racial
differences may not be true. However, a formal test of these differences is necessary to
evaluate the accuracy of this visual observation.


nb = 159 nw = 1,249
Pb = .541 (54.1%) Pw = .749 (74.9%)
Qb = .459 Qw = .251

SE = √[ PQ (nb + nw) / ((nb)(nw)) ]

where P = (nb*Pb + nw*Pw) / (nb + nw) and Q = 1 - P.
P = .726, Q = .274

4. Given large sample sizes (nb = 159; nw = 1,249), we can transform the differences in sample
proportions into standardized (z) scores by using the following conversion formula:
z = (Pb - Pw) / SE = (.541 - .749) / √[(.726)(.274)(159 + 1,249) / ((159)(1,249))]
  = -.208 / .0376 = - 5.54

5. Compare the obtained z-score (-5.54) with the expected z-scores of ± 1.96. Use the decision
rule for rejecting Ho based on the direction of the Ha and alpha level (see graph below).


6. Reach a conclusion about Ho and Ha.



Conclusions: Because the obtained z-score of -5.54 far exceeds the critical value of
±1.96, we can reject the null hypothesis in favor of the alternative hypothesis. In other
words, our differences in sample proportions, when converted to a z-score, fall in the
zone of rejection (i.e., the shaded area in the graph above). Based on these two random
samples, our results indicate that we should reject the null hypothesis that there are no
racial differences in support for capital punishment for murderers. Instead, our results
indicate that Blacks are significantly less supportive of capital punishment (Pb = 54%
Support) than Whites (Pw =75% Support). An obtained z-score of -5.54 has a probability
of far less than 1 out of a million if the null hypothesis of no racial differences was true,
providing strong support for rejecting this claim.
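The pooled standard error in this example is the easiest place to slip arithmetically. A short script (the function name is assumed, not from the text) reproduces the chapter's z of about -5.54:

```python
from math import sqrt

def two_sample_prop_z(p1, n1, p2, n2):
    """Pooled two-sample z-test for proportions, per the chapter's SE formula."""
    pooled_p = (n1 * p1 + n2 * p2) / (n1 + n2)   # P = (nb*Pb + nw*Pw) / (nb + nw)
    pooled_q = 1 - pooled_p                      # Q = 1 - P
    se = sqrt(pooled_p * pooled_q * (n1 + n2) / (n1 * n2))
    return (p1 - p2) / se

# Example #4: Black (n=159, Pb=.541) vs. White (n=1,249, Pw=.749) support
z = two_sample_prop_z(0.541, 159, 0.749, 1249)
print(round(z, 2))              # -5.54
print(abs(z) > 1.96)            # True: reject Ho (two-tailed, alpha = .05)
```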

Practice Applications for Learning Objective 3 (LO3):


Answer the following questions about testing hypotheses about population means and
proportions with large random samples:

Q1. The null hypothesis is that the average U.S. bail amount is $15,000 (Ho: μ$ = $15,000). The sample
mean from a random sample of 100 criminal defendants is $8,000 (x̄$ = $8,000). Based on this
information alone, what do you think a formal test of this null hypothesis would conclude?

A. reject the null hypothesis
B. do not reject the null hypothesis
C. can’t tell from this information

Q2. Government documents and previous research contend that 95% of felony cases result in a guilty
plea (Ho: PPopulation= 95%). In a random sample of 120 felonies, you find that 93% of them
resulted in a guilty plea (PSample = 93%). Based on this information alone, what do you think a
formal test of this null hypothesis would conclude?

A. reject the null hypothesis
B. do not reject the null hypothesis
C. can’t tell from this information

Q3. The null hypothesis is that there are no gender differences in the years of employment in adult
probation (Ho: μM – μF = 0). The mean employment period from two large random samples are
10.4 years for men and 3.5 years for women (x̄M = 10.4, x̄F = 3.5). Based on this information
alone, what do you think a formal test of this null hypothesis would conclude?

A. reject the null hypothesis
B. do not reject the null hypothesis
C. can’t tell from this information



Q4. The critical value of the expected test statistic based on the alpha level and nature of the
alternative hypothesis is Z = +1.64. Your obtained z-score based on your sample statistics is
Z = +1.61. What decision do you make in this case about the null hypothesis?
A. Reject the null hypothesis in favor of the alternative hypothesis
B. Do not reject the null hypothesis
C. Reject both the null and alternative hypotheses


Q5. The critical value of the expected test statistic based on the alpha level and nature of the
alternative hypothesis is Z = - 2.33. Your obtained z-score based on your sample statistics is
Z = - 2.54. What decision do you make in this case about the null hypothesis?
A. Reject the null hypothesis in favor of the alternative hypothesis
B. Do not reject the null hypothesis
C. Reject both the null and alternative hypotheses

Q6. The critical value of the expected test statistic based on the alpha level and nature of the
alternative hypothesis is Z = ± 1.96. Your obtained z-score based on your sample statistics is
Z = + 2.74. What decision do you make in this case about the null hypothesis?
A. Reject the null hypothesis in favor of the alternative hypothesis
B. Do not reject the null hypothesis
C. Reject both the null and alternative hypotheses

Correct Answers: Q1(A), Q2(B), Q3(A), Q4(B), Q5(A), and Q6(A)

LO4: Apply and interpret hypothesis test results about population means and
proportions in small samples
The process of hypothesis testing for small samples is similar to hypothesis testing for large
samples. The primary difference is that the t-distribution is used as the basis for establishing
standardized scores and defining rare and common outcomes if the null hypothesis is true.

The table below provides an abbreviated t-distribution table and the critical t-values that define
the zone(s) of rejection associated with the null hypothesis. This abbreviated t-table contains
critical values for both two- and one-tailed tests (i.e., 2-tailed tests are used when Ha is non-
directional [Ha: μ ≠ 80]; 1-tailed tests are used when Ha is directional [Ha: μ > 80 or Ha: μ < 80]).




Critical t-values for selected values of alpha (α) and degrees of freedom (df)

        One-Tailed:   α=.05    α=.025   α=.005
        Two-Tailed:   α=.10    α=.05    α=.01
  df
  10                  1.812    2.228    3.169
  15                  1.753    2.131    2.947
  25                  1.708    2.060    2.787
  40                  1.684    2.021    2.704
  50                  1.676    2.009    2.678
  100                 1.660    1.984    2.626
  200                 1.657    1.978    2.613


1. Examples of Hypothesis Testing with Small Random Sample(s)

The logic concerning statistical inference is similar across each type of hypothesis test for small
samples. You begin by stating the null and alternative hypotheses, set up decision rules for
determining when you reject the null hypothesis, draw random sample(s), convert the sample
statistics (e.g., means, proportions) to standardized t-values, compare your observed t-values
with the expected t-values established by your decision rules, and then reach a decision about
whether or not to reject the null hypothesis based on your sample data. Only the method used
to estimate the standard error (i.e., the measure of sampling error) differs across these
different statistical tests.
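The decision rule at the heart of this procedure is the same whether the standardized score is a z or a t. The following small helper (the function name is my own) captures it, replaying the chapter's earlier large-sample decisions:

```python
def reject_null(obtained, critical, tail):
    """Chapter's decision rule: is the obtained score more extreme than the critical value?
    tail: 'left' (Ha: <), 'right' (Ha: >), or 'two' (Ha: not equal)."""
    if tail == 'left':
        return obtained < critical        # critical value is negative here
    if tail == 'right':
        return obtained > critical
    return abs(obtained) > abs(critical)  # two-tailed test

print(reject_null(-3.33, -2.33, 'left'))   # True: reject Ho (drug treatment claim)
print(reject_null(+2.57, +1.64, 'right'))  # True: reject Ho (sentence-length gap)
print(reject_null(-5.54, 1.96, 'two'))     # True: reject Ho (death penalty support)
```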

To illustrate this common logic, examples of each type of hypothesis test with small random
samples are provided below.










Example #1: One-Sample Test of Population Mean (Small Sample)

Suppose a Drug Enforcement Agency (DEA) claims that the average teenager smokes 35 ounces
of marijuana per year (note: 35 ounces is about 2.2 pounds or 1 kilo). You have serious doubts
about this claim, thinking that the DEA might be exaggerating this amount to scare people
about drug-crazed teenagers. Based on previous studies and theories about the social
construction of crime problems, a reasonable alternative hypothesis is that the average amount
of marijuana smoked per year by teenagers is far less than 35 ounces. This information
translates into an alternative hypothesis (Ha) of Ha: µmj < 35.

Here is how you would test this hypothesis using a small sample (since you don’t have a lot of
money to conduct this research).

1. Establish Ho and Ha.

Ho: μmj = 35 ounces of marijuana smoked per year
Ha: μmj < 35 ounces (DEA is exaggerating their claim)

2. Develop critical values that define the zone of rejection based on the directionality of the Ha,
the α level selected for this test, and the degrees of freedom. You have a small sample
(n=41) so you are going to develop critical t-values based on the t-distribution.

Example: Ha: μmj < 35 and α = .05 —> the critical value of t < -1.684 for a one-tailed test
at 40 degrees of freedom (df=n-1=40). Thus, we will reject Ho if the obtained t-score is
more extreme than t = -1.684 because this outcome would occur only about .05 (5%) of
the time if the null hypothesis is true (i.e., if Ho: μmj = 35 ounces is actually true). In other
words, a t-score more extreme than -1.684 is considered a rare outcome if the average
teenager really smokes 35 ounces of marijuana a year.

3. Draw a small random sample of teenagers (n = 41), compute the mean ounces of marijuana
   consumed (x̄mj) and standard deviation of drug consumption (s) for this sample, and
   estimate the standard error (se) of the mean.

   Note: Notice that the sample mean listed below (x̄ = 19 ounces) is very different from the
   claimed population mean (μ = 35 ounces), providing some immediate visual evidence that
   the null hypothesis may not be correct. A formal test will determine if these differences
   between the sample and population means are statistically significant and thus lead to the
   rejection of the null hypothesis.


   n = 41; x̄mj = 19 ounces; s = 8 ounces; se = s / √n = 8 / √41 = 8 / 6.4 = 1.25


4. Transform the sample mean into a standardized (t) score using the following conversion
formula:
t = (x̄mj - μmj) / se = (19 - 35) / 1.25 = - 12.8

5. Compare the obtained t-score (-12.8) with the expected t-score (-1.684). Use the decision
rule for rejecting Ho based on the direction of the Ha, the alpha level, and 40 degrees of
freedom (n-1) (see graph below).



6. Reach a conclusion about Ho and Ha.

Conclusions: Because the obtained t-score of -12.8 falls within the zone of rejection (i.e.,
the shaded area in the graph above), we reject the null hypothesis. As expected based on
our alternative hypothesis (Ha), our sample mean of 19 ounces is far less than the DEA’s
claim of 35 ounces, translating into a t-score of -12.8 that has a probability of less than 1
in 10 million of occurring if the Ho is true. Thus, even with this small sample, our sample
results strongly suggest that the DEA’s claim is not likely to be true. Based on our sample
mean of 19 ounces, our best single guess is that the DEA overestimated the average
yearly teenage consumption of marijuana by about 16 ounces (1 pound).
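The small-sample t-statistic follows the same mechanics as the large-sample z; only the reference distribution changes. A minimal sketch of this example's computation (names are mine):

```python
from math import sqrt

def one_sample_mean_t(xbar, mu0, s, n):
    """t = (xbar - mu0) / (s / sqrt(n)); compare to a t critical value at df = n - 1."""
    se = s / sqrt(n)
    return (xbar - mu0) / se

t = one_sample_mean_t(19, 35, 8, 41)   # Example #1: n = 41, xbar = 19, Ho: mu = 35
print(round(t, 1))                     # -12.8
print(t < -1.684)                      # True: reject Ho (one-tailed, alpha = .05, df = 40)
```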


Example #2: One-Sample Test of Population Proportion (Small Sample)

Suppose that a USA Today poll claims that 61% of American homes are armed with guns. Given
that gun ownership is a sensitive topic and the previous research provides mixed results, you
don’t know whether this claim is too high or too low. Thus, the most reasonable alternative
hypothesis is that Ha: Pgun ≠ 61%.

Here is how you would test this alleged claim about the prevalence of gun ownership in U.S.
households:

1. Establish Ho and Ha.

Ho: Pgun = .61 (61% of U.S. homes allegedly have guns)
Ha: Pgun ≠ .61 (we don’t know if it’s higher or lower)


2. Develop critical t-values that define the zone of rejection based on the directionality of the
Ha, the α level selected for this test, and the degrees of freedom (df=n-1).

Example: Ha: Pgun ≠ .61, α = .01, df=25 (n-1) —> the critical value of t = ± 2.787. Thus, we
will reject Ho if the obtained t-score from our sample is more extreme than t = ±2.787
because these outcomes would occur less than 1% (.01) of the time if the null
hypothesis is true (i.e., if Ho: Pgun = 61% is actually true). In other words, an obtained t-
score more extreme than either -2.787 or +2.787 is considered a rare outcome if 61% of
American homes are actually armed with guns.

3. Draw a small random sample of households (n = 26), compute the sample proportion who
own guns (Ps), the standard deviation in this sample (s), and an estimate of the standard
error (se) for this proportion.

Note: Notice that the sample proportion listed below (PS = .54) is somewhat lower than the
claimed population proportion of American households with guns (PP = .61), providing some
immediate visual evidence that the null hypothesis may not be correct. A formal test will
determine if these differences between the sample and population proportions are
statistically significant and thus lead to the rejection of the null hypothesis.

n = 26, Ps = .54 (54%)

s = √[Ps (1 − Ps)] = √[(.54)(.46)] = .50; se = s / √n = .50 / √26 = .10

4. Transform the sample proportion into a standardized (t) score by using the following
conversion formula:

t = (Ps - Pgun) / se = (.54 - .61) / .10 = -.70.

5. Compare the obtained t-score (-.70) with the expected t-score (±2.787). Use the decision rule
for rejecting Ho based on the direction of the Ha, the alpha level, and degrees of freedom
(see graph below).


6. Reach a conclusion about Ho and Ha:



Conclusions: Because the obtained t-score of -.70 does not exceed the critical values of
±2.787, we cannot reject the null hypothesis in favor of the alternative hypothesis. In
other words, our sample proportion, when converted to a t-score, falls outside of the
zone of rejection (i.e., not within the shaded area in the graph above), so it is a common
outcome if the null hypothesis is true. Based on this small random sample, the
differences between our sample value of 54% and the alleged population value of 61%
are not statistically significant once we adjust for sampling error. Thus, our sample results
provide no evidence to dispute the claim by USA Today that 61% of American
households are armed with guns.
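For the gun-ownership example, the same formulas with n = 26 can be scripted as follows (an illustrative sketch; names are mine). Full precision gives t ≈ -0.72 versus the chapter's rounded -.70; either way the test fails to reject.

```python
from math import sqrt

def one_sample_prop_t(ps, p0, n):
    """t = (Ps - P0) / se, with s = sqrt(Ps(1 - Ps)) and se = s / sqrt(n); df = n - 1."""
    s = sqrt(ps * (1 - ps))
    se = s / sqrt(n)
    return (ps - p0) / se

t = one_sample_prop_t(0.54, 0.61, 26)   # Example #2: n = 26, Ps = .54, Ho: Pgun = .61
print(round(t, 2))                      # -0.72 (chapter's rounded se of .1 gives -.70)
print(abs(t) > 2.787)                   # False: do not reject Ho (two-tailed, alpha = .01, df = 25)
```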


Example #3: Two-Sample Test of Differences in Population Mean (Small Samples)

Are there gender differences in the number of Federal Trade Commission (FTC) violations by
male and female stock traders? Based on most criminological research and theories about
gender differences in criminal behavior, male stockbrokers are expected to have more FTC
violations than female stockbrokers. Thus, theory and research imply that the most reasonable
alternative hypothesis (Ha) is that Ha: µm – µf > 0.

Here is how we would evaluate the null and alternative hypotheses about gender differences in
criminal behavior.

1. Establish Ho and Ha.

Ho: µm - µf = 0 (no gender differences in FTC violations)
Ha: µm - µf > 0 (males > females based on past research)

2. Develop critical t-values that define the zone of rejection based on the directionality of the
Ha, the α level selected for this test, and the number of degrees of freedom in the combined
samples (df = (nm -1) + (nf - 1)).

Example: Ha: µm - µf > 0, α = .05, nm = 15 and nf = 20 (df = (15-1) + (20-1) = 33) —> the
critical value of t = +1.692. Thus, we will reject Ho if the obtained t-score is more
extreme than t = +1.692 because these outcomes would occur less than 5% (.05) of the
time if the null hypothesis is true (i.e., if Ho: µm - µf = 0 is actually true). In other words,
an obtained t-score more extreme than +1.692 is considered a rare outcome if there are
no actual gender differences in FTC violations.






3. Draw two small random samples of FTC violators (one male and one female), compute the
mean violations for males (x̄M) and females (x̄F), their sample standard deviations (sM, sF),
sample variances (sM2, sF2), and estimate the standard error (SE) for the differences in these
means. Calculate the degrees of freedom (df).

Note: Notice that the difference in sample means listed below (24 vs. 22 violations) is not very
large, suggesting some visual support for the null hypothesis of no gender differences.
However, a formal test of these differences is necessary to evaluate the accuracy of this visual
observation.

nm = 15 nf = 20
x̄M = 24 violations x̄F = 22 violations
sm = 8.4 violations sf = 12.1 violations
sm2 = 70.6 sf2 = 146.4

se = √[(sm2 / nm) + (sf2 / nf)] = √12 = 3.5

df = (nm - 1) + (nf - 1) = (15-1) + (20-1) = 33

4. Given small sample sizes (nm = 15; nf = 20), we can transform the differences in sample means
into standard (t) scores by using the following conversion formula:

t = (x̄M - x̄F) / se = (24 - 22) / 3.5 = +.57

5. Compare the obtained t-score (+ .57) with the expected t-score (+1.692). Use the decision
rule for rejecting Ho based on the direction of the Ha, alpha level, and the degrees of
freedom (see graph below).



6. Reach conclusion about Ho and Ha:

Conclusions: Because the obtained t-score of + .57 does not exceed the critical t-value
of +1.692, we cannot reject the null hypothesis in favor of the alternative hypothesis. In
other words, our differences in sample means (24 vs. 22) are not significantly different


once we take into account sampling error (as measured by the standard error). The
observed t-value of + .57 falls within the outcome area that is considered common if the
Ho is true (i.e., the non-shaded area in the t-distribution above). Based on these two
random samples, our results provide no evidence to reject the null hypothesis that men
and women stock traders have similar numbers of FTC violations. Men in our sample did
receive slightly more violations (24 vs. 22 for women), but these gender differences
were statistically insignificant and not greater than what would be expected by chance
alone if the null hypothesis were true.
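The two-sample t computed above can be checked with a short sketch (names are mine, not from the text). Full precision gives t ≈ +0.58 versus the hand-rounded +.57; both fall well short of the +1.692 critical value.

```python
from math import sqrt

def two_sample_mean_t(m1, v1, n1, m2, v2, n2):
    """t = (m1 - m2) / se with se = sqrt(v1/n1 + v2/n2); df = (n1-1) + (n2-1)."""
    se = sqrt(v1 / n1 + v2 / n2)
    df = (n1 - 1) + (n2 - 1)
    return (m1 - m2) / se, df

# Example #3: FTC violations, male (n=15) vs. female (n=20) stock traders
t, df = two_sample_mean_t(24, 70.6, 15, 22, 146.4, 20)
print(round(t, 2), df)        # ~0.58 33 (chapter's rounded se of 3.5 gives +.57)
print(t > 1.692)              # False: do not reject Ho (one-tailed, alpha = .05)
```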



Example #4: Two-Sample Tests of Differences in Population Proportions (Small Samples)

Suppose you are interested in determining whether there are political party differences in
public support for the insanity defense in criminal trials (i.e., verdicts of "not guilty by reason of
insanity" that mandate civil commitment rather than criminal sanctions). Based on prior research
and theories about political ideology, you have a general idea that Republicans will have
different attitudes than Democrats, but you don’t know for sure whether support for the
insanity defense will be more prevalent in one political party than the other. Under these
conditions, the most reasonable alternative hypothesis (Ha) is that Ha: PDem – PRep ≠ 0.

Here is how you would test the null hypothesis of no political party differences in the
proportion of citizens who support the insanity defense using two small random samples.


1. Establish Ho and Ha.

Ho: PDem - PRep = 0 (no differences between the proportions of Democrats [PDem] and
Republicans [PRep] who support the insanity defense).

Ha: PDem - PRep ≠ 0 (There are differences by political party in their support for the insanity
defense, but we're not absolutely sure if these proportions should be higher for
Democrats or Republicans)

2. Develop critical t-values that define the zone of rejection based on the directionality of the
Ha, the α level selected for this test, and the degrees of freedom.

Example: Ha: PDem- PRep ≠ 0, α = .05, nDem = 13, nRep = 37, df = (13-1) + (37-1) = 48 -> the
critical value of t = ± 2.011. Thus, we will reject Ho if the obtained t-score is more
extreme than ± 2.011 because these outcomes would occur less than 5% (.05) of the
time if the null hypothesis is true (i.e., if Ho : PDem - PRep = 0 is actually true). In other
words, an obtained t-score more extreme than either -2.011 or +2.011 is considered a
rare outcome if there are no actual differences between political parties.


3. Draw two small random samples of Democrats and Republicans (nDem = 13; nRep =37),
compute the sample proportions who support the insanity defense among Democrats (Pdem)
and Republicans (PRep), the proportion who don’t support the insanity defense among
Democrats (QDem) and Republicans (QRep), and the estimate of the standard error (se) for the
differences in these proportions.

Note: Notice that the difference in sample proportions listed below (.31 vs. .16) is somewhat
large, suggesting some immediate visual evidence that the null hypothesis of no political
party differences in support for the insanity defense may not be true. However, a formal test
of these differences is necessary to evaluate the accuracy of this visual observation.

nDem = 13 nRep = 37
PDem = .31 (31%) PRep = .16 (16%)
QDem = .69 QRep = .84

se = √[ PQ (nDem + nRep) / ((nDem)(nRep)) ]

where P = (nDem*PDem + nRep*PRep) / (nDem + nRep) and Q = 1 - P.
P = .20, Q = .80

4. Given small sample sizes (nDem = 13; nRep = 37), we can transform the differences in sample
proportions into standardized (t) scores by using the following conversion formula:

t = (PDem - PRep) / se

t = (.31 - .16) / √[(.20)(.80)(13 + 37) / ((13)(37))] = .15 / .129 = +1.16

5. Compare the obtained t-score (+1.16) with the expected t-scores (± 2.011). Use the decision
   rule for rejecting Ho based on the direction of the Ha, alpha level, and 48 degrees of
   freedom (see graph below).


6. Reach a conclusion about Ho and Ha.

Conclusions: Because the obtained t-score of +1.16 does not exceed the critical t-values
of ±2.011, we cannot reject the null hypothesis in favor of the alternative hypothesis. In
other words, our difference in sample proportions, when converted to a t-score, falls
outside the zone of rejection (i.e., not within the shaded area in the graph above).
Democrats in these samples were more supportive of the insanity defense (PDem = 31%)
than Republicans (PRep = 16%), but with samples this small the standard error is large
(se = √[(.20)(.80)(50/481)] ≈ .13), so a difference of this size is a common outcome if the
null hypothesis is true. Our results therefore provide no evidence to reject the null
hypothesis of no political party differences in public support for the insanity defense.

2. Statistical Significance vs. Substantive Significance

Procedures for testing null hypotheses in criminological research are virtually identical across
statistical tests for large and small samples. However, sample size strongly affects the results of
hypothesis testing. Even extremely small differences between sample values and the
hypothesized population values become statistically significant in very large samples (n >
10,000). In contrast, somewhat large differences between sample values and their
hypothesized population values may not be large enough to reject the Ho in small
samples (n < 50).

These effects of sample size on hypothesis test results are often discussed as the difference
between statistical significance and substantive significance.

Statistical significance: A hypothesis test outcome that indicates a true difference between
the observed and hypothesized values; the observed difference is not likely to occur based
on chance alone
For statisticians, statistical significance refers to testing outcomes in which the observed
sample value has a lower probability of occurrence than the pre-established significance level
(α), thereby leading to the rejection of the null hypothesis. In these cases, the difference
between our sample value and the hypothesized population value is considered "statistically
significant". A statistically significant finding only indicates that a true difference exists; it
cannot tell us whether this difference is meaningful. In other words, statistical tests evaluate
whether differences are empirically observed, but these differences may not be important.

In contrast, substantive significance refers to the importance of observed differences. It
indicates whether differences between observed sample values and hypothesized values are
large enough to be meaningful to criminologists or the public.

Substantive significance: The degree to which statistical results are meaningful for
criminologists or the general public

For example, if you find that the average salary of police officers in your sample is $53,800 and the


hypothesized population value is $54,000, would you consider a $200 difference an important
substantive difference? Probably not. However, if your sample value is only $48,000, most
people would consider this $6,000 difference ($48,000 - $54,000) to be substantively important
even if it is not statistically significant.

The difference between statistical and substantive significance is important to consider when
interpreting hypothesis test results. Large differences in small samples may be substantively
important but statistically insignificant, whereas small differences in large samples may be
statistically significant but substantively trivial. The best practice is to balance these two
concerns. Do not use statistical significance as the only indicator of substantive differences,
especially when you have very small or very large samples.
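To see the sample-size effect in numbers, here is a minimal Python sketch (not part of the text) that computes a one-sample z statistic for the salary example. The $5,000 standard deviation and the sample sizes are assumed values chosen purely for illustration:

```python
from math import sqrt

def z_statistic(sample_mean, hypothesized_mean, sample_sd, n):
    """One-sample test statistic: z = (sample mean - hypothesized mean) / (sd / sqrt(n))."""
    return (sample_mean - hypothesized_mean) / (sample_sd / sqrt(n))

# Assumed standard deviation of $5,000 (not given in the text).
# The same $200 difference is trivial in a sample of 25 ...
z_small = z_statistic(53_800, 54_000, 5_000, 25)        # -0.2
# ... but lies deep in the zone of rejection when n = 10,000.
z_large = z_statistic(53_800, 54_000, 5_000, 10_000)    # -4.0
```

With α = .05 and a two-tail test (critical z = ±1.96), only the large-sample result is statistically significant, even though the $200 difference is substantively trivial in both cases.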

Practice Applications for Learning Objective 4 (LO4):


Answer the following questions about testing hypotheses about population means and
proportions with small random samples:

Q1. The null hypothesis is that the average U.S. citizen spends $1,000 per year on crime prevention
equipment and activities (Ho: μ = $1,000). The mean annual expenditure from a small random
sample of 25 citizens was $985 (x̄ = $985). Based on this information alone, what do you think a
formal test of this null hypothesis would conclude?

A. reject the null hypothesis
B. do not reject the null hypothesis
C. can’t tell from this information

Q2. Previous research indicates that 60% of U.S. residents are afraid to walk alone in their
neighborhood at night (Ho: Ppopulation = 60%). In a small random sample of 15 residents, 25% of
them reported being afraid in their neighborhood at night (Psample = 25%). Based on this
information alone, what do you think a formal test of this null hypothesis would conclude?

A. reject the null hypothesis
B. do not reject the null hypothesis
C. can’t tell from this information

Q3. The null hypothesis is that there are no gender differences in the prevalence of excessive force
complaints against law enforcement officials (Ho: μM – μF = 0). Based on two small random
samples of 15 male and 20 female officers, the mean number of excessive force complaints per
officer was 1.3 for men and 1.1 for women (x̄M = 1.3, x̄F = 1.1 complaints). Based on this
information alone, what do you think a formal test of this null hypothesis would conclude?

A. reject the null hypothesis
B. do not reject the null hypothesis
C. can’t tell from this information



Q4. The critical value of the expected test statistic based on the alpha level and nature of the
alternative hypothesis is t = +1.68. Your obtained t-score based on your sample statistics is
t = +1.61. What decision do you make in this case about the null hypothesis?
A. reject the null hypothesis in favor of the alternative hypothesis
B. do not reject the null hypothesis
C. reject both the null and alternative hypotheses


Q5. The critical value of the expected test statistic based on the alpha level and nature of the
alternative hypothesis is t = - 2.45. Your obtained t-score based on your sample statistics is
t = - 2.75. What decision do you make in this case about the null hypothesis?
A. reject the null hypothesis in favor of the alternative hypothesis
B. do not reject the null hypothesis
C. reject both the null and alternative hypotheses

Q6. The critical value of the expected test statistic based on the alpha level and nature of the
alternative hypothesis is t = ± 2.11. Your obtained t-score based on your sample statistics is
t = - 2.15. What decision do you make in this case about the null hypothesis?
A. reject the null hypothesis in favor of the alternative hypothesis
B. do not reject the null hypothesis
C. reject both the null and alternative hypotheses

Correct Answers: Q1(B), Q2(A), Q3(B), Q4(B), Q5(A), and Q6(A)
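The decision rules applied in Q4 through Q6 can be expressed as a short Python sketch (our illustration, not part of the text; the function name and the two-tail flag are our own):

```python
def reject_null(t_obtained, t_critical, two_tailed=False):
    """Apply the hypothesis-testing decision rule.

    One-tail tests: reject Ho when the obtained t-score reaches the critical
    value in the predicted direction. Two-tail tests: reject Ho when the
    obtained t-score reaches the critical value in either direction.
    """
    if two_tailed:
        return abs(t_obtained) >= abs(t_critical)
    if t_critical >= 0:
        return t_obtained >= t_critical
    return t_obtained <= t_critical

print(reject_null(+1.61, +1.68))                  # Q4: False -> do not reject Ho
print(reject_null(-2.75, -2.45))                  # Q5: True  -> reject Ho
print(reject_null(-2.15, 2.11, two_tailed=True))  # Q6: True  -> reject Ho
```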



Review

Use these reflection questions and directed activities to help master the material presented in
this chapter.

What?
Can you define the critical concepts covered in this chapter? Describe each of these concepts
in your own words and give at least two examples of each.
• Hypothesis testing
• Types of hypothesis tests
• Null hypotheses (Ho)
• Alternative hypotheses (Ha)
• One-tail and two-tail tests
• Decision rules
• Type I and Type II errors
• Zone of rejection
• Significance level (alpha)
• Critical value(s) of test statistics
• Statistical vs. Substantive Significance


How?

Answer the following questions and complete the associated activities:
• How are null hypotheses used in hypothesis testing? Describe the purpose of a null
hypothesis.
• How do decision rules influence hypothesis test outcomes? Describe the decision rules used
in hypothesis testing and explain their impact on hypothesis tests.
• How are hypothesis tests conducted for large samples? Describe the steps involved.
• How are hypothesis tests conducted for small samples? Describe the steps involved.

When?

Answer the following questions.
• When should we use null hypotheses instead of alternative hypotheses?
• When do we make different decision rules as part of hypothesis testing?
• When should we use the z-distribution in hypothesis testing?
• When should we use the t-distribution in hypothesis testing?

Why?
Think about the material you learned in this chapter and consider the following question.
• Why does knowing this material about hypothesis testing help you do your own
criminological research and identify distorted claims and misuses of data by other
researchers, government agencies, and the mass media?


CHAPTER 10:
Measuring the Association Between Two Qualitative Variables

This chapter begins our examination of the methods for analyzing the nature of the relationship
between two variables. It focuses on using contingency tables to measure the association
between two qualitative variables. Understanding the concepts and learning objectives in this
chapter will improve your analytical skills for evaluating relational and causal research
questions in criminology.


Concepts

- Bivariate association
- Contingency table
- Joint frequencies
- Marginal frequencies
- Table of total percentages
- Table of row percentages
- Table of column percentages
- Independent variable
- Dependent variable
- Chi-square test
- Statistical independence
- Chi-square distribution

Learning Objectives (LO)

LO1: Construct and interpret a bivariate contingency table

LO2: Construct and interpret percentage contingency tables to assess bivariate relationships

LO3: Visually assess the strength of associations in bivariate contingency tables

LO4: Calculate and interpret observed and expected chi-square values for testing the
hypothesis of statistical independence


Introduction

In the previous chapter, you learned how sample data is used to test hypotheses about
population parameters. One such hypothesis involves a two-sample test of differences in
population proportions (e.g., Ho: P1 - P2 = 0). When we reject this particular null hypothesis, we
conclude that significant differences exist between two groups. Another way to describe this
significant result is to say that a strong bivariate association exists between these two
qualitative variables.

In this chapter, we describe the basic principles, concepts, and statistical method for assessing
the nature and strength of relationships between two qualitative variables. Contingency table
analysis is the statistical technique used to evaluate these bivariate relationships. We
demonstrate how to (1) construct and interpret contingency tables, (2) visually assess the
nature and strength of causal relationships in these tables, and (3) formally test hypotheses
about the relationship between two qualitative variables.

We begin by defining the basic principles and concepts that underlie bivariate contingency table
analyses.

LO1: Construct and interpret a bivariate contingency table



Many relational and causal research questions explore relationships between two qualitative
variables. Examples include:

• Are there gender differences in the risks of being convicted for motor vehicle theft?

Variables Gender Conviction
Attributes Male, Female Yes, No

• Are there racial differences in whether a defendant’s charges are dismissed?

Variables Race Dismissal
Attributes Black, White, Other Yes, No

• Does support for capital punishment vary by region?

Variables Support Region
Attributes Yes, No North, South

• Do novice and professional burglars use different methods to break into residential
properties?

Variables Experience Method
Attributes Novice, Professional Force, Non-force

• Are convicted drunk drivers or convicted drug addicts more likely to be rearrested after
their conviction?

Variables Offender Type Rearrested
Attributes Drunk Driver, Drug Addict Yes, No


In each of these research questions, criminologists are asking whether there is a bivariate
association between two qualitative variables. If men and women are equally likely to be
convicted when charged with auto theft, there is no bivariate association between gender and
conviction. But, if 80% of male car thieves and only 30% of female car thieves were convicted
for their crimes, then we would find a bivariate association between gender and the likelihood
of conviction for car thieves (i.e., men are more likely than women to be convicted).
Remember, we are examining differences in proportions (P), rather than differences in means
(μ), when dealing with qualitative variables.

Bivariate association: The degree to which two variables are related to or associated with
each other

Different statistical techniques can be used to measure the strength of bivariate associations.
The particular type of statistical technique, however, depends on each variable's level of
measurement. A contingency table analysis is the appropriate statistical method to explore a
bivariate association between two qualitative variables.

1. The Structure of Contingency Tables

A contingency table displays the relationship between two or more qualitative variables. This
table represents a cross tabulation, which is a statistical procedure used to summarize
qualitative or categorical data.

Contingency table: A table that represents the cross tabulation of two or more qualitative
variables

The table below shows the basic structural form of a two-variable contingency table. If each
variable has two attributes (e.g., Gender: Male, Female; Conviction: Yes, No), it will have two
rows (labeled R1 and R2) and two columns (labeled C1 and C2). The intersection of a particular
row and a particular column is called a “cell” of the table. For example, the cell “r1c2” involves
cases that have the attributes of row 1 (r1) and column 2 (c2). All of the other cells of this table
are similarly defined.

Structure of Contingency Tables

C1 C2 Row Total

R1 r1c1 r1c2 NR1

R2 r2c1 r2c2 NR2

Column Total NC1 NC2 Ntotal





To illustrate how to create a contingency table, let’s say that you are working with a small
dataset of ten sentencing cases. For each case, you have data on gender and conviction. These
data are presented in the table below.

Case # Gender Conviction
1 Male Yes
2 Male No
3 Female No
4 Male Yes
5 Male Yes
6 Female Yes
7 Female No
8 Female No
9 Male Yes
10 Male Yes

To conduct a cross tabulation and create a contingency table for this raw data, the variable
attributes are placed in the R1, R2 row headings (Conviction: Yes, No) and C1, C2 column
headings (Gender: Male, Female). Then, the joint frequencies of these attributes are
calculated. A joint frequency represents the number of cases that possess a particular
combination of attributes. Each cell in a contingency table reflects the number of cases with
these particular joint combinations of attributes.

Joint frequency: The number of cases that possess a particular combination of variable
attributes

In the following table, the joint frequency of Male/Yes is equal to 5 because there are five cases
in your sentencing dataset above in which the person was both male and convicted. The other
cell frequencies are derived from the fact that 1 male was not convicted (Male/No), 1 female
was convicted (Female/Yes), and 3 females were not convicted (Female/No).


Contingency Table of the Relationship Between Gender and Conviction

Male Female Row Total

Yes 5 1 6

No 1 3 4

Column Total 6 4 10
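For readers who want to verify the cross tabulation by computer, here is a minimal Python sketch (not part of the text) that tallies the joint and marginal frequencies from the ten cases above:

```python
from collections import Counter

# The ten sentencing cases listed above, as (gender, conviction) pairs.
cases = [("Male", "Yes"), ("Male", "No"), ("Female", "No"), ("Male", "Yes"),
         ("Male", "Yes"), ("Female", "Yes"), ("Female", "No"), ("Female", "No"),
         ("Male", "Yes"), ("Male", "Yes")]

joint = Counter(cases)                                  # joint (cell) frequencies
row_marginals = Counter(conv for _, conv in cases)      # Conviction: Yes / No
col_marginals = Counter(gender for gender, _ in cases)  # Gender: Male / Female

print(joint[("Male", "Yes")])   # 5 convicted males
print(row_marginals["Yes"])     # 6 convictions
print(col_marginals["Male"])    # 6 males
print(sum(joint.values()))      # 10 cases in total (Ntotal)
```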





The number of rows and columns, or the number of attributes for each variable, determines the
number of cells in any contingency table. For example, a table with 2 rows and 3 columns is
called a 2 x 3 contingency table with 6 distinct cells (2 rows x 3 columns = 6 cells). A table with 4
rows and 2 columns is considered a 4 x 2 contingency table with 8 distinct cells. A contingency
table analysis may include any number of rows and columns, but many criminological analyses
of the bivariate association between qualitative variables involve tables with 2 rows and 2
columns (i.e., 2 x 2 tables).

The last major components in a contingency table are the row and column totals, referred to as
marginal frequencies. Marginal frequencies are simply frequency distributions for each variable
attribute in the table. For example, the row marginal total of NR1 (row “Yes” below) indicates
the number of observations in attribute 1 for variable R (R1). The row marginal total for “Yes” is
equal to 6 because there are six convictions in your dataset, and the row marginal for “No” is
equal to 4 since there are four non-convictions. Similarly, the column marginal total of NC1
indicates the number of observations in attribute 1 of column variable C (C1). There are 6 males
(NC1) and 4 females (NC2) in your dataset. Finally, the sum of each marginal total (Ntotal) is
equal to the total sample size (N = 10).

Marginal frequencies: Frequency distributions for each variable attribute in a contingency
table

Converting Raw Data Into a Contingency Table

Raw data:

Case # Gender Conviction
1 Male Yes
2 Male No
3 Female No
4 Male Yes
5 Male Yes
6 Female Yes
7 Female No
8 Female No
9 Male Yes
10 Male Yes

Contingency Table – Relationship Between Gender and Conviction

Male Female Row Total

Yes 5 1 6

No 1 3 4

Column Total 6 4 10



The 3 x 2 contingency table below depicts the relationship between two qualitative variables:
the victim-offender relationship and the gender of homicide offenders.






Contingency Table – Victim-Offender Relationship by the Homicide Offender's Gender

Male Female Row Total

Family 1,548 409 1,957

Acquaintance 4,756 548 5,304

Stranger 2,542 145 2,687

Column Total 8,846 1,102 9,948




The table above allows us to conduct a contingency table analysis of the bivariate association
between a homicide offender’s gender and the victim-offender relationship. Proper
interpretations of the frequency distributions and cross tabulations in this contingency table
include the following:

• The row marginal frequencies in the contingency table depict the univariate
frequency distribution for the variable “victim-offender relationship.” They show the
relative frequency of family members, acquaintances, and strangers as homicide
victims in the dataset. Acquaintances (n = 5,304) are the most common victims of
homicides, followed by strangers (n = 2,687) and family members (n = 1,957).

• The column marginal frequencies in the contingency table depict the univariate
frequency distribution for the variable “homicide offender’s gender.” They show the
relative frequency of males and females as homicide offenders in the dataset. Males
(n = 8,846) are far more likely to be identified as homicide offenders than females
(n = 1,102).

• The joint [cell] frequencies in the contingency table represent the cross tabulation of
the offender’s gender and victim-offender relationship. As shown in this table,
acquaintances who are killed by males (n = 4,756) are the most frequently occurring
homicide situation, followed by strangers killed by males (n = 2,542) and family
members killed by males (n = 1,548). The least frequent situation for homicide
involves the killing of strangers by females (n = 145).


Practice Applications for Learning Objective 1 (LO1):


Answer the following questions about contingency tables and their structural characteristics:

Q1. What is the joint frequency for prison and
violent felony in this contingency table?

A. 100
B. 92
C. 36
D. 31



Q2. What is the row marginal frequency for
probation sentences in this contingency table?

A. 193
B. 96
C. 95
D. 16



Q3. What is the most commonly occurring type of
offense and sentence in this contingency
table?

A. Prison and violent felony
B. Prison and property felony
C. Jail and property felony
D. Probation and drug felony



Q4. What is the least commonly occurring type of
offense and sentence in this contingency
table?

A. Jail and violent felony
B. Probation and violent felony
C. “other sentence” and property felony
D. “other sentence” and drug felony



Q5. What is the proper way to define the
dimensions or size of this contingency table?

A. 2 x 2 contingency table
B. 3 x 4 contingency table
C. 4 x 4 contingency table
D. 5 x 4 contingency table


Correct Answers: Q1(B), Q2(C), Q3(A), Q4(A), and Q5(C).

LO2: Construct and interpret percentage contingency tables to assess bivariate


relationships

A contingency table of joint cell frequencies tells us how many people/observations have a
particular trait for one variable and a particular trait for another variable. To visually assess
whether or not a bivariate association exists between the two variables, joint cell frequencies
are converted into tables of standardized percentage scores. Three distinct types of percentage
tables can be derived from the observed cell frequencies in a contingency table: (1) a total
percentage table, (2) a row percentage table, and (3) a column percentage table.

1. Constructing and Interpreting a Total Percentage Table

Comparisons across contingency table cells are easier if the joint frequencies are standardized
into percentages. One type of percentage conversion of the observed cell frequencies involves
the calculation of a total percentage table. Creating a total percentage table requires two basic
steps: (1) divide each observed cell frequency by the total sample size and (2) multiply these
proportions by 100 to convert them into percentages.

Total percentage table: A contingency table that presents the percentage of the total number
of observed cases within each cell

The tables A and B below show how to construct a total percentage table. Table A depicts a
3 x 2 contingency table, including joint frequencies and marginal frequencies. Table B depicts
the associated total percentage table and shows the calculations used to standardize the joint
frequencies (e.g., how to compute percentages for each cell).

Table A. Observed Joint (Cell) Frequency Table


Gender of Homicide Offender
Victim-Offender
Relationship? Male Female Total
Family 1,548 409 1,957

Acquaintances 4,756 548 5,304

Strangers 2,542 145 2,687

Total 8,846 1,102 9,948

Table B. Total Percentage Table


Gender of Homicide Offender
Victim-Offender
Relationship? Male Female Total

Family 15.6 % 4.1 % 1,957


(1548/9948) (409/9948)
Acquaintances 47.8 % 5.5 % 5,304
(4756/9948) (548/9948)
Strangers 25.5 % 1.5 % 2,687
(2542/9948) (145/9948)
Total 8,846 1,102 9,948
100 %

Cell Calculations = [cell frequency/ total N] x 100


The primary value of a contingency table of total percentages is that it helps us to quickly
identify the most and least common attribute combinations. For example, we know from this
table that men who kill acquaintances represent nearly half (47.8%) of all homicides, and
homicides rarely involve women who kill strangers (1.5%). However, other types of
standardized percentage tables are better able to visually display the nature and magnitude of
bivariate relationships.
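As an illustration (not part of the original analysis), the total percentage calculations can be reproduced in Python from the joint frequencies in Table A:

```python
# Joint (cell) frequencies from Table A.
table = {
    ("Family", "Male"): 1548,       ("Family", "Female"): 409,
    ("Acquaintance", "Male"): 4756, ("Acquaintance", "Female"): 548,
    ("Stranger", "Male"): 2542,     ("Stranger", "Female"): 145,
}
n_total = sum(table.values())  # 9,948 homicides

# cell % = [cell frequency / total N] x 100
total_pct = {cell: round(freq / n_total * 100, 1) for cell, freq in table.items()}

print(total_pct[("Acquaintance", "Male")])  # 47.8 (most common combination)
print(total_pct[("Stranger", "Female")])    # 1.5  (least common combination)
```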

2. Constructing and Interpreting a Row Percentage Table

Another way to standardize the joint cell frequencies in a contingency table is to convert the
observed cell frequencies into a row percentage table. A row percentage table is calculated
using the following formula:

cell % = [cell frequency / marginal row frequency] x 100.

Row percentage table: A contingency table that presents the percentage of cases within each
row that are observed within each column of the table

These calculations are shown in the contingency table below.

Row Percentage Table


Gender of Homicide Offender
Victim-Offender
Relationship? Male Female Total

Family 79.1 % 20.9 % 100 %


(1548/1957) (409/1957) 1,957
Acquaintances 89.7 % 10.3 % 100 %
(4756/5304) (548/5304) 5,304
Strangers 94.6 % 5.4 % 100 %
(2542/2687) (145/2687) 2,687
Total 8,846 1,102 9,948
(88.9 %) (11.1 %) (100 %)
Cell Calculations = [row cell frequency/ row marginal frequency] x 100


Here are the interpretations provided by a row percentage table:

• Among the 1,957 homicides of family members, 79.1% of them involved male offenders
and the remaining 20.9% involved female offenders.


Calculations: cell % = [cell frequencies / row marginal frequencies] x 100.


(1) % family killings by males = [1,548 male / 1,957 total family] x 100 = 79.1 %
(2) % family killings by females = [409 female / 1,957 total family] x 100 = 20.9 %

• Among the 5,304 homicides of acquaintances, 89.7% of them involved male offenders
and the remaining 10.3% involved female offenders.

Calculations: cell % = [cell frequencies / row marginal frequencies] x 100.


(1) % acquaintance killings by males = [4,756 male / 5,304 total acquaintance] x 100 = 89.7 %
(2) % acquaintance killings by females = [548 female / 5,304 total acquaintance] x 100 = 10.3 %

• Among the 2,687 homicides of strangers, 94.6% of them involved male offenders and
the remaining 5.4% involved female offenders.

Calculations: cell % = [cell frequencies / row marginal frequencies] x 100.
(1) % stranger homicides by males = [2,542 male / 2,687 total stranger] x 100 = 94.6 %
(2) % stranger homicides by females = [145 female / 2,687 total stranger] x 100 = 5.4 %

Notice that the interpretations highlight the row comparisons within each column. For
example, this table reveals that male offenders commit a far higher percentage of stranger
homicides (95%) than family homicides (79%). Similarly, family homicides are more likely to
involve female offenders (21%) than either killings of acquaintances (10%) or stranger
homicides (5%). Both of these results suggest that there is a bivariate association between the
offender's gender and the victim-offender relationship in homicide. In particular, stranger
homicides are more likely to involve male offenders than other types of victim-offender
relationships, whereas a higher proportion of family killings involve female offenders than is
true of other homicide situations.
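The same row percentage arithmetic can be sketched in Python (our illustration, not the text's):

```python
# Joint (cell) frequencies from the homicide contingency table.
table = {
    ("Family", "Male"): 1548,       ("Family", "Female"): 409,
    ("Acquaintance", "Male"): 4756, ("Acquaintance", "Female"): 548,
    ("Stranger", "Male"): 2542,     ("Stranger", "Female"): 145,
}

# Row marginal frequencies (victim-offender relationship).
row_totals = {}
for (row, _col), freq in table.items():
    row_totals[row] = row_totals.get(row, 0) + freq

# cell % = [cell frequency / marginal row frequency] x 100
row_pct = {(row, col): round(freq / row_totals[row] * 100, 1)
           for (row, col), freq in table.items()}

print(row_pct[("Stranger", "Male")])  # 94.6% of stranger homicides involve male offenders
print(row_pct[("Family", "Female")])  # 20.9% of family homicides involve female offenders
```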

3. Constructing and Interpreting a Column Percentage Table

The final way to standardize the joint cell distributions in a contingency table is by converting
the observed cell frequencies into a column percentage table. This column percentage table is
calculated using the following formula:

Column percentage table: A contingency table that represents the percentage of cases within
each column that are observed within each row of the table

cell % = [cell frequencies / column marginal frequencies] x 100.

These calculations are shown in the contingency table below.


Column Percentage Table


Gender of Homicide Offender
Victim-Offender
Relationship? Male Female Total

Family 17.5 % 37.1 % 19.7 %


(1548/8846) (409/1102) 1,957
Acquaintances 53.8 % 49.7 % 53.3 %
(4756/8846) (548/1102) 5,304
Strangers 28.7 % 13.2 % 27.0 %
(2542/8846) (145/1102) 2,687
Total 100.0 % 100.0 % 100.0 %
8,846 1,102 9,948
Cell Calculations = [column cell frequency/ column marginal frequency] x 100

Here are the interpretations provided by a column percentage table:

• Among the 8,846 homicides by male offenders, 53.8% of them involved the killing of
acquaintances, 28.7% were stranger homicides, and the remaining 17.5% were
homicides involving family members.

Calculations: cell % = [cell frequencies / column marginal frequencies] x 100.
(1) % males killing acquaintances = [4,756 acquaintances / 8,846 males] x 100 = 53.8 %
(2) % males killing strangers = [2,542 strangers / 8,846 males] x 100 = 28.7 %
(3) % males killing family members= [1,548 family / 8,846 males] x 100 = 17.5 %

• Among the 1,102 homicides by female offenders, 49.7% of them involved the killing of
acquaintances, 37.1% were family homicides, and the remaining 13.2% involved the
killing of strangers.

Calculations: cell % = [cell frequencies / column marginal frequencies] x 100.
(1) % females killing acquaintances = [548 acquaintances / 1,102 females] x 100 = 49.7 %
(2) % females killing family members = [409 family / 1,102 females] x 100 = 37.1 %
(3) % females killing strangers = [145 strangers / 1,102 females] x 100 = 13.2 %




Notice that the interpretations highlight the column comparisons within each row. For
example, this table reveals that male offenders are more than twice as likely as female
offenders to kill strangers (29% vs. 13%, respectively). Female offenders, in contrast, are more
than twice as likely as male offenders to kill family members (37% vs. 17.5%, respectively).
Thus, we can conclude that a bivariate association exists between the offender’s gender and
the victim-offender relationship.
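Mirroring the row percentage sketch, column percentages divide by the column marginals instead (again an illustration, not part of the text):

```python
# Joint (cell) frequencies from the homicide contingency table.
table = {
    ("Family", "Male"): 1548,       ("Family", "Female"): 409,
    ("Acquaintance", "Male"): 4756, ("Acquaintance", "Female"): 548,
    ("Stranger", "Male"): 2542,     ("Stranger", "Female"): 145,
}

# Column marginal frequencies (offender's gender).
col_totals = {}
for (_row, col), freq in table.items():
    col_totals[col] = col_totals.get(col, 0) + freq

# cell % = [cell frequency / column marginal frequency] x 100
col_pct = {(row, col): round(freq / col_totals[col] * 100, 1)
           for (row, col), freq in table.items()}

print(col_pct[("Stranger", "Male")])  # 28.7% of male offenders killed strangers
print(col_pct[("Family", "Female")])  # 37.1% of female offenders killed family members
```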

4. Assessing Causal Relations in Contingency Tables

Most criminological research does not involve the construction and interpretation of four
different contingency tables (i.e., observed cell frequencies, total percentage, row percentage,
and column percentage tables). Instead, contingency table analysis is greatly simplified if the
two categorical variables are interpretable in the causal language of dependent and
independent variables.

In the analysis of causal relationships, the independent variable is the “cause” and the
dependent variable is the “effect”. A vast amount of criminological research has explored the
biological, psychological, and sociological risk factors (i.e., causes) of the onset and persistence
of criminal behavior (i.e., effects). Many demographic variables that are ascribed status
characteristics (e.g., sex, race, age) are viewed as independent variables in most criminological
studies because they influence (i.e., “cause”) the outcome of a dependent variable (e.g., the
risks of victimization, arrest, a prison sentence, or being executed). We use the values of
independent variables to explain the values of dependent variables.

Independent variable (IV): A variable that is used to explain or predict differences in a
dependent variable

Dependent variable (DV): A variable whose values are assumed to be determined by an
independent variable

When two variables are specified in causal terms (i.e., as independent and dependent
variables), the proper practice is to (1) interpret only one of these possible contingency tables
(i.e., either the row or column percent table) and (2) use the following “golden rule” for
calculating and interpreting the bivariate relationship:

• Compute percentages within the categories of the independent variable(s) and then
compare these percentages for each category of the dependent variable.

Note that the particular percentage table you will use depends on what variable is used as the
“row variable” and what variable is used as the “column variable”. For example, if your
contingency table is set up so that the independent variable is the column variable and the
dependent variable is the row variable, this “golden rule” means that you will:



• Construct and interpret a column percentage table in which you calculate the cell
frequency percentage within each column and then compare these column
percentages within each row of the table.

In contrast, if your contingency table is set up in such a way that the independent variable is the
row variable and the dependent variable is the column variable, this “golden rule” means that
you will:

• Construct and interpret a row percentage table in which you calculate the cell
frequency percentage within each row and then compare these row percentages
within each column of the table.

In the previous contingency table example of the bivariate association between the homicide
offender’s gender and the victim-offender relationship, the independent variable is gender and
the dependent variable is the victim-offender relationship. We know this because we expect
gender to influence types of homicide (whether a person kills a family member, acquaintance,
or stranger). It would not make sense to argue that types of homicides influence gender
(whether a person becomes male or female). Based on this information, how would you answer
the following questions?

1. What table should be interpreted?

Answer: Column percentage table.

2. Why interpret this particular table?

Answer: Because the independent variable (gender) is the column variable and
the dependent variable (victim-offender relationship) is the row variable.

3. Examine the column percentage table. What is the proper interpretation of the bivariate
association between the offender’s gender and the victim-offender relationship in this
table?

Answer: (1) Men are more likely than women (29% vs. 13%) to kill strangers;
and (2) Women are over twice as likely as men to kill family members (37% vs.
17.5%, respectively).

A good summary of bivariate relationships in a contingency table analysis involves interpreting
both (1) the marginal distributions for the dependent variable (e.g., over half of homicides
involve acquaintances, over one-quarter involve strangers, and the remaining one-fifth involve
family members) and (2) the joint frequency percentages (based on the column percentages in
this example).


Causal research questions specify which variable is the independent variable and which is the
dependent variable. However, relational research questions do not specify which variable is
expected to cause change in another. When you are unable to specify a causal relationship
between two qualitative variables and are simply looking at the pattern of their joint
occurrences, you should interpret the total percentage table, the row percentage table, and the
column percentage table. You also interpret the total percentage table in causal analyses of
contingency tables (because the relative prevalence of particular combinations in your data is
important information to know), but far greater emphasis is placed on interpreting the proper
table of row or column percentages.

Causal vs. Relational Research Questions

Relational: What is the relationship between associating with delinquent peers and juvenile
delinquency?

• Since this question does not specify which variable we expect to influence the other, you
should interpret all percentage tables (total, row, and column).

Causal: Does being a juvenile delinquent make it more likely that a juvenile will associate with
other delinquent peers?

• This question specifies “delinquency” as the independent variable and “association with
delinquent peers” as the dependent variable. Therefore, you should interpret the total
percentage table and either the row percentage table (if the IV is the row variable) or the
column percentage table (if the IV is the column variable).

Practice Applications for Learning Objective 2 (LO2):


Answer the following questions about interpreting contingency tables:

Q1. Using the adjoining table of observed cell
frequencies, what percentage of convicted
offenders in this sample are drug felons with a
probation sentence?

A. 1.0 %
B. 7.5 %
C. 8.5 %
D. 6.75 %


Q2. What is the dependent variable in the following research question: Are men more likely than
women to carry firearms in public places?

A. Gender
B. Carrying firearms
C. Public places



Q3. In the adjoining contingency table, what
percentage table should be the basis for your
interpretation of the relationship between these
two qualitative variables (hint: you need to identify
the independent and dependent variable in this
table to answer this question)?

A. Table of Total Percentages
B. Table of Row Percentages
C. Table of Column Percentages


Q4. Analyze the adjoining contingency table and identify
the correct statement.

A. 12% of the defendants are released before trial
and White defendants are far more likely to
receive pretrial release than other racial groups.

B. 60% of the defendants are White and White
defendants are far more likely than Black
defendants to receive pretrial release.
C. 80% of the defendants receive pretrial release and
the percentage of defendants who get pretrial
release is the same among Black, White and Other
racial groups.


Q5. Which of the following is the proper interpretation
of the bivariate association between gender and
carrying firearms in this table?

A. Women and men are equally likely to carry
firearms.

B. Women are more likely to carry firearms than
men (20% vs. 10%, respectively).

C. Men are far more likely to carry firearms than
women (67% vs. 33%, respectively).


Correct Answers: Q1(C), Q2(B), Q3(B), Q4(C), and Q5(B).


LO3: Visually assess the strength of associations in bivariate contingency tables



You have learned how to determine whether a relationship exists between two qualitative
variables by constructing percentage contingency tables. Now, you will learn how to assess the
“strength” or “magnitude” of these relationships.

Visually comparing percentage differences across categories is the most basic method for
gauging the magnitude of a bivariate association in a contingency table. The table below
explains how to determine whether an observed relationship should be interpreted as “weak”,
“moderate”, “strong”, or “perfect.” An example is presented to illustrate each interpretation.
Notice that the examples are using column percentage contingency tables. This tells us that Age
(Juvenile, Adult) is the independent variable and Failed drug test (Yes, No) is the dependent
variable. When using a column percentage table, we compare percentages across rows.

Visual/Intuitive Methods for Assessing Magnitude of Bivariate Associations in a Contingency Table

1. No/Weak Relationship = 0 to 10% differences in categories

   Failed drug test    Juvenile    Adult  (Age)
   No                    80%        82%
   Yes                   20%        18%   (2% difference)
                        100%       100%

2. Moderate Relationship = 10 to 30% differences in categories

   Failed drug test    Juvenile    Adult
   No                    65%        85%
   Yes                   35%        15%   (20% difference)
                        100%       100%

3. Strong Relationship = more than 30% differences in categories

   Failed drug test    Juvenile    Adult
   No                    40%        90%
   Yes                   60%        10%   (50% difference)
                        100%       100%

4. Perfect Relationship = 100% differences in categories

   Failed drug test    Juvenile    Adult
   No                     0%       100%
   Yes                  100%        0%   (100% difference)
                        100%       100%
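These rules of thumb can be sketched as a small Python helper (a hypothetical classifier using the chapter's cutoffs; how to treat boundary values such as exactly 10% or 30% is our assumption, since the text's ranges overlap at those points):

```python
def strength(pct_diff):
    """Rule-of-thumb label for a percentage-point difference across categories:
    0-10 weak, 10-30 moderate, over 30 strong, exactly 100 perfect."""
    if pct_diff == 100:
        return "perfect"
    if pct_diff > 30:
        return "strong"
    if pct_diff > 10:
        return "moderate"
    return "no/weak"

# The four illustrative tables above:
print(strength(2))    # no/weak
print(strength(20))   # moderate
print(strength(50))   # strong
print(strength(100))  # perfect
```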


Although the table above provides some general “rules of thumb” for making statements about
the magnitude of bivariate relationships, some caution is required in applying these rules when
you have very small samples (e.g., total sample size of n < 20 and when < 5 cases are within any
joint frequency cell for a particular variable). This is because percentages based on small
samples are highly unstable/volatile. For example, the fraction 1 out of 2 (50%) drops by 17
percentage points when 1 additional observation makes it 1 out of 3 (33%). According to the
rules in the table above, this 17-percentage-point difference could change the interpretation of
the relationship from "strong" to "moderate," even though the difference is due to a trivial
1-unit difference in the base size.

This visual method provides a good starting point for examining the bivariate relations between
two qualitative variables. However, most researchers will supplement this visual method with a
more formal statistical test to determine if the observed differences are statistically significant.

Practice Applications for Learning Objective 3 (LO3):


Use visual methods to assess the nature and strength of the bivariate associations in the
following contingency tables:

Q1. Calculate the appropriate percentage table from the adjoining contingency table and
identify the correct statement below.

Does a person's gender influence their chances of being victimized by identity theft?

             Male    Female    Row Total
Non-Victim    35       35         70
Victim        15       65         80
Col. Total    50      100        150

A. There is no relationship between gender and victimization. Both men and women are
   equally likely to be victims of identity theft.
B. There is a moderate relationship between gender and victimization. 35% of women and
   15% of men were victims.
C. There is a strong relationship between gender and victimization. 65% of women were
   victims of ID theft compared to only 30% among men.


Q2. Assume that a contingency table indicates that there is a 45% point difference between burglars
and sex offenders in whether they have been reconvicted of a crime after parole. Based on this
percentage difference, what is the strength of this relationship between type of offender and the
likelihood of reconvictions?

A. No relationship or weak relationship between type of offender and reconviction risks
B. Moderate relationship between type of offender and reconviction risks
C. Strong relationship between type of offender and reconviction risks



Q3. Calculate the appropriate percentage table from the adjoining contingency table and
identify the correct statement below.

Does a person's gender influence their chances of being victimized by identity theft?

             Male    Female    Row Total
Non-Victim    40       80        120
Victim        10       20         30
Col. Total    50      100        150

A. There is no relationship between gender and victimization. Both men and women are
   equally likely to be victims of identity theft.
B. There is a weak relationship between gender and victimization. 20% of women and 10% of
   men were victims.
C. There is a strong relationship between gender and victimization. Women are twice as
   likely to be victims of ID theft as men.


Q4. Calculate the appropriate percentage table from this contingency table and identify the
correct statement below.

Does the type of attorney influence conviction risks?

Defendant Convicted?   Public Defender   Private Attorney   Row Total
No                          100                 80             180
Yes                         400                120             520
Col. Total                  500                200             700

A. There is no relationship between type of attorney and a defendant's risks of conviction.
   Defendants have an equal 80% risk of conviction regardless of their type of attorney.
B. There is a moderate relationship between attorney type and conviction risks. 80% of
   defendants with public defenders were convicted compared to 60% with private attorneys.
C. There is a strong relationship between attorney type and conviction risks. Defendants
   with public defenders are nearly 4 times more likely to be convicted than those
   defendants with private attorneys.

Correct Answers: Q1(C), Q2(C), Q3(A), and Q4(B).





LO4: Calculate and interpret observed and expected chi-square values for
testing the hypothesis of statistical independence

The chi-square test is the statistical test most commonly used to evaluate the relationship
between two qualitative variables in a contingency table analysis. This statistical procedure
tests the null hypothesis that there is no relationship between the two variables. More
technically, a chi-square test determines whether statistical independence exists between the
two variables. Two variables are statistically independent if the relative proportions of cases
are equal across categories. Complete or perfect statistical independence only occurs if there
is a 0 percentage difference on the dependent variable for different categories on the
independent variable.

Chi-square (χ²) test: A hypothesis test of statistical significance used to evaluate the
relationship between two qualitative variables

Statistical independence: The null hypothesis tested in contingency table analysis that
assumes no relationship between variables

The column percentage table below demonstrates perfect statistical independence. Notice that age
and drug test results are statistically independent because there are no age differences in the
likelihood of failing a drug test: in this example, 25% of juveniles and 25% of adults failed a
drug test. In other words, there is no relationship between age and drug test results.

Statistical Independence
Age
Failed drug test Juvenile Adult
No 75% 75%
Yes 25% 25% (0% Difference)
100% 100%
The variables, age and drug test results, are statistically
independent because the relative proportion of failures
is identical among both juveniles and adults.

The chi-square test (symbolized by χ²) is the formal test of the statistical independence (null)
hypothesis. The steps involved in testing this null hypothesis are similar to other hypothesis
tests:

• Similar to the normal curve and t-distribution, the chi-square test uses a sampling
distribution (called the chi-square distribution) to define rare and common outcomes if
the null hypothesis is true (i.e., Ho: the two variables are unrelated [statistically
independent]).



• Sample data are converted to standardized scores (i.e., observed chi-square values).

• Decision rules establish the boundaries of the zone of rejection. These decision rules are
based on the significance level of the test (e.g., α = .05, .01, or .001) and the degrees of
freedom (df) of the test. The degrees of freedom in a contingency table are based on the
following formula: df = (# of rows - 1) x (# of columns - 1).

Chi-square (χ²) distribution: The sampling distribution used in contingency table analysis to
define rare and common outcomes when the null hypothesis is true

• The observed chi-square value and expected chi-square value (i.e., the value expected if
the null hypothesis is true) are compared to decide whether to reject or not reject the
null hypothesis. The null hypothesis is rejected when the observed chi-square value
exceeds the expected critical value of chi-square that was established by the decision
rule underlying the hypothesis test (i.e., the alpha level selected for testing statistical
significance and the degrees of freedom associated with the contingency table).

An abbreviated table of critical values of the chi-square distribution is shown below. A visual
representation of a chi-square distribution is presented to the right of the table. The selection
of chi-square test critical values to reject the null hypothesis involves a three-step process:
(1) the researcher selects the significance level for the test (e.g., α = .05),
(2) the degrees of freedom are calculated [df = (# of rows -1) x (# of columns-1)], and
(3) the critical value of the chi-square test is found at the intersection of the appropriate
column for the α-level and the appropriate row for the degrees of freedom.

Abbreviated Table of Critical Values under the Chi-Square Distribution (χ²)

[The abbreviated table of critical χ² values, alongside a figure of the chi-square distribution
marking the expected (common) and unexpected (rare) critical χ² values, appears here.]

As practice in using this table, identify the critical chi-square value for rejecting the null
hypothesis of statistical independence based on the following scenarios.

• A table with 2 rows and 2 columns (df = (rows - 1)(columns - 1) = 1 df) and α = .05:
→ Answer: expected critical χ² value = 3.841

• A table with 3 rows and 2 columns (df = (rows - 1)(columns - 1) = 2 df) and α = .05:
→ Answer: expected critical χ² value = 5.991

• A table with 2 rows and 2 columns (df = (rows - 1)(columns - 1) = 1 df) and α = .01:
→ Answer: expected critical χ² value = 6.635

• A table with 3 rows and 2 columns (df = (rows - 1)(columns - 1) = 2 df) and α = .01:
→ Answer: expected critical χ² value = 9.210
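A hard-coded lookup can make this three-step selection concrete. This is a sketch covering only a few df/α combinations from the abbreviated table; in practice you would consult a fuller printed table (or a statistical library such as scipy.stats.chi2) rather than this hypothetical dictionary:

```python
# Critical chi-square values, keyed by (degrees of freedom, alpha level).
CRITICAL_CHI2 = {
    (1, 0.05): 3.841,  (1, 0.01): 6.635,
    (2, 0.05): 5.991,  (2, 0.01): 9.210,
    (3, 0.05): 7.815,  (3, 0.01): 11.345,
}

def critical_value(df, alpha):
    """Look up the critical chi-square value for a given df and alpha."""
    return CRITICAL_CHI2[(df, alpha)]

print(critical_value(1, 0.05))  # 3.841
print(critical_value(2, 0.01))  # 9.210
```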

1. Calculating and Interpreting Chi-Square Test Results

Below are the chi-square test calculation and interpretation steps for evaluating the null
hypothesis of statistical independence between two qualitative variables.

Step 1: Develop a Contingency Table (Observed Cell Frequencies Table)

We will use the contingency table used previously to analyze the association between the
offender gender and the victim-offender relationship in homicides.

Gender of Homicide Offender


Victim-Offender
Relationship Male Female Total
Family 1,548 409 1,957
Acquaintances 4,756 548 5,304
Strangers 2,542 145 2,687
Total 8,846 1,102 9,948

Step 2: Calculate the Expected Frequencies for Each Cell



Expected cell frequencies are calculated under the assumption of statistical independence.
These values represent the numbers we would expect to find in the contingency table if the null
hypothesis is true. If two variables are independent, the expected cell frequency will be the
product of the corresponding row and column marginal totals divided by the total sample size.


For example, to find the expected number for the cell “Family, Male” take the total (marginal)
number of “family victims” in the sample (n = 1,957), multiply this number by the total
(marginal) number of males in the sample (n = 8,846), and then divide this product by the total
sample size (N = 9,948).

Here are the calculations for each cell’s expected frequency:

Expected (Family, Male) = [1,957 x 8,846]/ 9,948 = 1,740.2
Expected (Family, Female) = [1,957 x 1,102]/ 9,948 = 216.8
Expected (Acquaintance, Male) = [5,304 x 8,846]/ 9,948 = 4,716.4
Expected (Acquaintance, Female) = [5,304 x 1,102]/ 9,948 = 587.6
Expected (Stranger, Male) = [2,687 x 8,846]/ 9,948 = 2,389.3
Expected (Stranger, Female) = [2,687 x 1,102]/ 9,948 = 297.7
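The six calculations above can be reproduced with a short Python sketch (pure Python; the variable names are ours):

```python
# Expected cell frequencies under statistical independence:
# expected = (row marginal x column marginal) / N, for the homicide table.

observed = {                 # (male, female) observed counts
    "Family":       (1548, 409),
    "Acquaintance": (4756, 548),
    "Stranger":     (2542, 145),
}

col_totals = [sum(cell[j] for cell in observed.values()) for j in (0, 1)]
N = sum(col_totals)          # 9,948

expected = {row: tuple(sum(cells) * col_totals[j] / N for j in (0, 1))
            for row, cells in observed.items()}

for row, (em, ef) in expected.items():
    print(f"{row:12s}  Male {em:7.1f}   Female {ef:6.1f}")
```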

Step 3: Compute the Observed Chi-Square Value

Chi-square values are based on the differences between the observed and expected cell
frequencies. The statistic sums, across all cells, the squared discrepancies between the
observed and expected cell frequencies relative to the expected frequency. It is calculated
using the following formula:

χ² = Σ [(o − e)² / e], where o = observed cell frequency and e = expected cell frequency

This formula tells us to: (1) subtract the expected cell frequency (e) from the observed cell
frequency (o), (2) square each of these differences, (3) divide each of these squared differences
by the expected frequency for that cell, and (4) add the squared differences for all cells to
obtain the observed chi-square value.

Cell (Family, Male) = (1,548 - 1,740.2)2 / 1,740.2 = 21.2
Cell (Family, Female) = (409 - 216.8)2 / 216.8 = 170.4
Cell (Acquaintance, Male) = (4,756 - 4,716.4)2 / 4,716.4 = .33
Cell (Acquaintance, Female) = (548 - 587.6)2 / 587.6 = 2.67
Cell (Stranger, Male) = (2,542 - 2,389.3)2 / 2,389.3 = 9.76
Cell (Stranger, Female) = (145 - 297.7)2 / 297.7 = 78.3

!" = 21.2 + 170.4 + .33 + 2.67 + 9.76 + 78.3 = 282.66






Step 4: Identify the Critical Value for the Chi-Square Test

The critical value is based on the selected alpha level (e.g., α = .05) and the degrees of freedom.
Remember that we compute the degrees of freedom using the formula: df = (#rows -
1)(#columns -1). In this case, our table contains 3 rows and 2 columns: df = (3-1)(2-1) = 2.

The critical value for the chi-square test is found by looking in a Table of Critical Values of Chi-
Square (χ2). This table is available in all statistics books (see Appendix B). At the .05 significance
level (α = .05) and with 2 degrees of freedom, the critical value for this chi-square test is 5.991.
Thus, any observed chi-square value greater than 5.991 will lead to the rejection of the null
hypothesis (Ho) that the two variables are unrelated. In contrast, an observed chi-square value
of less than 5.991 will support the null hypothesis (Ho) that the two variables are statistically
independent/unrelated.

Step 5: Compare the Observed and Expected Chi-Square Values

If the observed chi-square value is greater than the expected (critical) chi-square value
(observed χ² > expected χ²), we reject the null hypothesis.

In this example, the observed chi-square value far exceeds the expected chi-square value
(282.66 vs. 5.991), so we can be confident in rejecting the claim that the two variables are not
related. Contrary to the null hypothesis, the two variables are statistically related. The column
percentage table for this sample indicates that (1) male homicide offenders are far more likely
to kill strangers than female offenders (29% vs. 13%, respectively) and (2) female offenders are
far more likely to kill family members than male offenders (37% vs. 17.5%, respectively). These
gender differences are statistically significant because they would occur far less than 5 times
out of 100 (α = .05) if the null hypothesis of no relationship were true.

2. Issues with Computing/Interpreting Chi-Square Tests

The chi-square test for statistical significance of a bivariate relationship can be applied to any
size contingency table. For example, if you wanted to look at state differences in resident gun
ownership, you would have a crosstab with 50 rows (1 for each state) and 2 columns (1 column
for the number of residents in the sample that said “yes” to gun ownership and 1 column for
the “no” group). This table would have a total of 100 separate cells (50 states x 2
gun-ownership categories [yes/no] = 100 cells).

Similar to the procedures described above for the 3 x 2 example of the victim-offender
relationship and gender, a chi-square test of state differences in gun ownership would involve
(1) creating a contingency table of observed cell frequencies, (2) calculating the expected
frequencies for each cell – assuming that the null hypothesis is true, (3) computing the chi-
square test statistic value, (4) finding the critical value associated with our decision rule for


rejecting the null hypothesis, and (5) deciding whether to reject or not reject the null
hypothesis by comparing the observed chi-square value with the expected chi-square value.

Just like other statistical procedures discussed in previous chapters, chi-square tests are
strongly affected by sample size and other factors. In particular, small percentage differences
across categories (e.g., a 5% difference between Democrats and Republicans in their support for
legalizing marijuana) may be substantively trivial but statistically significant if a very large
sample is used (n > 500). In contrast, for small samples and especially when the expected
frequencies for any particular cell are less than 5, chi-square tests are somewhat unreliable and
may provide misleading results depending upon the nature and number of small cell
frequencies.

When interpreting the results of a bivariate contingency table analysis, researchers will use
both visual methods and formal statistical tests like chi-square. However, they will also balance
the concerns of substantive and statistical significance. The following general guidelines may
help you decide when to interpret a bivariate association as being “significant” by balancing the
issues of sample size, statistical significance, and substantive significance:

• Don’t place too much emphasis on small percentage differences that are statistically
significant when using large samples (e.g., a 3% difference will be statistically significant
when n = 1,000, but it is probably substantively trivial in most criminological research).

• Don’t dismiss moderately large percentage differences (e.g., 10% to 20% differences) as
“insignificant” in small samples (n < 50) even if the chi-square value is not statistically
significant. These findings may suggest an important “trend” that requires further
investigation.

• Place more emphasis on statistically significant findings when sample sizes are
moderately large (between 100 and 1,000). This rule may help restrain people from
making too much of percentage differences that have not passed a reasonable statistical
significance test.










Practice Applications for Learning Objective 4 (LO4):


Answer the following questions about using chi-square tests to evaluate the statistical
significance of bivariate relationships in contingency table analyses:

Q1. Does the following column percentage
table provide visual support for the null
hypothesis (Ho) of statistical independence
(i.e., no relationship between the
variables)?

A. Yes, it gives visual evidence for Ho.
B. No, it gives visual evidence to reject Ho.


Q2. What is the observed chi-square value for the following contingency table?

Does a person's gender influence their chances of being victimized by identity theft?

             Male    Female    Row Total
Non-Victim    40       80        120
Victim        10       20         30
Col. Total    50      100        150

A. Observed χ² value = 0
B. Observed χ² value = 6.5
C. Observed χ² value = 8.9
D. Observed χ² value = 14.3


Q3. Using a .05 significance level (α = .05), what is the expected critical value for the
chi-square test for the following contingency table? (Hint: you need to figure out the degrees
of freedom for your test to answer this question.)

Does a person's gender influence their chances of being victimized by identity theft?

             Male    Female    Row Total
Non-Victim    40       80        120
Victim        10       20         30
Col. Total    50      100        150

A. Expected χ² value = 2.706
B. Expected χ² value = 3.841
C. Expected χ² value = 5.991
D. Expected χ² value = 7.815



Q4. An analysis of gender differences in the likelihood of getting a prison sentence revealed a
chi-square value of 2.3 with 1 degree of freedom. Based on this obtained chi-square
value and degrees of freedom, would you reject or not reject the null hypothesis at the
.05 significance level (α = .05)?

A. Reject the null hypothesis of no relationship between gender and sentencing.
B. Do not reject the null hypothesis of no relationship between gender and sentencing.
C. There is not enough information to answer this question.

Q5. An analysis of the relationship between type of evidence and the likelihood of arrest
yields a chi-square value of 12.5 with 3 degrees of freedom. Based on this obtained chi-
square value and degrees of freedom, would you reject or not reject the null hypothesis
at the .01 significance level (α = .01)?

A. Reject the null hypothesis of no relationship between type of evidence and arrest.
B. Do not reject the null hypothesis of no relationship between type of evidence and
arrest.
C. There is not enough information to answer this question.
Correct Answers: Q1(B), Q2(A), Q3(B), Q4(B), and Q5(A).

Review

Use these reflection questions and directed activities to help master the material presented in
this chapter.

What?
Can you define the critical concepts covered in this chapter? Describe each of these concepts
in your own words and give at least two examples of each.
• Bivariate association
• Contingency table
• Joint frequencies
• Marginal frequencies
• Table of total percentages
• Table of row percentages
• Table of column percentages
• Independent variable (IV)
• Dependent variable (DV)
• Chi-square (χ²) test
• Statistical independence
• Chi-square (χ²) distribution


How?

Answer the following questions and complete the associated activities.
• How are contingency tables constructed? Describe the steps involved.
• How are percentage contingency tables constructed? Describe the steps involved.
• How are contingency tables used to assess the magnitude of a bivariate relationship?
Describe the comparisons made in this type of analysis.
• How is a chi-square test calculated? Describe the steps involved.

When?

Answer the following questions.
• When should we use contingency table analyses – what types of variables should be
examined?
• When should we interpret total, row, or column percentage tables?
• When is a bivariate relationship considered weak, moderate, strong, or perfect?
• When should the null hypothesis be rejected in a chi-square test?

Why?

Think about the material you learned in this chapter and consider the following question.
• Why does knowing this material about assessing the relationship between two
qualitative variables help you evaluate the accuracy of criminological research findings
and identify distorted claims and misuses of data by other researchers, government
agencies, and the mass media?





Analyzing Criminological Data Chapter 11

CHAPTER 11:
Analyzing Variation Within and Between Group Means

This chapter describes a statistical approach for analyzing the sources of variation within a
dependent variable and assessing how much of this variation is explained by an independent
variable. The approach is called the "analysis of variance" (ANOVA for short) and it focuses on
identifying the extent to which knowledge of group membership (e.g., gender, race,
employment status) explains variation in a quantitative dependent variable (e.g., crime rates,
number of victimizations, length of prison sentences). Understanding the concepts and learning
objectives in this chapter will improve your analytical skills for evaluating causal research
questions in criminology.

Concepts

- ANOVA
- Partitioning variance
- Total variation
- Between-group variation
- Within-group variation
- Correlation ratio (η²)
- F-distribution
- Mean sum of squares between groups
- Mean sum of squares within groups
- F-ratio

Learning Objectives (LO)

LO1: Describe the major sources of variability in a quantitative dependent variable
LO2: Apply visual methods to identify and assess the magnitude of group differences
LO3: Calculate and interpret the correlation ratio (η²)
LO4: Conduct and evaluate formal tests of hypotheses about differences in group means

Introduction

In the previous chapter, we described how to evaluate the nature and statistical significance of
bivariate associations between two qualitative variables. We used a contingency table of
percentage differences to assess differences across categories and used the chi-square
sampling distribution to formally test the null hypothesis of statistical independence between
variables.

In this chapter, we describe the principles, concepts, and statistical technique used to assess
the nature and strength of relationships between a categorical independent variable and a
continuous dependent variable. The statistical technique used to evaluate these causal
relationships is called an analysis of variance (ANOVA). This statistical method simply compares


and analyzes the differences in the average scores on the dependent variable for each group or
category of the independent variable.

Our discussion of ANOVA focuses on (1) describing the major sources of variability in a
quantitative dependent variable, (2) using visual methods to identify and assess the magnitude
of variability in group differences for this type of dependent variable, (3) calculating and
interpreting the correlation ratio (as a measure of the strength of the bivariate relationship
between these variables), and (4) conducting and evaluating formal tests of hypotheses about
the differences in group means.

We begin with describing and defining the major sources of variability that underlie the
assessment of bivariate associations through the analysis of variance (ANOVA).

LO1: Describe the major sources of variability in a quantitative dependent


variable

Much criminal justice research attempts to explain variability (differences) observed in a
dependent variable. We use statistical tests to see whether this variation can be explained by
variability observed in an independent variable. This allows us to determine whether one
variable (the independent variable [IV]) might be responsible for causing change in another (the
dependent variable [DV]).

Examine the following criminological research questions that require us to explain variability in
a dependent variable by examining these differences across categories of an independent
variable. Notice that the independent variable is categorical (qualitative) and the dependent
variable is quantitative in each question.

• How much of the variation in the crime rates of different cities is explained by their
regional location (e.g., South, West, Midwest)?

• Are differences in the number of citizen complaints for excessive use of force explained,
in part, by the type of police patrol practices (e.g., one- versus two-officer patrol units)?

• Is a large amount of the statewide variability in homicide rates accounted for by
knowing whether or not a state has capital punishment?

• Are there fewer drug relapses among criminal defendants who receive one type of
treatment (e.g., cognitive behavioral therapy) than another (e.g., family counseling)?

• How much variation in the length of prison sentences for convicted offenders is
attributable to the nature of their prior felony arrests (coded 0, 1, 2 or more arrests) or
some other factor (e.g., offense seriousness, gender, race)?


In each of these examples, the basic research question asks how much variation in a
quantitative dependent variable (e.g., crime rates, number of excessive force complaints and
drug relapses, length of sentence) is accounted for by differences across a categorical
independent variable (e.g., regional location, type of drug treatment, level of prior arrests). To
answer these questions, criminologists examine the differences in the mean values on the
dependent variable for the different categories of the independent variable. An analysis of
variance (ANOVA for short) is used to analyze this variability in mean differences.

Analysis of variance (ANOVA): A statistical technique for assessing the amount of variability
in a dependent variable that is explained by different categories of an independent variable

Let’s begin our investigation of ANOVA by examining the basic principles of variability and its
sources.

1. An illustration of the basic principles of variability and its explanations

Variation is a concept that represents the deviation of scores from a central value. For example,
in a previous chapter on variability (see Chapter 6), you learned how to calculate variance.
Variance is a statistical measure of the average squared deviation of individual scores from
their mean score. The following tables illustrate the variability in the number of
prior arrests and how these arrests vary by the type of offender.

Prior Arrests for 12 Offenders

Individual   Prior Arrests
1                  0
2                  1
3                  1
4                  2
5                  4
6                  5
7                  5
8                  6
9                 10
10                15
11                15
12                20
Overall Mean = 7

Prior Arrests for Different Groups of Offenders

Individual   Prior Arrests   Offender Type    Group Mean
1                  0         Murderer
2                  1         Murderer         x̄M = 1
3                  1         Murderer
4                  2         Murderer
5                  4         Petty Thief
6                  5         Petty Thief      x̄PT = 5
7                  5         Petty Thief
8                  6         Petty Thief
9                 10         Drug Offender
10                15         Drug Offender    x̄DO = 15
11                15         Drug Offender
12                20         Drug Offender
Overall Mean = 7


The first table above shows the number of prior arrests for 12 criminals. These arrest
data are rank-ordered from low to high. You should immediately notice the following when you
look at this table:

(1) there is wide variability in the number of prior arrests among these 12 offenders,
(2) this variability ranges from 0 arrests to 20 prior arrests, and
(3) the average (mean) number of arrests for all 12 offenders is 7, but there is variation
around this mean (i.e., some offenders have far fewer, some have far more).

Given the widespread variability in prior arrests among these 12 offenders, most criminologists
want an explanation for these differences from the average prior arrest value. In other words,
they look for other variable(s) that may account for the observed variation in the number of
prior arrests. One possible explanatory variable may be the type of offender. For example,
wouldn't you expect particular types of habitual offenders (e.g., thieves who are kleptomaniacs,
drug addicts) to have more prior arrests than other offenders?

The second table further classifies these same individuals and their prior arrests on the
basis of offender type (coded Murderer, Petty Thief, or Drug Offender). This allows us to
consider whether this particular variable helps us explain variability in offenders' arrest
histories.

So, what do we learn about variability in arrest histories by looking at differences within each
offender group? Here is the information revealed in the grouped table above:

• Murderers in this sample don’t have extensive criminal records. They average 1 previous
arrest and there is little variation around this mean (e.g., their prior arrests range from
only 0 to 2 arrests).

• Petty thieves in this sample have more extensive criminal records than murderers, but
they have a far lower average number of prior arrests than drug offenders. The range of
prior arrests for petty thieves is also fairly narrow (ranging from 4 to 6 prior arrests).
Even the petty thief with the longest criminal record (i.e., individual #8 has 6 prior
arrests) has a far lower number of arrests than the drug offender with the shortest
criminal record (i.e., individual #9 has 10 previous arrests).

• Drug offenders in this sample have the most extensive criminal records (x̄ = 15) and the
largest variability around their mean (ranging from 10 to 20 prior arrests).

• Because these between-group differences in the average criminal record are fairly large
and there is only a small amount of variation within most of these groups, you should
conclude that knowledge of the offender type helps us explain the total amount of
variability in the arrest histories of these 12 offenders.


Knowledge of offender type helps explain the total amount of variability in prior arrests for the
following reasons:

(1) If we hear that someone has only 0-2 prior arrests, we can successfully predict that this
person is a murderer in our sample.
(2) If someone has 4-6 prior arrests, we can successfully predict that this person is a
petty thief.
(3) If someone has anywhere between 10 and 20 prior arrests, we can successfully predict
that this person is a drug offender.

In the example above, we examined the total variation in a dependent variable and evaluated
the variation’s source. We determined that the variation is due mostly to differences between
groups, which leads us to reject the null hypothesis of no differences between groups. If we
found that the variation was the result of within group differences (i.e., knowing the offense
type does not help to predict their arrest history), then we would not reject the null hypothesis.
An analysis of variance (ANOVA) is a formal test to determine whether variation in the
dependent variable stems from between group or within group differences.

2. The Method and Major Components of ANOVA

ANOVA is a statistical technique commonly used in experimental research to examine the
effect of an independent variable (sometimes called treatment levels) on some outcome
measure (dependent variable). It is also more generally used when criminologists want to find
out if knowledge of group membership (e.g., male vs. female, low vs. medium vs. high drug use)
helps explain variation in some outcome variable (e.g., variation in length of prison sentence,
hours in drug rehab, number of times arrested). The variability that we are trying to explain
with an ANOVA test is variability around the overall mean for a dependent variable.

The basic statistical principle underlying ANOVA is the concept of “partitioning variance”. In
particular, ANOVA looks at the total variation in a dependent variable and separates this
variation into two components: between-group and within-group variation.

Partitioning variance: The separation of the total variance in a dependent variable into two
distinct parts: (1) variability between groups in their mean ratings on the dependent
variable and (2) variability within groups around their group means

Total variation (SSTotal): The total amount of variation in a dependent variable, represented
by the sum of the squared deviations of individual scores from the overall mean across all
observations

• The total variation (symbolized as SSTotal) represents the total sum of the squared
deviations of individual scores from the overall mean across all observations. Total
variation is computed using the following formula:




SSTotal = ∑ (xi − XGM)2 , where (xi − XGM)2 is the squared deviation of each individual score
(xi) from the grand mean (XGM), and the grand mean is the overall mean of the dependent
variable across all observations, defined as XGM = ∑ xi / n

Between-group variation (SSBetween): The variation in a dependent variable that is
represented by differences between the group means and the grand mean

• The between-group variation (symbolized as SSBetween) represents the total sum of the
squared deviations of the means for each group or category of the independent variable
from the grand mean for all observations. Between-group variation is computed using
the following formula:

SSBetween = ∑ n(xgroup − XGM)2 , where n is the number of observations in each group,
xgroup is the mean for each group, and XGM is the grand mean across all observations.

Within-group variation (SSWithin): The variation in a dependent variable that is represented
by the differences between each individual score and its group mean

• The within-group variation (symbolized as SSWithin) represents the total sum of the
squared deviations of each individual score from its group mean. It measures the
variation around each group’s mean. Within-group variation is also called “error
variance,” since it represents the error in predicting the dependent variable solely on the
basis of the means for each category of the independent variable. Within-group variation
is computed using the following formula:

SSWithin = ∑ (xi − xgroup)2 , where xi is the individual score and xgroup is the group mean for
that individual’s group.

When these variation components are taken together, the total variation can be expressed by
the following formula:
SSTotal = SSBetween + SSWithin

ANOVA assesses the strength of bivariate relationships by comparing the relative sizes of the
between-group and within-group variation. The greater the proportion of the total variation
(SSTotal) that is attributable to between-group variation (SSBetween), the stronger the relationship
between the independent variable and the dependent variable. However, the greater the
relative size of the within-group variation (SSWithin), the greater the error in predicting the
dependent variable from the independent variable. In cases of large within-group variation, the
independent variable becomes less useful in explaining the variation in the dependent variable.

3. Partitioning variance into different components

The best way to illustrate the partitioning of variation (SSTotal, SSBetween, SSWithin) is by using
ANOVA to test a strong bivariate relationship. By showing an ANOVA outcome when applied to
a “strong relationship” scenario, you should be better able to understand how ANOVA works in
more typical cases.

The table below reproduces our earlier variability example of prior arrests for different groups
of offenders. We will use this to illustrate (1) how to calculate the numerical values for each of
the three components of variation (SSTotal, SSBetween, SSWithin) and (2) how to interpret the relative
sizes of the two sources of variability (SSBetween, SSWithin). By doing so, we can explore the
bivariate association between a categorical independent variable (type of offender) and a
quantitative dependent variable (number of prior arrests).

Calculations and interpretations of the partitioning of variance in ANOVA

Individual    Prior Arrests    Offender Type     Group Mean
     1              0          Murderer
     2              1          Murderer          x̄M = 1
     3              1          Murderer
     4              2          Murderer
     5              4          Petty Thief
     6              5          Petty Thief       x̄PT = 5
     7              5          Petty Thief
     8              6          Petty Thief
     9             10          Drug Offender
    10             15          Drug Offender     x̄DO = 15
    11             15          Drug Offender
    12             20          Drug Offender
Overall Mean = 7



Step 1: Before making calculations, make some general preliminary observations.



You should notice that:
• There is wide variability in arrests, ranging from 0 to 20.
• There are big differences in the number of arrests between the groups of offenders
(e.g., murderers have only a few prior arrests, drug offenders have many priors).
• There are small differences in the number of prior arrests within each group of
offenders. The possible exception is among drug offenders, where the arrests range
from 10 to 20.


Step 2: Calculate the overall grand mean and group means

• Grand Mean = XGM = ∑ xi / n = (0 + 1 + 1 + 2 + 4 + 5 + 5 + 6 + 10 + 15 + 15 + 20) / 12 = 7.0

• Group Mean for Murderers = x̄M = ∑ xM / nM = (0 + 1 + 1 + 2) / 4 = 1.0

• Group Mean for Petty Thieves = x̄PT = ∑ xPT / nPT = (4 + 5 + 5 + 6) / 4 = 5.0

• Group Mean for Drug Offenders = x̄DO = ∑ xDO / nDO = (10 + 15 + 15 + 20) / 4 = 15.0
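The Step 2 arithmetic can be checked with a few lines of Python. This is a minimal sketch using the chapter's arrest data; the `groups` dictionary and variable names are ours, not from the text:

```python
# Prior arrests for the 12 offenders, grouped by offender type
groups = {
    "Murderer":      [0, 1, 1, 2],
    "Petty Thief":   [4, 5, 5, 6],
    "Drug Offender": [10, 15, 15, 20],
}

# Grand mean: overall mean across all 12 observations
all_scores = [x for scores in groups.values() for x in scores]
grand_mean = sum(all_scores) / len(all_scores)   # 7.0

# Group means: mean within each offender type
group_means = {g: sum(s) / len(s) for g, s in groups.items()}
print(grand_mean, group_means)
```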



Step 3: Compute the total variation (SSTotal), between-group variation (SSBetween), and
within-group variation (SSWithin).

• Total Variation = SSTotal = ∑ (xi − XGM)2 = (0-7)2 + (1-7)2 + (1-7)2 + (2-7)2 + (4-7)2 + (5-7)2
  + (5-7)2 + (6-7)2 + (10-7)2 + (15-7)2 + (15-7)2 + (20-7)2 = 470.0

• Between-Group Variation = SSBetween = ∑ n(xgroup − XGM)2 = 4(1-7)2 + 4(5-7)2 + 4(15-7)2
  = 416.0

• Within-Group Variation = SSWithin = ∑ (xi − xgroup)2 = [(0-1)2 + (1-1)2 + (1-1)2 + (2-1)2] +
  [(4-5)2 + (5-5)2 + (5-5)2 + (6-5)2] + [(10-15)2 + (15-15)2 + (15-15)2 + (20-15)2] = 54.0

• Total Variation = SSTotal = SSBetween + SSWithin --> 470.0 = 416.0 + 54.0
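The partitioning in Step 3 can also be verified programmatically. Below is a minimal Python sketch (helper names are ours, not from the text) that computes all three sums of squares and confirms that SSTotal = SSBetween + SSWithin:

```python
groups = {
    "Murderer":      [0, 1, 1, 2],
    "Petty Thief":   [4, 5, 5, 6],
    "Drug Offender": [10, 15, 15, 20],
}

all_scores = [x for s in groups.values() for x in s]
grand_mean = sum(all_scores) / len(all_scores)

# SS_Total: squared deviations of every score from the grand mean
ss_total = sum((x - grand_mean) ** 2 for x in all_scores)

# SS_Between: n times the squared deviation of each group mean from the grand mean
ss_between = sum(len(s) * (sum(s) / len(s) - grand_mean) ** 2
                 for s in groups.values())

# SS_Within: squared deviations of each score from its own group mean
ss_within = sum((x - sum(s) / len(s)) ** 2
                for s in groups.values() for x in s)

print(ss_total, ss_between, ss_within)   # 470.0 416.0 54.0
assert ss_total == ss_between + ss_within
```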


Step 4: Interpret the relative sizes of between-group variation and within-group variation

• The total variation in the number of prior arrests is 470.

• Most of this variation (416 of the 470) represents differences between the group
means and the overall grand mean (i.e., between-group variation). This between-
group variation is the variation in prior arrests that is explained by knowing the
individual's particular group membership (i.e., Are they a murderer, petty thief, or
drug offender?).

• Only a relatively small amount of the total variation (54 of the 470) represents
variation around the means for each group. This within-group variation represents
errors in predicting the number of prior arrests solely on the basis of the group
means for the type of offender.


Practice Applications for Learning Objective 1 (LO1):


Answer the following questions about the major sources of variability in the analysis of variance
(ANOVA):

Q1. When conducting an analysis of variance (ANOVA), what is the proper formula for the
total variation in the dependent variable (SSTotal)?
A.
SSTotal = ∑ (xi − XGM )2
B.
SSBetween = ∑ n(xgroup − XGM )2
C.
SSWithin = ∑ (xi − XGroup )2


Q2. When conducting an analysis of variance (ANOVA), what is the proper formula for the
between-group variation in the dependent variable (SSBetween)?
A.
SSTotal = ∑ (xi − XGM )2
B.
SSBetween = ∑ n(xgroup − XGM )2
C.
SSWithin = ∑ (xi − XGroup )2



Q3. When conducting an analysis of variance (ANOVA), what is the proper formula for the
within-group variation in the dependent variable (SSWithin)?
A.
SSTotal = ∑ (xi − XGM )2

B.
SSBetween = ∑ n(xgroup − XGM )2
C.
SSWithin = ∑ (xi − XGroup )2


Q4. Using the formula SSTotal = Σ (xi − XGM)2, what is the
numerical value of the total variation in this data?

The correct answer is:


A. 12
B. 56
C. 96
D. 152




Q5. Using the formula SSBetween = Σ n(xgroup − XGM)2, what
is the numerical value of the between-group
variation in this data?
The correct answer is:
A. 12
B. 56
C. 96
D. 152


Q6. If SSTotal = 200 and SSBetween = 160, what is the numerical value of SSWithin?

A. 360
B. 200
C. 160
D. 40

Correct Answers: Q1(A), Q2(B), Q3(C), Q4(D), Q5(C), and Q6(D).




LO2: Apply visual methods to identify and assess the magnitude of group
differences

The quickest way to evaluate the nature and magnitude of bivariate relations through ANOVA is
by visually inspecting the mean differences on the dependent variable across the attributes of
the independent variable. Basic mean comparisons for three groups are symbolized by
comparing the values of x̄Group1, x̄Group2, and x̄Group3. In our example of prior arrests by
offender type, visual inspection of these group means (i.e., x̄Murder = 1; x̄Petty Thief = 5;
x̄Drug = 15) clearly indicates that murderers, on average, have far fewer prior arrests than drug
offenders. These mean differences are also dramatically revealed in the bar chart below. Both
the numerical mean comparisons and the bar chart strongly suggest that knowledge of
offender type explains some of the sample’s variability in prior arrests.

[Bar chart: mean number of prior arrests for murderers, petty thieves, and drug offenders]



Visual inspection of mean differences is an important starting point for assessing the strength
of a bivariate association. However, it is also important to examine the magnitude of within-
group variation. High levels of variation around these group means suggest that the mean for
the group may not be representative of all scores within that particular category, contributing
to a degree of error in predicting a dependent variable solely on the basis of these group
means. You must compare the relative size of both the between-group and within-group
variability when using visual methods to more accurately judge the strength of these bivariate
relationships. Here are some general rules for making tentative judgments about the strength
of these relationships:


• Evidence of Weak Bivariate Associations in ANOVA: (1) small differences between


group means and large within-group variability or (2) comparable levels of between-
group and within-group variability (e.g., high between- and high within- group variation
= weak relationship).

• Evidence of Moderate Bivariate Associations in ANOVA: Moderate to large differences
between group means and relatively smaller but notable variability in the numerical
values of scores around each group's mean.

• Evidence of Strong Bivariate Associations in ANOVA: Large differences between group
means and very small within-group variability around these means.

• Evidence of Perfect Bivariate Associations in ANOVA: Large differences between group
means and no within-group variability (i.e., the group means vary widely but all scores
within each group are identical).

Notice that these "interpretation rules" allow us to make some inferences about the strength of
a bivariate relationship between a categorical independent variable and a quantitative
dependent variable with only two bits of information: (1) the mean values on the dependent
variable for each category or group of the independent variable and (2) a measure of the
variability in scores within each group. For example, using the group means and group
standard deviations (as a measure of variability within each group), the table below illustrates
how we make inferences by visually comparing these numerical values.

Causal Relationship       Group Means (x̄) and              Strength of     Why is relationship weak,
                          Standard Deviations (s)          Relationship    moderate, or strong?

Gender differences in     x̄Male = $95,000, s = $750        Strong          Large between- ($45,000) and
detective salaries        x̄Female = $50,000, s = $500                      small within- ($750 and $500)
                                                                           group differences

Age differences in        x̄Juvenile = 3 yrs, s = 1.5 yrs   Moderate        Large between- (2 yrs) and
length of probation       x̄Adult = 1 yr, s = 2.5 yrs                       moderate within- (1.5 and 2.5
                                                                           yrs) group differences

Court differences in      x̄Civil = 3.1 days, s = 1.9       Weak/Moderate   Moderate between- (<1 day) and
trial length              x̄Criminal = 2.4 days, s = 1.5                    moderate within- (1.9 and 1.5)
                                                                           group differences

Regional differences      x̄South = 6.2 per 100k, s = 4.0   Weak            Small between- (0.1) and
in murder rates           x̄North = 6.1 per 100k, s = 3.1                   moderate within- (4.0 and 3.1)
                                                                           group differences
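Group means and standard deviations like those in the table can be computed directly with Python's standard library. Here is a minimal sketch using the chapter's arrest data; the `summary` dictionary and loop are ours, not from the text:

```python
import statistics

# Prior arrests by offender type (the chapter's example data)
groups = {
    "Murderer":      [0, 1, 1, 2],
    "Petty Thief":   [4, 5, 5, 6],
    "Drug Offender": [10, 15, 15, 20],
}

# Mean and sample standard deviation for each group
summary = {name: (statistics.mean(scores), statistics.stdev(scores))
           for name, scores in groups.items()}

for name, (mean, sd) in summary.items():
    print(f"{name}: mean = {mean:.1f}, sd = {sd:.2f}")
```

Large differences among the three means (1, 5, 15) paired with small standard deviations within each group would, under the guidelines above, be read as evidence of a strong bivariate association.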


Another visual approach involves the use of box plots. These graphs show how concentrated
scores are around the median value for each category and the magnitude of the within-group
variation. The following box plot of regional differences in the control of corruption clearly
reveals some differences between world regions in their achievement of this objective, but it
also shows substantial variability within each of these world regions.

[Box plot: control of corruption scores by world region]

This boxplot suggests a weak bivariate relationship because (1) while the median values (as
indicated by the black horizontal lines within each red box) show some regional differences in
their average values, (2) the large amount of within-group variation (as indicated by both the
length of the red boxes that cover the middle 50% of the cases and the long "whiskers" that
show the additional range of values) overwhelms these between-group differences. When you
have large between-group differences and large within-group differences, the bivariate
relationship between the two variables will be relatively weak.

Practice Applications for Learning Objective 2 (LO2):


Answer the following questions about applying visual methods for assessing the nature of
group differences in the analysis of variance (ANOVA):

Q1. If there are large differences between group means and only small variation within each
group, what will be the strength of the relationship between the independent variable
and dependent variable?
A. weak
B. moderate
C. strong



Q2. Given the following group means and standard deviations (x̄1 = 102, s1 = 14; x̄2 = 101, s2 =
23; x̄3 = 99, s3 = 25), what is the magnitude of the bivariate association between the
independent variable and these scores on the dependent variable?

A. weak
B. moderate
C. strong


Q3. What is the correct interpretation of the pattern of the group means in this bar chart?

[Bar chart: mean investigative hours per case for murder, rape, robbery, and burglary]

A. The average number of investigative hours is higher for murder than for any other
type of crime in this chart.
B. The average number of investigative hours is higher for burglary cases than robbery
cases.
C. The average number of investigative hours is higher for robbery cases than rape cases.
D. There are no major differences in the mean hours of investigation for different types
of crimes.


Q4. What is the correct interpretation of the pattern of the group means in this bar chart?

[Bar chart: mean dollar loss by robbery location for street, residence, and bank robberies]

A. The mean dollar loss in street robberies is about $2,000.
B. The mean dollar loss in residential robberies is about $3,000.
C. The mean dollar loss is far larger in bank robberies than the other types of robberies
in this chart.
D. There are no major differences in the mean dollar loss in different types of robbery.





Q5. What is the correct interpretation of the pattern of
the group means in this box plot?

[Box plot: probation risk scores for juveniles charged with simple assault, petty theft, and drug use]

A. There is less variation around the average
probation risk score for juveniles charged
with simple assault than other juvenile
offenders.

B. There is less variation around the average
probation risk score for juveniles charged
with petty theft than other juvenile offenders.

C. There is less variation around the average
probation risk score for juveniles charged
with drug use than other juvenile offenders.


Q6. Based on this box plot, what is the magnitude or
strength of the relationship between type of
offense and probation risk?

A. weak, because of small between group and
large within group differences.

B. moderate, because of some large differences
between groups and relatively moderate
levels of within group variation for most
offenses.

C. strong, because of large between group and
small within group differences.

Correct Answers: Q1(C), Q2(A), Q3(A), Q4(C), Q5(A), and Q6(B).


LO3: Calculate and interpret the correlation ratio (η2)

One statistical measure of a bivariate association's strength when using ANOVA is the
correlation ratio (symbolized as η2). There are two different ways to interpret this statistical
measure’s value: (1) it represents the percent of the variation in a dependent variable that is
explained or accounted for by the independent variable and (2) it represents the proportional
reduction in error that occurs when moving from predicting the dependent variable based on
its grand mean to predicting it on the basis of the categorical means on the independent
variable. Most criminologists use the first interpretation of the correlation ratio, but they are
essentially equivalent expressions.

Correlation ratio (η2): A statistical measure of the proportion of variation in the dependent
variable that is explained by the categories of the independent variable

The calculation of the correlation ratio involves comparisons of the relative size of the
between-group and total variation in the dependent variable. It is calculated using the
following formula:

η2 = SSBetween / SSTotal , where SSBetween = ∑ n(xgroup − XGM )2 and SSTotal = ∑ (xi − XGM )2

In our previous example of predicting the number of prior arrests on the basis of offender type,
the correlation ratio is .885 (SSBetween / SSTotal = 416 / 470 = .885). This value indicates that 88.5%
of the variation in arrest histories is explained by the type of offender. You could also interpret
this by saying that 88.5% of arrest history variation is explained by “group membership,” where
group refers to the type of offender. Alternatively, the correlation ratio also indicates that we
reduce by 88.5% the errors in predicting the number of prior arrests when we use the group
mean scores (x̄Murder = 1, x̄Petty Thief = 5, x̄Drug Offender = 15) rather than the grand mean
(x̄Grand Mean = 7) to predict these individual arrest histories.
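Given the sums of squares, the correlation ratio is a one-line computation. This small Python sketch (variable names are ours) reproduces the chapter's value:

```python
ss_between = 416.0   # between-group variation from the chapter's example
ss_total = 470.0     # total variation from the chapter's example

# Correlation ratio: proportion of total variation explained by group membership
eta_squared = ss_between / ss_total
print(round(eta_squared, 3))   # 0.885
```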

Several general properties of the correlation ratio (η2) are summarized below:
• η2 ranges in its numerical value from .00 to 1.00. A correlation ratio of .00 indicates
that there is no statistical association between the dependent variable and
independent variable. This may be due to (1) no between-group variation in their
means, (2) extremely large within-group variation, or (3) both of these conditions. A
correlation ratio (η2) with a value equal or close to 1.00 is produced by the conditions
of (1) large between-group variation and (2) none or little within-group variation.

In the example used in this chapter, our finding of η2 = .885 is produced by the
conditions of large between- and small within-group variation. Note that this
correlation ratio would equal its maximum value of 1.00 if all 4 murderers had 1
arrest, all 4 petty thieves had 5 arrests, and all 4 of the drug offenders in this
example had 15 arrests. In this case, there would be large between-group
variation (means of 1, 5, and 15) and no within-group variation-- thus, η2 = 1.00.

• The typical value of η2 in criminological research depends on the type of research
activity. For example, in experimental designs with small samples (n < 20 per group), it
is common to obtain η2 values that range from .40 to .80, indicating that anywhere
between 40% and 80% of the variation in the outcome [dependent] variable is
explained by the type of experimental treatment [independent variable]. However, in
large sample survey research (n > 100 per group), typical η2 values range from about
.05 to .40.


• “Rules of thumb” for defining η2 values as indicative of either “weak”, “moderate”, or


“strong” bivariate associations are hard to establish because these conclusions vary by
the type of research (e.g., experimental vs. non-experimental studies) and disciplinary
traditions. However, here are some general guidelines to use as a possible reference
point:

o Weak bivariate associations (η2 < .10)

o Weak to Moderate bivariate associations (η2 = .10 to .30)

o Moderate to Moderately Strong bivariate associations (η2 = .30 to .50)

o Strong bivariate associations (η2 > .50)

Practice Applications for Learning Objective 3 (LO3):


Answer the following questions about the calculation and interpretation of the correlation ratio
(η2) in ANOVA:

Q1. Given the following sources of variation (SSTotal = 100; SSBetween= 80; and SSWithin= 20),
what is the numerical value of the correlation ratio (η2)?
A. η2 = .80
B. η2 = .60
C. η2 = .40
D. η2 = .20


Q2. A study of gender differences in length of prison sentences reports a correlation ratio of
.45 (η2 = .45). Which of the following is the correct interpretation of this correlation ratio?
A. 45% of the variation in the length of prison sentences is explained by the convicted
offender’s gender.
B. 45% of the variation in the convicted offender’s gender is explained by their length
of prison sentence.
C. Errors in predicting the convicted offender’s gender on the basis of their prison
sentence are reduced by 45% when we use the mean gender to predict prison
sentences rather than grand mean.



Q3. If most of the variation in a dependent variable is within-group variation, the size of the
correlation ratio (η2) will be:
A. small
B. moderate
C. large


Q4. As a general “rule of thumb”, a correlation ratio between the values of .10 and .30 (η2 =
.10 to .30) would indicate:
A. a weak bivariate relationship between the two variables
B. a weak to moderate bivariate relationship between the two variables
C. a moderate to strong bivariate relationship between the two variables
D. a strong bivariate relationship between the two variables

Correct Answers: Q1(A), Q2(A), Q3(A), and Q4(B).


LO4: Conduct and evaluate formal tests of hypotheses about differences in
group means

The formal method for hypothesis testing when using ANOVA is similar to the approach used in
our previous chapters. These tests involve (1) establishing a null hypothesis [Ho], (2) looking at a
sampling distribution that describes rare and common outcomes if the Ho is true, (3) converting
raw scores into standardized scores, (4) comparing observed and expected values of the test
statistic, and (5) reaching a decision about whether to reject or not reject the null hypothesis.

Compared to the z-, t-, and chi-square tests described in previous chapters, ANOVA hypothesis
testing is different in the following ways:
• The null hypothesis (Ho) is stated as the equality of means across two or more groups
(e.g., Ho: μ1 = μ2 = μ3 = μ4 ... = μGrand Mean). Similar to other tests, however, we use our
sample means (x̄1, x̄2, x̄3, x̄4 ...) as estimates of these unknown population means.

F-distribution: The sampling distribution used in ANOVA to define rare and common
outcomes when testing the null hypothesis of equal means across groups

• The comparative sampling distribution is the F-distribution (rather than a normal curve,
t-distribution, or a chi-square distribution). This F-distribution is actually the ratio of
two chi-square distributions.


• Both the observed sample values and expected values are converted into standardized
F-ratios (rather than z-, t-, or χ2-values). Dividing the components of between- and
within-group variation by their respective degrees of freedom standardizes these F-
ratios. Similar to other types of hypothesis testing, the researcher establishes the
significance level (e.g., α = .05) of the test to identify the critical value of the expected
F-ratio necessary to reject the null hypothesis.

So, how do you actually go about testing this null hypothesis of equal means in ANOVA? Using
the example of number of prior arrests and offender type, the basic steps of hypothesis testing
when using ANOVA are summarized below:

1. Develop the null hypothesis about the population means.
Ho: μ Murder = μ Petty Thief = μ Drug Offender = μ Grand Mean

2. Draw a random sample of cases and compute the sample’s overall mean and group
means (see the table below to identify the particular data used in these calculations).
x̄Overall (Grand Mean) = 7; x̄Murder = 1; x̄Petty Thief = 5; x̄Drug Offender = 15

Individual    Prior Arrests    Offender Type     Group Mean
     1              0          Murderer
     2              1          Murderer          x̄M = 1
     3              1          Murderer
     4              2          Murderer
     5              4          Petty Thief
     6              5          Petty Thief       x̄PT = 5
     7              5          Petty Thief
     8              6          Petty Thief
     9             10          Drug Offender
    10             15          Drug Offender     x̄DO = 15
    11             15          Drug Offender
    12             20          Drug Offender
Overall Mean = 7

3. Compute the total variation (SSTotal), between-group variation (SSBetween), and within-
group variation (SSWithin) from the sample data.
SSTotal = ∑ (xi − XGM )2 = 470
SSBetween = ∑ n(xgroup − XGM )2 = 416
SSWithin = ∑ (xi − XGroup )2 = 54



4. Standardize components of the total variation (SSBetween and SSWithin) by dividing them by
their degrees of freedom (df).

The degrees of freedom for between-group variation is defined by k -1 (where k = the
number of categories for the independent variable). The degrees of freedom for the
within-group component is defined as n - k (where n is the total sample size).

When the between-group and within-group variation are divided by their respective
degrees of freedom, the results are called the Mean Sum of Squares Between Groups
(MSBetween) and the Mean Sum of Squares Within Groups (MSWithin). These concepts are
simply measures of the average variability of scores among the means (MSBetween) and the
average variability of individual scores from their mean scores (MSWithin).

Mean Sum of Squares Between Groups (MSBetween): The average variation between the
group means

Mean Sum of Squares Within Groups (MSWithin): The average variation of individual values
from their group means

When there is a strong relationship between the
independent and dependent variables, the average between-group variation will be far
greater than the average within-group variation. When there is a weak relationship
between the independent and dependent variables, the average within-group variation
will be of comparable value or greater than the average between-group variation.

The calculations of MSBetween and MSWithin for explaining the variation in prior arrests on the basis of differences in offender type are shown below:

MSBetween = SSBetween / (k − 1) = Σ n(x̄group − x̄GM)² / (k − 1) = 416 / 2 = 208

MSWithin = SSWithin / (n − k) = Σ(xi − x̄group)² / (n − k) = 54 / 9 = 6

5. Calculate the ratio of the mean sum of squared differences between the group means
(MSBetween) and the mean sum of squared differences within the group means (MSWithin).

The ratio of MSBetween to MSWithin is called the F-Ratio. It is similar to the standardized scores (z-, t-, and χ²-scores) used in other types of hypothesis testing. Based on the observed sample values, the observed F-Ratio and its value in this example are shown below:

F-Ratio: The ratio of the average sum of squared differences in the between- and within-group variation. The size of this observed ratio forms the basis for testing hypotheses about mean differences when using the F-sampling distribution.

Fobs-Ratio = MSBetween / MSWithin = 208 / 6 = 34.67
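The same arithmetic can be scripted. A minimal Python sketch using the SS values and sample sizes from this example:

```python
# Standardize the sums of squares by their degrees of freedom and form
# the F-ratio (values taken from the worked example above).
ss_between, ss_within = 416.0, 54.0
k, n = 3, 12  # number of groups, total sample size

ms_between = ss_between / (k - 1)  # 416 / 2 = 208
ms_within = ss_within / (n - k)    # 54 / 9 = 6
f_obs = ms_between / ms_within     # observed F-ratio

print(round(f_obs, 2))  # 34.67
```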


6. Establish the critical value of the expected F-ratio that defines rare and common
outcomes in an F-distribution under the null hypothesis of no differences in the means.

The particular expected value of the F-Ratio in this type of hypothesis testing is found by:

• Looking at a Table of the F-Distribution for the selected alpha level (e.g., α = .05,
α = .01)

• Determining the degrees of freedom (df) associated with MSBetween (df = k -1) and
MSWithin (df = n - k), and

• Finding the expected F-Ratio at the intersection of these two degrees of freedom
values. An abbreviated F-Distribution Table (for only a select number of degrees
of freedom and α = .05) is below.

In our example of 12 individuals (n = 12) and 3 types of offenders (k = 3), the F-
Distribution Table indicates that our expected F-value (α = .05, df = 2, 9) is 4.26 (see
yellow shaded cell in the table). Under these conditions, any observed F-ratio that
exceeds this expected F-value of 4.26 will lead to the rejection of the null hypothesis of
no differences in the number of prior arrests between groups. In other words, an
observed F-ratio of 4.26 or larger would be a rare outcome if the null hypothesis is true.

Critical F-Values for the F-Distribution (select values, α = .05)
df (n−k) \ df (k−1):    1      2      3      4      5      6      7      8      9
1 161 199 215 224 230 234 237 239 241
2 18.5 19.0 19.2 19.3 19.3 19.3 19.4 19.4 19.4
3 10.1 9.55 9.28 9.12 9.01 8.94 8.89 8.85 8.81
4 7.71 6.94 6.59 6.39 6.26 6.16 6.09 6.04 6.00
5 6.61 5.79 5.41 5.19 5.05 4.93 4.88 4.82 4.77
6 5.99 5.14 4.76 4.53 4.39 4.28 4.21 4.15 4.10
7 5.59 4.74 4.35 4.12 3.97 3.87 3.79 3.73 3.68
8 5.32 4.46 4.07 3.84 3.69 3.58 3.50 3.44 3.39
9 5.12 4.26 3.86 3.63 3.48 3.37 3.29 3.23 3.18
10 4.96 4.10 3.71 3.48 3.33 3.22 3.13 3.07 3.02
15 4.54 3.68 3.29 3.06 2.90 2.79 2.71 2.64 2.59
25 4.24 3.38 2.99 2.76 2.60 2.49 2.40 2.34 2.28
50 4.03 3.18 2.79 2.56 2.40 2.29 2.20 2.13 2.07
75 3.97 3.12 2.73 2.49 2.34 2.22 2.13 2.06 2.01
100 3.94 3.09 2.70 2.46 2.30 2.19 2.10 2.03 1.97
500 3.86 3.01 2.62 2.39 2.23 2.12 2.03 1.96 1.90
1,000 3.85 3.00 2.61 2.38 2.22 2.11 2.02 1.95 1.89
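Critical F-values like those in the table can also be obtained from software. A sketch using SciPy's F-distribution (this assumes SciPy is available; the values come from the distribution itself, not the printed table):

```python
# Look up the critical F-value for alpha = .05, df = (2, 9) from the
# F-distribution instead of the printed table (requires SciPy).
from scipy.stats import f

alpha, k, n = 0.05, 3, 12
f_crit = f.ppf(1 - alpha, dfn=k - 1, dfd=n - k)

print(round(f_crit, 2))  # 4.26
```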


7. Compare the observed F-ratio and expected F-value to reach a decision about whether to
reject or not reject the null hypothesis.

In this example, our observed F-ratio of 34.67 far exceeds the expected F-value of 4.26, so we confidently reject the null hypothesis of equal means across groups. Based on our sample, the average murderer (x̄Murderer = 1) has far fewer prior arrests than both the average petty thief (x̄Petty Thief = 5) and the average drug offender (x̄Drug = 15). The F-test confirms that these differences between group means are statistically significant. Thus, we can reject the null hypothesis of equal means in the number of prior arrests for different offender types.

8. Construct an ANOVA summary table that presents the major statistical information used
to test the null hypothesis.

The value of an ANOVA summary table is that it provides a clear and concise summary
of (1) the basic partitioning of the total variation into its between/within- group
components and (2) the results of the hypothesis test. The notation in the table below
of the α probability (p < .05) indicates that these differences in group means are
statistically significant, occurring less than 5 times out of 100 by chance alone if the null
hypothesis were true. If the differences in group means were not statistically significant,
the proper notation in the ANOVA summary table would be p > .05, indicating these
observed differences are relatively common because they would occur by chance alone
more than 5 times out of 100. Similar notations and interpretations apply when other
significance levels are selected by the researcher (e.g., α = .01, α = .001).

ANOVA Summary Table

Source of Variation    Sum of Squares    df    Mean Sum of Squares    F-Value    Probability (α)
Between-Group               416.0         2          208.0             34.67       p < .05
Within-Group                 54.0         9            6.0
Total                       470.0        11



9. Summarize the ANOVA results of the study.

The results of our analysis of the variance in the number of prior arrests for different
groups of offenders include the following:


• Based on this random sample, murderers (x̄Murderer = 1) have far fewer prior arrests than both petty thieves (x̄Petty Thief = 5) and drug offenders (x̄Drug = 15).

• Over 88% of the variation in the number of prior arrests is explained by the type
of offender.

• The differences in mean number of arrests for each type of offender are statistically significant (F(2, 9 df) = 34.67, p < .05).
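The entire hypothesis test can be reproduced with SciPy's one-way ANOVA routine (assuming SciPy is available; the first murderer's arrest count of 0 does not appear in the excerpt and is inferred from the reported group mean):

```python
# Reproduce the full hypothesis test with SciPy's one-way ANOVA.
# NOTE: the first murderer's arrest count (0) is inferred from the
# reported group mean of 1; the remaining values appear in the example.
from scipy.stats import f_oneway

murderers = [0, 1, 1, 2]
petty_thieves = [4, 5, 5, 6]
drug_offenders = [10, 15, 15, 20]

res = f_oneway(murderers, petty_thieves, drug_offenders)
print(round(res.statistic, 2), res.pvalue < 0.05)  # 34.67 True
```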

Practice Applications for Learning Objective 4 (LO4):


Answer the following questions about the calculation and interpretation of the F-ratio for
testing hypotheses in ANOVA:

Q1. Given the following sources of variation (SSTotal = 100; SSBetween= 80; and SSWithin= 20)
and sample statistics (k = 3 groups, n= 28), what is the numerical value of the
MS Between?
A. MS Between = 80
B. MS Between = 60
C. MS Between = 40
D. MS Between = 20


Q2. Given the following sources of variation (SSTotal = 100; SSBetween= 80; and SSWithin= 20)
and sample statistics (k = 3 groups, n= 28), what is the numerical value of the MS Within?
A. MS Within = .80
B. MS Within = .60
C. MS Within = .40
D. MS Within = .20


Q3. Given the following sources of variation (SSTotal = 100; SSBetween= 80; and SSWithin= 20)
and sample statistics (k = 3 groups, n= 28), what is the numerical value of the observed
F-Ratio?
A. Observed F-Ratio = 80.0
B. Observed F-Ratio = 60.0
C. Observed F-Ratio = 50.0
D. Observed F-Ratio = 20.0



Q4. Given the following sources of variation (SSTotal = 100; SSBetween= 80; and SSWithin= 20)
and sample statistics (k = 3 groups, n= 28), what is the numerical value of the expected
F-Ratio to reject the null hypothesis at a .05 significance level (α = .05)?
A. Expected F-Ratio = 4.24
B. Expected F-Ratio = 3.38
C. Expected F-Ratio = 3.29
D. Expected F-Ratio = 3.06


Q5. If your obtained F-ratio is 14.5 and your expected F-ratio is 3.38, will you reject or not
reject the null hypothesis?
A. reject the null hypothesis that the group means are the same.
B. do not reject the null hypothesis that the group means are the same.
C. not enough information is provided.

Correct Answers: Q1(C), Q2(A), Q3(C), Q4(B), and Q5(A).





Review

Use these reflection questions and directed activities to help master the material presented in
this chapter.

What?
Can you define the critical concepts covered in this chapter? Describe each of these concepts
in your own words and give at least two examples of each.
• ANOVA
• Partitioning variance
• Total variation
• Between-group variation
• Within-group variation
• Correlation ratio (η²)
• F-distribution and hypothesis testing
• Mean sum of squares between groups
• Mean sum of squares within groups
• F-ratio


How?

Answer the following questions and complete the associated activities.
• How is variability partitioned when calculating an ANOVA statistic? Describe the different
components.
• How can visual methods be used to assess differences between groups? Describe the steps
involved.
• How is the correlation ratio calculated? Describe the steps involved.
• How is an ANOVA computed? Describe the steps involved.

When?

Answer the following questions.
• When should we use ANOVA – what types of variables are appropriate for this analysis?
• When are differences between groups considered to be weak, moderate, or strong?
• When is a correlation ratio value considered weak, moderate, or strong?
• When should the null hypothesis be rejected in an ANOVA analysis?
• When is a bivariate relationship considered weak, moderate, strong, or perfect?
• When should the null hypothesis be rejected in an F-test?

Why?

Think about the material you learned in this chapter and consider the following question.
• Why does knowing this material about how to assess the variation in a dependent variable that is explained by an independent variable help you evaluate the accuracy of criminological research findings and identify distorted claims and misuses of data by other researchers, government agencies, and the mass media?



Analyzing Criminological Data Chapter 12

CHAPTER 12:
Assessing Relationships Between Quantitative Variables

This chapter describes visual and formal approaches to assess the linear relationship between
two quantitative variables. These approaches include the use of (1) scatterplots to visually
represent your data and (2) the methods of correlation and linear regression analysis to assess
the nature and strength of these bivariate relationships. Understanding the concepts and
learning objectives in this chapter will improve your analytical skills for evaluating both
relational and causal research questions in criminology.

Concepts:
• Scatterplots
• Correlation coefficient (r)
• Linear regression analysis
• Linear regression equation (Y = a + bX)
• Y-intercept (a)
• Unstandardized regression coefficient (b)
• Ordinary Least Squares (OLS)
• Coefficient of determination (R²)

Learning Objectives (LO):
LO1: Assess the strength and direction of bivariate relationships using scatterplots
LO2: Calculate and interpret bivariate correlation and linear regression analyses
LO3: Assess the explanatory power of linear regression models
LO4: Demonstrate the logic in testing hypotheses about the coefficient of determination and regression coefficients

Introduction

In the previous chapters, you learned how to evaluate the nature and statistical significance of
two types of bivariate associations. First, you learned how to assess relationships between two
qualitative variables using contingency tables and the chi-square statistic. Second, you learned
how to assess relationships between a categorical independent variable and a quantitative
dependent variable using the ANOVA statistic. These analyses allow us to formally test null
hypotheses that there is no relationship between two variables.

In this chapter, you will learn how to assess another type of bivariate relationship. We describe
the principles, concepts, and statistical techniques used to assess the nature and strength of
relationships between two quantitative variables. The techniques presented are most
appropriate for variables measured on interval/ratio scales. As in previous chapters, you will
learn how to use visual methods to examine data, conduct formal statistical tests of null
hypotheses, and interpret the results of these analyses.


We begin with a discussion of how to evaluate the strength and direction of quantitative
bivariate relationships using visual techniques.

LO1: Assess the strength and direction of bivariate relationships using
scatterplots

The nature and magnitude of relationships between two quantitative variables can be examined by constructing a scatterplot. A scatterplot (also called a scattergram) is a visual display of the joint distribution of data points as coordinates on an X-Y graph.

Scatterplot: A visual display of the relationship between two quantitative variables

It is essential to first construct and inspect scatterplots when analyzing quantitative bivariate relationships. The two variables in a scatterplot are referred to as variable X (the independent variable in causal relationships) and variable Y (the dependent variable in causal relationships). Y variable values are plotted along the vertical axis (Y-axis) and X variable values are plotted along the horizontal axis (X-axis).

Four crucial pieces of information can be obtained by simply looking at bivariate scatterplots:

• Relationship Direction. Variables X and Y can be related in a positive direction. This
means that as the values of variable X increase, the values of variable Y also increase.
Similarly, as the values of variable X decrease, the values of variable Y decrease. In other
words, variables X and Y increase or decrease in the same direction if they share a
positive relationship. If variables X and Y are related in a negative direction, then the
values of X and Y move in opposite directions (as X increases, Y decreases).

A scatterplot indicates a positive association when the scattering of data points
stretches from the lower left quadrant to the upper right quadrant. A negative
association is revealed when the scattering of data points stretches from the lower right
quadrant to the upper left quadrant of the graph.

These visual patterns of positive and negative relationships are shown in the two
scatterplots below.

Positive Relationship Negative Relationship


• Relationship Strength. If X and Y are strongly related, the scatterplot will show a relatively tight band of points (resembling a cigarette-like shape). If there is no relationship between the variables, the data points will resemble a shotgun blast pattern. These visual patterns for strong and weak relationships are shown in the scatterplots below.


Strong Relationship Weak Relationship



• Functional Form of the Relationship. Scatterplots show whether the bivariate
relationship is linear or non-linear. A linear relationship is one that can best be
represented by a straight line. There are several types of non-linear relationships (e.g., J-
shaped, S- shaped, U-shaped functions). The best description of the X-Y relationship
functional form, whether linear or non-linear, can be determined using a scatterplot.
Examples of different functional forms are presented below.

Linear Relationship Logarithmic Relationship Square (X2) Relationship


• Identify Outliers. Scatterplots reveal the presence of any extreme data points (outliers).
Outliers must be identified since they affect the strength of bivariate relationships and
the values of statistical tests. Outliers are the data points circled in the scatterplots
below.

Outliers that reduce the Outliers that reduce the
strength of positive relationships strength of negative relationships



The table below displays a scatterplot of the relationship between state high school dropout
rates (X) and associated state homicide rates (Y). This scatterplot reveals the bivariate
relationship’s direction, strength, functional form, and whether outliers are present.

Scatterplot of State Dropout Rates (X) and Homicide Rates (Y)

[Scatterplot: "Homicide Rate by Dropout Rate" — X-axis: Dropout Rate (%), 0 to 50; Y-axis: State Homicide Rate, 0 to 20]



The scatterplot shows that a strong, positive, and linear association exists between a state's high school dropout rate and its homicide rate. There do not appear to be any notable outliers. We can draw the following conclusions from the scatterplot:

• The claim of a strong association between homicide and dropout rates is supported by the tight band of data points on an approximate 45-degree angle.

• The pattern of data points moving from the lower left quadrant to upper right quadrant
indicates a positive relationship. This means that higher state high school dropout rates
are associated with higher state homicide rates (and lower dropout rates are associated
with lower homicide rates).

• The linear form of this relationship is indicated by a straight line that can be drawn
through these points. The data do not appear to reveal any clear alternative non-linear
form (e.g., J- or U-shaped patterns).

Although scatterplots are an important visual tool for assessing the association between two
quantitative variables, these visual representations are less useful in very large samples (n’s >
500). In large samples, scatterplots begin to resemble a random splattering of points even when
there is a strong relationship because of the sheer number of points plotted on the graph.
Regardless of sample size, however, the first step in evaluating relationships between two
quantitative variables is to look at scatterplots for preliminary insights about the nature and
magnitude of these relationships.

Practice Applications for Learning Objective 1 (LO1):


Answer the following questions about scatterplots and what information about the nature of the
relationship between two quantitative variables can be derived from them:

Q1. What is the direction of the bivariate
relationship between the two quantitative
variables in this scatterplot?

The correct answer is:
A. positive relationship
B. negative relationship
C. unable to tell from the scatterplot



Q2. What is the strength of the bivariate
relationship between the two quantitative
variables in this scatterplot?

The correct answer is:
A. no linear relationship
B. weak linear relationship
C. strong linear relationship




Q3. What is the strength of the bivariate
relationship between the two quantitative
variables in this scatterplot?

The correct answer is:
A. no/weak linear relationship
B. moderate linear relationship
C. strong linear relationship




Q4. What is the strength and direction of the
bivariate relationship between the two
quantitative variables in this scatterplot?

The correct answer is:
A. weak negative relationship
B. moderate negative relationship
C. moderate positive relationship



Q5. What effect does the circled outlier have on
the strength or direction of the bivariate
relationship between the two quantitative
variables in this scatterplot?
The correct answer is:
A. it reduces the strength of the positive linear relationship
B. it reduces the strength of the negative linear relationship
C. it increases the strength of the positive linear relationship


Correct Answers: Q1(B), Q2(C), Q3(A), Q4(C), and Q5(A).




LO2: Calculate and interpret bivariate correlation and linear regression analyses

Correlation and linear regression analyses are the statistical methods used to evaluate bivariate
quantitative relationships. A correlation analysis provides a summary measure of the bivariate
association between these variables. It helps us to answer both relational and causal research
questions. In contrast, a regression analysis evaluates the nature and magnitude of the
influence of an independent variable on a quantitative dependent variable. It specifically helps
us to answer causal research questions.

1. The Correlation Coefficient (r)



Pearson’s product-moment correlation coefficient (symbolized by the letter “r” and often simply referred to as the correlation coefficient) is a measure of the linear association between two quantitative variables. The correlation coefficient is the measure of association most commonly used in criminological research. It’s a symmetrical measure [rxy = ryx] and its value ranges from −1.00 to +1.00. A correlation value of 0 indicates that there is no linear relationship between the two variables, while a value of −1.00 indicates a perfect negative linear relationship and a value of +1.00 indicates a perfect positive linear relationship.

Correlation Coefficient: A summary measure of the direction and strength of the linear association between two quantitative variables

The correlation coefficient measures the covariation between two continuous variables
relative to their variances. A large correlation is found when two variables covary together


more than they vary among each other. A weak correlation, in contrast, is found in situations
of wide variability within each variable relative to their amount of covariation.

The correlation coefficient is computed using the following formula:

    rxy = covxy / √(s²x · s²y),

where covxy is the covariation between the variables x and y, and s²x and s²y are the variances in x and y, respectively.

The steps for computing a correlation coefficient are described below.

Steps for computing a correlation coefficient (rxy)

1. Compute the means for each variable:
   x̄ = Σx / n;  ȳ = Σy / n

2. Take deviations of individual scores from their means:
   (x − x̄);  (y − ȳ)

3. Square these deviations from the means of X and Y:
   (x − x̄)²;  (y − ȳ)²

4. Multiply together the deviation scores of X and Y:
   (x − x̄)(y − ȳ)

5. Compute the covariance of X and Y:
   covxy = Σ(x − x̄)(y − ȳ) / (n − 1)

6. Compute the variances of X and Y:
   s²x = Σ(x − x̄)² / (n − 1);  s²y = Σ(y − ȳ)² / (n − 1)

7. Compute the correlation coefficient (rxy):
   rxy = covxy / √(s²x · s²y)


The table below provides hypothetical sample data. There are five cases in this sample, as
indicated by the five values of X and five values of Y. The table also provides examples of the
basic calculations needed to compute the mean, variance, covariance, and correlation formulas
(steps 1 through 7 above).

Sample Data and Basic Calculations for Correlation Coefficient

  x      y    (x − x̄)  (y − ȳ)  (x − x̄)²  (y − ȳ)²  (x − x̄)(y − ȳ)
 10      4       0        1        0         1           0
  8      3      -2        0        4         0           0
 12      2       2       -1        4         1          -2
  6      5      -4        2       16         4          -8
 14      1       4       -2       16         4          -8

Σ = 50  Σ = 15  Σ = 0   Σ = 0   Σ = 40    Σ = 10      Σ = -18


These calculations provide the basis for obtaining the correlation coefficient (r). The
computations described in steps 1, 5, 6, and 7 are depicted below.

Computations:

1. Means: x̄ = Σx / n = 50 / 5 = 10;  ȳ = Σy / n = 15 / 5 = 3

2. Variances: s²x = Σ(x − x̄)² / (n − 1) = 40 / 4 = 10;  s²y = Σ(y − ȳ)² / (n − 1) = 10 / 4 = 2.5

3. Covariance: covxy = Σ(x − x̄)(y − ȳ) / (n − 1) = -18 / 4 = -4.5

4. Correlation: rxy = covxy / √(s²x · s²y) = -4.5 / √((10)(2.5)) = -4.5 / 5 = -.90

Interpretation:
There is a strong inverse (negative) relationship between X and Y. As the X value
increases, the Y value decreases.
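Steps 1 through 7 can be verified with a short script. A minimal Python sketch using the five-case sample data above:

```python
# Pearson's r for the five-case sample, following steps 1-7 above.
x = [10, 8, 12, 6, 14]
y = [4, 3, 2, 5, 1]
n = len(x)

mx, my = sum(x) / n, sum(y) / n  # means: 10 and 3

# covariance and variances (dividing by n - 1, as in steps 5-6)
cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / (n - 1)  # -4.5
var_x = sum((xi - mx) ** 2 for xi in x) / (n - 1)  # 10
var_y = sum((yi - my) ** 2 for yi in y) / (n - 1)  # 2.5

r = cov / (var_x * var_y) ** 0.5  # -4.5 / 5
print(round(r, 2))  # -0.9
```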


It can be somewhat difficult to determine whether the correlation coefficient value indicates a
weak, moderate, or strong bivariate correlation. This assessment depends on disciplinary


tradition. For example, psychology and criminology often interpret correlation coefficient
values differently. This assessment also depends on sample size since it is more difficult to
obtain a “large” correlation value with very large samples (n’s > 500). However, for moderate to
large sample sizes (n’s > 50 and n’s < 500) in criminological research, the following general
guidelines can be used to interpret the correlation coefficient (r):

• Perfect Relationship: r = ± (1.00)


• Strong Relationship: r = ± (.70 to .99)
• Moderately Strong Relationship: r = ± (.50 to .69)
• Moderate Relationship: r = ± (.35 to .49)
• Moderately Weak Relationship: r = ± (.20 to .34)
• Weak Relationship: r = ± (.10 to .20)
• No Relationship: r = .00
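These guidelines can be expressed as a small helper function. A sketch (the boundaries follow the list above; since .20 appears in two ranges in the text, it is treated here as "moderately weak", and values between 0 and .10 are grouped with "weak"):

```python
# Classify a correlation coefficient using the chapter's guidelines.
# Boundary choices for .20 and for values below .10 are assumptions,
# since the printed ranges overlap slightly.
def interpret_r(r: float) -> str:
    a = abs(r)
    if a == 0:
        return "no relationship"
    if a == 1:
        return "perfect"
    if a >= 0.70:
        return "strong"
    if a >= 0.50:
        return "moderately strong"
    if a >= 0.35:
        return "moderate"
    if a >= 0.20:
        return "moderately weak"
    return "weak"

print(interpret_r(-0.90))  # strong
```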

As described earlier, a scatterplot provides a visual display of the nature and magnitude of a
quantitative bivariate association. By looking at the scatterplots below, we can estimate the
direction and relative size of the correlation coefficient by the patterning of the data points.
You should be able to tell by the clustering of the data points which graph indicates (1) a strong
negative linear relationship (r ≈ -.70 to -.80) and (2) a weak linear relationship (r ≈ 0).

Strong Correlation (r= - .73) Weak Correlation (r = .02)

While the correlation coefficient is the most common measure of association for quantitative
variables, it should only be used if the following assumptions are met by the data.

• The mean is an appropriate descriptive statistic. The correlation coefficient is based on


the principle of covariance—it uses the cross-product of the deviations from the means of each variable, (x − x̄)(y − ȳ). If means cannot be calculated for the data, then a


correlation coefficient should not be used. The mean is an appropriate measure of


central tendency only when (1) both variables are measured at an interval or ratio level
of measurement and (2) the scores on the variables are normally distributed (e.g., there
are no extreme outliers).

• Variation exists within both variables (x, y). If variation does not exist in both variables,
the variance of one variable and its cross-product with the other variable will be
0. Generally, more variation in scores creates a greater possible magnitude of the
correlation coefficient.

• The relationship between X and Y is linear. Linear means that, regardless of the level of X, unit changes in X correspond to the same magnitude of change in Y (both change together at a constant rate). Both correlation and regression
coefficients (described below) are used to assess the nature and magnitude of
relationships between continuous variables. However, some bivariate relationships are
best represented as non-linear functions. Correlations computed for non-linear bivariate
relationships will usually underestimate the magnitude of these associations. The gravity
of violating this assumption depends on the amount and nature of the non-linearity.


2. Linear Regression Analysis

Regression analysis is designed to assess the influence of a quantitative independent variable on a quantitative dependent variable. It evaluates the linear effect of a unit change in the independent variable on a continuous dependent variable. Similar to the correlation ratio (η²) in ANOVA, the coefficient of determination (R²) is used in regression analysis to assess the predictive accuracy or “goodness of fit” of the overall regression model to the observed data (i.e., how much variation in the dependent variable [Y] is accounted for by the independent variable [X]).

Linear regression analysis: A statistical procedure that is designed to assess the effect of an independent variable (X) on a dependent variable (Y)

A. General Form of the Linear Regression Equation

Linear equations are used to estimate relationships between two continuous


variables. However, there is almost always some error in predicting a value of Y (dependent
variable) based on X (independent variable). These errors are important because they represent
inaccuracies in our ability to perfectly predict Y from X. The larger these errors, the less
confidence we have in our ability to use X to predict Y.

The ultimate goal of bivariate regression analysis is to estimate a true model of the relationship
between X and Y in the population by using sample data. Like the statistical inferences
discussed in previous chapters, we use sample statistics, in this case, a linear regression
equation, to estimate unknown population parameters.


The true form of the linear regression equation is expressed by the following formula about population parameters:

    Y = α + βX, where:

    α = the Y-intercept (i.e., the point where the regression line crosses the y-axis), and

    β = the slope (i.e., the numerical effect of a unit change in X on the value of Y).

Linear regression equation: A summary specification of the linear relationship between an independent variable (X) and a dependent variable (Y)

When we estimate this population regression equation with data from a random sample, we substitute our sample estimates for these population parameters and use the following linear regression equation:

    Y = a + bX + e, where:

    a = the sample estimate of α (the Y-intercept),

    b = the sample estimate of β (the slope), and

    e = the error term (also called the residual error) in predicting Y from X with sample data.

Y-intercept (α): The point on a graph where the regression line crosses the y-axis (the value of Y when X is 0)

Slope (β): Also called the regression coefficient, it represents the numerical change in the value of Y for a unit change in X

When sample data are used to estimate the population slope and Y-intercept, the error term
(e) represents the differences between the observed value of Y and the predicted value of Y
based on the estimated regression equation. Formally, this error component is expressed by
the following equation:

    e = Yi − Ŷ, where:

    Yi = the observed value of Y for any individual observation in the data, and

    Ŷ = the predicted value of Y based on the estimated regression equation Ŷ = a + bX.

If X perfectly predicts Y, the error term associated with each Y score will be equal to 0 and there
will be no error in predicting Y from X. However, in virtually every application of regression
analysis, there will be some error in prediction. The primary sources of these errors in
predicting Y from X include the following:


• Sampling error: errors that stem from using samples and not an entire
population.
• Measurement error: errors that derive from using unreliable or invalid
measures.
• Misspecification errors: errors due to (1) excluding other relevant variables
that are likely to influence the value of Y (the dependent variable) and (2)
assuming that a linear function is appropriate when a non-linear functional
form best represents the impact of X on Y.

When we conduct a regression analysis, the statistical goal is to select values of the slope (b) and the y-intercept (a) that minimize the errors (e) in predicting the values of Y from the values of X. The estimation procedure called Ordinary Least Squares (OLS) is the method widely used in regression analysis because it estimates values of the slope and y-intercept that minimize this error variance (i.e., it minimizes the sum of these squared errors [min Σe²] in predicting Y from X).

Ordinary Least Squares (OLS): A procedure used in regression analysis that generates numerical estimates of the y-intercept (a) and slope (b) which minimize the sum of the squared errors in predicting Y from X
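For the bivariate case, the OLS estimates have a closed-form solution: b = covxy / s²x and a = ȳ − b·x̄. A minimal Python sketch (it reuses the five-case data from the correlation example purely for illustration; these are not the data plotted in the graphs below):

```python
# OLS estimates for a bivariate regression via the closed-form solution
# b = cov(x, y) / var(x) and a = y-bar - b * x-bar, which minimizes the
# sum of squared prediction errors. The five-case data from the earlier
# correlation example are reused here purely for illustration.
x = [10, 8, 12, 6, 14]
y = [4, 3, 2, 5, 1]
n = len(x)

mx, my = sum(x) / n, sum(y) / n
b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
     / sum((xi - mx) ** 2 for xi in x))  # slope
a = my - b * mx                          # y-intercept

# residuals e = Yi - (a + b * Xi); OLS guarantees they sum to ~0
residuals = [yi - (a + b * xi) for xi, yi in zip(x, y)]
print(a, b)  # 7.5 -0.45
```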

The table below shows two examples of linear regression equations that are estimated by the
OLS method. In these examples, OLS regression analyses were conducted to derive estimates of
the slope (b) and the y-intercept term (a) that resulted in the "best fitting" linear equation (the
red line) for each set of sample data. The linear equations used to construct these particular
regression lines are shown at the top of each table.

Estimation of Two OLS Regression Equations


Y = 20 + .59 X Y = 38 + .022 X


The following major principles are revealed in the regression lines that intersect the data points
in the scatterplots above:

• The red lines in each graph represent the results of estimating the linear regression
equation using the OLS method. The regression line can be superimposed on any
scatterplot by simply (1) selecting several values that fall along the X-axis, (2) plugging
these values into the estimated linear regression equation to solve for Y, and (3)
plotting these points on the scatterplot and then drawing a straight line between them
to connect the data points.

• Errors in predicting Y from particular values of X (e = Yᵢ − Ŷ) are shown in the
scatterplots by the distance between the regression line (red lines) and the black dots
(.) representing individual observed data points. The blue-arrow lines ( ----> ) illustrate
the magnitude of errors in prediction for particular data points in the scatterplots.

• The greater the magnitude of the errors around the regression line, the less useful X is in
explaining variation in Y. In these two examples, variable X is a far better predictor of Y
in the graph on the left because the errors around the line are less dramatic. Thus, the
effect of X on Y is far stronger in the example on the left than on the right.

• Notice that the location and the slope of each regression line serve to minimize the
overall errors in predicting the Y-axis variable from the X-axis variable. This
minimization of the distance between all of the data points and the regression line is
the result of the OLS estimation procedure (i.e., OLS estimates the particular values of
the y-intercept and slope that minimize these errors).


A final point about the general form of the OLS regression equation is that the sample
estimates of the y-intercept (a) and slope (b) are strongly affected by extreme scores (i.e.,
outliers). For example, there are two extreme data points in the previous graph on the left.
These data points are circled. Notice that the regression line and its slope are actually pulled
toward these two influential points to minimize the magnitude of prediction errors for all data
points in the sample, including these outliers. However, if we removed either or both of these
particular data points, the regression line and its slope would be flatter. By removing these data
points, our substantive conclusions may change somewhat about the impact of the
independent variable (X) on the dependent variable (Y).

The impact of outliers and other influential points on estimates of the y-intercept term and the
slope is generally less dramatic when regression analyses are based on very large sample sizes
(n's > 1,000). However, regardless of sample size, it is always good practice to look at extreme
data points and assess their likely impact on statistical analyses.


B. How to Estimate the Unstandardized Regression Coefficient (b) and the Y-Intercept (a)

As mentioned in the previous section, OLS regression provides estimates of the slope (b) and
the y-intercept (a) for the linear equation Ŷ = a + bx.

The slope (b) represents the effect of a unit change in the X variable on the value of the Y
variable. For example, suppose you were interested in predicting the annual income of
homicide detectives (Y) on the basis of their years of experience (X). In the estimated
regression equation Ŷ = 50,000 + 6,000Xyears, the slope of 6,000 indicates that a unit increase
in experience (i.e., increasing one’s experience by 1 year) is predicted to lead to a $6,000
increase in income.

The slope may be presented in either unstandardized (i.e., the original measurement units for Y
and X) or standardized form. When the slope is expressed in its original units (as in this
example, where a 1-year increase in experience = a $6,000 increase in income), it is called the
unstandardized regression coefficient (symbolized by the letter b). In contrast, the
standardized regression coefficient (symbolized by the capital letter B or β) converts the value
of the slope to standardized units that range from -1.00 to +1.00.

Unstandardized regression coefficient (b): The slope in a regression equation when it is
expressed in the original units of a variable’s measurement.

In bivariate regression analyses, we focus on the unstandardized regression coefficient (b). In
regression analysis with multiple independent variables, we focus on the standardized
coefficients (B) to answer questions about the relative importance of different variables (e.g.,
which variable best predicts variation in the dependent variable). In this chapter, we will focus
on bivariate regression analyses and the unstandardized regression coefficient (b).

The unstandardized regression coefficient (b) in a bivariate analysis is calculated by using the
following formula:

b = covxy / s²x , where covxy = the covariance of x and y, and
s²x = the variance in x (the independent variable)


The y-intercept (a) represents the value of Y when the regression line crosses the Y-axis (i.e.,
when X = 0). In the regression equation Ŷ = $50,000 + $6,000Xyears, the y-intercept value of
50,000 indicates that the detective’s predicted income would be $50,000 if they had no police
experience at all (X = 0).


The y-intercept (a) in a bivariate regression analysis is calculated by using the following formula:

a = (Σy / n) − b(Σx / n) , where: Σy / n = the mean of the Y variable (ȳ),

b = the slope for predicting Y from X, and

Σx / n = the mean of the X variable (x̄)

Let’s analyze some sample data to illustrate how to actually calculate and interpret the slope
and y-intercept term in an OLS regression analysis. In this example, suppose you are interested
in the causal impact of increased post-graduate course work on security officers' scores on an
advanced placement exam. The table below shows the raw data for 10 security officers and the
calculations necessary to assess the influence of post-graduate credits (X) on their exam scores
(Y).


Computations for Regression Analysis
(Post Grad Credits (X) —> Advanced Placement Test Score (Y))
  
y    x    (y − ȳ)    (x − x̄)    (x − x̄)(y − ȳ)    (y − ȳ)²    (x − x̄)²    Ŷ    (Yᵢ − Ŷ)    (Yᵢ − Ŷ)²

30 2 -40 -8 320 1,600 64 38.4 -8.4 70.6

40 4 -30 -6 180 900 36 46.3 -6.3 39.7

55 5 -15 -5 75 225 25 50.3 4.7 22.1

65 8 -5 -2 10 25 4 62.1 2.9 8.4

70 10 0 0 0 0 0 70.0 0.0 0.0

75 12 5 2 10 25 4 77.9 -2.9 8.4

80 18 10 8 80 100 64 101.6 -21.6 466.6

90 14 20 4 80 400 16 85.8 4.2 17.6

95 12 25 2 50 625 4 77.9 17.1 292.4

100 15 30 5 150 900 25 89.7 10.3 106.1

700 100 0 0 955 4,800 242 700 0 1,032


Here are the calculations necessary to compute the OLS regression equation:

1. Find the mean of both y and x.

ȳ = Σy / n = 700 / 10 = 70        x̄ = Σx / n = 100 / 10 = 10

2. Find the variance of both y and x.

s²y = Σ(y − ȳ)² / (n − 1) = 4,800 / 9 = 533.3        s²x = Σ(x − x̄)² / (n − 1) = 242 / 9 = 26.9

3. Find the covariance of x and y.

covxy = Σ(x − x̄)(y − ȳ) / (n − 1) = 955 / 9 = 106.1

4. Divide the covariance by the variance of x to find the slope (b) and subtract the
product of b and the mean of x from the mean of y to find the y-intercept (a).

b = covxy / s²x = 106.1 / 26.9 = 3.9        a = ȳ − b(x̄) = 70 - 3.9 (10) = 31.0

5. Plug these numbers into the estimated regression equation (Ŷ = a + bx):

Ŷ = 31 + 3.9x

Interpretation: The slope in the regression equation Ŷ = 31 + 3.9x reveals that for every
1-unit increase in post-grad credits, officer test scores increase by 3.9 points. If a
security officer didn’t have any post-grad credits (i.e., x = 0), the y-intercept value
predicts that the officer would get a score of 31 points on this exam.
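The five computational steps above can also be traced in a few lines of Python. This is a sketch using the table's raw data; the variable names are my own, and exact arithmetic gives b ≈ 3.95 and a ≈ 30.5, which the text rounds to 3.9 and 31.

```python
# The five computational steps, traced in Python on the table's raw data.
x = [2, 4, 5, 8, 10, 12, 18, 14, 12, 15]       # post-grad credits (X)
y = [30, 40, 55, 65, 70, 75, 80, 90, 95, 100]  # exam scores (Y)
n = len(x)

mean_x, mean_y = sum(x) / n, sum(y) / n                      # step 1: means
var_x = sum((xi - mean_x) ** 2 for xi in x) / (n - 1)        # step 2: 242/9 = 26.9
cov_xy = sum((xi - mean_x) * (yi - mean_y)
             for xi, yi in zip(x, y)) / (n - 1)              # step 3: 955/9 = 106.1
b = cov_xy / var_x                                           # step 4: slope
a = mean_y - b * mean_x                                      # step 4: y-intercept
print(f"Y_hat = {a:.1f} + {b:.2f}x")                         # step 5
```

Run as written, this prints the unrounded estimates (Y_hat = 30.5 + 3.95x), which the text reports as Ŷ = 31 + 3.9x after rounding.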

C. Predicting Values of the Dependent Variable from the Estimated Linear Regression
Equation

Once the slope and y-intercept are calculated, the estimated regression equation (Ŷ = a + bx)
can be used to predict any value of the dependent variable (Y). For example, in the equation
for predicting a security officer’s exam score on the basis of their post-grad credits
(Ŷ = 31 + 3.9x), we can simply plug in values for x (number of post-grad credits) to derive the
expected score (Ŷ) for each of these following x values:


• x = 0 credits —> Ŷ = 31 + 3.9 (0) = 31
• x = 5 credits —> Ŷ = 31 + 3.9 (5) = 50.5
• x = 10 credits —> Ŷ = 31 + 3.9 (10) = 70
• x = 15 credits —> Ŷ = 31 + 3.9 (15) = 89.5
• x = 20 credits —> Ŷ = 31 + 3.9 (20) = 109
• x = 30 credits —> Ŷ = 31 + 3.9 (30) = 148

Although regression equations can be used to predict values, their accuracy depends on how
well the independent variable actually predicts the particular dependent variable. If the
prediction errors are large, there is no substantive value in interpreting these predicted values
(because the large errors indicate that the X variable is basically worthless in predicting the Y
variable). However, if prediction errors are relatively small, there is great substantive value in
interpreting the slope and the predicted values derived from estimated regression equations.
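As a sketch, this prediction step is a one-line function. It uses the text's rounded estimates (a = 31, b = 3.9); the function name `predict_score` is my own.

```python
# Predicting exam scores from the estimated equation Y_hat = 31 + 3.9x.
# Illustrative helper using the text's rounded coefficients.
def predict_score(credits, a=31.0, b=3.9):
    """Predicted advanced placement score for a given number of post-grad credits."""
    return a + b * credits

for credits in (0, 5, 10, 15, 20, 30):
    print(credits, predict_score(credits))  # reproduces the bullet-list values above
```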

Practice Applications for Learning Objective 2 (LO2):


Answer the following questions about calculation and interpretation of the results of correlation
and linear regression analyses:

Q1. Given the following information, what is the value of

the correlation coefficient (r) between variable X and Y?

The correct answer is:


A. r = + .68
B. r = + .12
C. r = + .02
D. r = - .12
E. r = -.68



Q2. What is your best guess of the value of the correlation
coefficient that would derive from the data in the
following scatterplot? The correct answer is:
A. r = + .95
B. r = + .70
C. r = + .10
D. r = - .10


E. r = -.70


Q3. What is your best guess of the value of the correlation
coefficient that would derive from the data in the
following scatterplot? The correct answer is:
A. r = + .80
B. r = + .65
C. r = + .10
D. r = - .50
E. r = -.80



Q4. Given the following information, what is the computed
value of the y-intercept term (a) in the regression
equation Y = a + bX?

The correct answer is:
A. a = + 1.60
B. a = + .80
C. a = + .15
D. a = - .80
E. a = - 1.60



Q5. Given the following information, what is the computed
value of the unstandardized regression coefficient (b)
in the regression equation Y = a + bX?



The correct answer is:
A. b = + 1.64
B. b = + .81
C. b = + .15
D. b = - .81
E. b = - 1.64

Q6. Which of the following graphs has the smallest error in predicting Y from X?


A. Graph A has the smallest error in predicting Y from X

B. Graph B has the smallest error in predicting Y from X

C. Graph C has the smallest error in predicting Y from X

D. Graph D has the smallest error in predicting Y from X



Q7. In the regression equation predicting a state’s crime rate per 100,000 on the basis of its
unemployment rate (Ycrime=300 + 200X%unemployed), what is the proper interpretation of the
y-intercept?

A. The crime rate is predicted to be 300 per 100,000 if there is no unemployment.

B. A 1% percentage point increase in unemployment increases the crime rate by 300 per
100,000 population.

C. A 1% percentage point increase in the crime rate increases by 300 per 100,000 the
unemployment rate.

Q8. In the regression equation predicting a state’s crime rate per 100,000 on the basis of its
unemployment rate (Ycrime=300 + 250X%unemployed), what is the proper interpretation of the
unstandardized regression coefficient?

A. The crime rate is predicted to be 300 per 100,000 if there is no unemployment.

B. A 1% percentage point increase in unemployment increases the crime rate by 250 per
100,000 population.

C. A 1% percentage point increase in the crime rate increases by 250 per 100,000 the
unemployment rate.

Q9. In the regression equation predicting the years of imprisonments on the basis of a person's
number of prior convictions (YYears = 1.5 + 2.0 XPrior Convictions), what is the predicted number
of years of imprisonment for an offender with 2 convictions?

A. 1.5 years
B. 2.0 years
C. 3.5 years
D. 5.5 years

Correct Answers: Q1(A), Q2(E), Q3(C), Q4(A), Q5(B), Q6(A), Q7(A), Q8(B), and Q9(D)







LO3: Assess the explanatory power of linear regression models

The term “goodness of fit” refers to how well an independent variable explains variation in a
dependent variable in regression analysis. It is measured by the magnitude of the errors in
predicting Y on the basis of X. Smaller errors in prediction produce better “goodness of fit” and
indicate a stronger linear relationship between X and Y.

There are two general ways of assessing the overall “goodness of fit” in a regression model: (1)
visual inspection of the bivariate scatterplot and (2) formal computations of the coefficient of
determination (R²). Each method is described below.

1. Visual Method for Assessing the Strength of Bivariate Relationships in a Regression
Analysis

Scatterplots provide a visual approach for assessing the nature and magnitude of bivariate
relationships between two quantitative variables. They allow us to make an initial evaluation of
the predictive power of X in explaining Y. The process used to visually assess the bivariate
relationship strength includes the following steps:

1. Construct a scatterplot with the Y variable on the vertical axis and the X variable on the
horizontal axis.

2. Estimate the regression coefficient (b) and y-intercept (a) in the linear regression
equation.

3. Draw the regression line on the scatterplot. Under the properties of OLS regression, the
slope and y-intercept of the regression equation will produce a straight line that
minimizes the sum of the squared errors in predicting Y from X. This line will be pulled
toward extreme data points on the scatterplot.

4. Visually inspect the clustering of data points around this regression line and look for one
of the following patterns:

• If the scatterplot shows no clustering of points along this line, there are large errors
in predicting Y from X, and values of X are not going to be very useful in predicting
values of Y.

• If there is loose clustering around the line (i.e., data points form a large football-like
shape), there is a weak to moderate relationship between X and Y.

• If these data points form a narrow band around the line, there are fewer prediction
errors, and the scatterplot indicates a stronger X-Y relationship.



The table below shows two different scatterplot examples. Notice that X is a better predictor of
Y in the scatterplot on the left because there is greater clustering of the data points around its
regression line. In contrast, the wider dispersion of data points from the regression line in the
other scatterplot shows far greater errors in predicting Y from the value of X.

[Scatterplots not reproduced] Left: X is a strong predictor of Y        Right: X is a weak predictor of Y



2. Formal Statistical Methods of Assessing the Strength of Causal Relationships in Regression
Analysis

The coefficient of determination (symbolized as R²) is the primary statistic for measuring the
strength of causal relationships in regression analysis. Similar to the correlation ratio (η²) in
ANOVA, the coefficient of determination in regression analysis represents the amount of
variation in a dependent variable that is explained by the independent variable(s). The value of
R² ranges from 0% (X has no impact on Y) to 100% (all of the variation in Y is explained by X).

Coefficient of determination (R²): A measure of the strength of the causal relationship
between two quantitative variables, representing the proportion of variation in a
dependent variable that is explained by an independent variable(s).

In the case of bivariate regression analysis, the coefficient of determination can be easily
computed by simply squaring the value of the correlation coefficient (i.e., R² = r²). It can also
be computed by partitioning the total variation in Y (called SSTotal) into variation explained by
the regression model (called SSRegression) and variation in Y that is error in predicting Y from X
(called SSError). Under this formulation, the coefficient of determination (R²) is computed by
the following ratio:




R² = SSRegression / SSTotal , where SSRegression = the variation in Y explained by the
regression equation and SSTotal = the total variation in Y.

It is also possible to compute R2 by applying the following formula:

R² = [ (1/n) Σ( (x − x̄)(y − ȳ) / (sx sy) ) ]²

where x̄ = mean of X; ȳ = mean of Y; sx = standard deviation of X; and
sy = standard deviation of Y.

General “rules of thumb” for interpreting the magnitude of the coefficient of determination are
similar to those for the correlation ratio (η²). In particular, for moderately large samples (n’s =
50 to 500), follow these general guidelines when interpreting the value of R²:

• weak causal relationship (R² < .10),

• weak to moderate causal relationship (R² = .10 to .30),

• moderate to moderately strong causal relationship (R² = .30 to .50), and

• strong causal relationship (R² > .50).
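Both routes to R² can be checked against the chapter's example data. The sketch below (variable names mine) computes R² as the squared correlation coefficient and as the sum-of-squares ratio; the two agree.

```python
# Two equivalent computations of R^2 for the chapter's example data:
# (1) square the correlation coefficient r; (2) SS_Regression / SS_Total.
x = [2, 4, 5, 8, 10, 12, 18, 14, 12, 15]
y = [30, 40, 55, 65, 70, 75, 80, 90, 95, 100]
n = len(x)
mean_x, mean_y = sum(x) / n, sum(y) / n

cov_xy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y)) / (n - 1)
s_x = (sum((xi - mean_x) ** 2 for xi in x) / (n - 1)) ** 0.5
s_y = (sum((yi - mean_y) ** 2 for yi in y) / (n - 1)) ** 0.5
r = cov_xy / (s_x * s_y)                      # correlation coefficient

b = cov_xy / s_x ** 2                         # OLS slope
a = mean_y - b * mean_x                       # OLS y-intercept
ss_total = sum((yi - mean_y) ** 2 for yi in y)
ss_error = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
r2_from_ss = (ss_total - ss_error) / ss_total

print(round(r ** 2, 3), round(r2_from_ss, 3))  # 0.785 0.785
```

Both methods return .785, the R² value reported for this example later in the chapter; this would be interpreted as a strong causal relationship under the guidelines above.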

Practice Applications for Learning Objective 3 (LO3):


Answer the following questions about visual and formal methods of assessing the strength of
bivariate relationships in a linear regression analysis:

Q1. What type of relationship between X and Y is indicated by a pattern of data points that are
tightly placed around the "best fitting" linear regression line?
A. weak relationship between X and Y
B. moderate relationship between X and Y
C. strong relationship between X and Y



Q2. Applying the principles of Ordinary Least Squares
(OLS), which color line is the “best fitting” regression line
for the data points in the following scatterplot?




The correct answer is:
A. Blue line
B. Red line
C. Yellow line



Q3. Applying the principles of Ordinary Least Squares
(OLS), which color line is the “best fitting” regression line
for the data points in the following scatterplot?




The correct answer is:
A. Blue line
B. Red line
C. Yellow line



Q4. If SSRegression = 40 and SSTotal = 100, what is the numerical value of the coefficient of
determination (R²)?
A. R2 = .20
B. R2 = .40
C. R2 = .60
D. R2 = .80

Q5. In a study of the influence of defendants’ prior arrest record on the length of their prison
sentence, the coefficient of determination is .50 (R² = .50). What is the proper
interpretation of this finding?

A. If a defendant has no prior record, he/she will get one-half of a year in prison

B. A one-unit change in a defendant’s prior arrest record leads to a 50% increase in their
length of prison sentence.

C. 50% of the variation in the length of prison sentences is explained by defendants’ prior
arrest record.

D. 50% of the variation in defendants’ prior arrest record is explained by the length of their
prison sentence.

Correct Answers: Q1(C), Q2(B), Q3(B), Q4(B), and Q5(C)



LO4: Demonstrate the logic in testing hypotheses about the coefficient of
determination and regression coefficients

Hypothesis testing within regression analysis involves statistical tests of the coefficient of
determination, the slope, and the y-intercept term.

The statistical significance of the coefficient of determination is tested through an F-test. The
logic in conducting an F-test was described in the previous chapter (Chapter 11). For regression
analysis, the particular null hypothesis that is tested is that the coefficient of determination is
zero (Ho: R² = 0). The researcher will be able to reject this null hypothesis if the obtained F-
value exceeds the expected F-value based on the alpha level and degrees of freedom
underlying the particular sample.


Using sample data from a previous example, the table below illustrates the calculations
necessary to test the null hypothesis (Ho: R² = 0)—that is, that the number of security officers'
post-grad credits (X) does not influence their scores on an advanced placement test (Y). Notice
the interpretation of the results. We reject the null hypothesis and conclude that (1) knowledge
of an officer's post-grad credits is a statistically significant predictor of their exam score and (2)
for each additional credit hour, the officer's exam score increases by 3.9 points.

Computations and Interpretations for Hypothesis Testing in Regression Analysis
(Post Grad Credits (X) —> Advanced Placement Test Score (Y))
 

y    x    (y − ȳ)    (x − x̄)    (x − x̄)(y − ȳ)    (y − ȳ)²    (x − x̄)²    Ŷ    (Yᵢ − Ŷ)    (Yᵢ − Ŷ)²

30 2 -40 -8 320 1,600 64 38.4 -8.4 70.6

40 4 -30 -6 180 900 36 46.3 -6.3 39.7

55 5 -15 -5 75 225 25 50.3 4.7 22.1

65 8 -5 -2 10 25 4 62.1 2.9 8.4

70 10 0 0 0 0 0 70.0 0.0 0.0

75 12 5 2 10 25 4 77.9 -2.9 8.4

80 18 10 8 80 100 64 101.6 -21.6 466.6

90 14 20 4 80 400 16 85.8 4.2 17.6

95 12 25 2 50 625 4 77.9 17.1 292.4

100 15 30 5 150 900 25 89.7 10.3 106.1

700 100 0 0 955 4,800 242 700 0 1,032


Here are the slope (b) and y-intercept (a) calculations:

ȳ = Σy / n = 700 / 10 = 70        x̄ = Σx / n = 100 / 10 = 10

s²y = Σ(y − ȳ)² / (n − 1) = 4,800 / 9 = 533.3        s²x = Σ(x − x̄)² / (n − 1) = 242 / 9 = 26.9

covxy = Σ(x − x̄)(y − ȳ) / (n − 1) = 955 / 9 = 106.1

b = covxy / s²x = 106.1 / 26.9 = 3.9        a = ȳ − b(x̄) = 70 - 3.9 (10) = 31.0

Ŷ = 31 + 3.9x (the estimated regression equation)



Here are the further necessary steps for testing the null hypothesis (Ho: R2 = 0):



1. Compute SSTotal, SSError, SSRegression, the observed F-ratio, and R²:

SSTotal = Σ(y − ȳ)² = 4,800        SSError = Σ(y − Ŷ)² = 1,032

SSRegression = SSTotal − SSError = 4,800 − 1,032 = 3,768

Fobs-ratio = MSReg / MSError = [SSReg / k] / [SSError / (n − 2)]
= [3,768 / 1] / [1,032 / 8] = 29.2

R² = SSRegression / SSTotal = 3,768 / 4,800 = .785
2. Go to an F-distribution table and find the expected F-value to reject the null
hypothesis under the decision rule of the test (i.e., alpha = .05; dfReg = k = 1;
dfError = n - 2 = 10 - 2 = 8). In this example, the expected F-value = 5.32.

3. Compare observed and expected F values to reach conclusions about the null
hypothesis that R2 = 0. Conclusion: Since the observed F-value (29.2) far
exceeds the expected F-value (5.32), we reject the null hypothesis that the
officers' post-grad credits have no impact on their placement exam score.



Interpretation/Conclusions:

1. The slope in the regression equation Ŷ = 31 + 3.9x reveals that for every 1-unit
increase in post-grad credits, officers' test scores increase by 3.9 points.

2. 78.5% of the variation in officers' exam scores is explained by their number of post-
grad credits (R² = .785).

3. Based on the F-test of the null hypothesis (Ho: R2 = 0), our sample data reveal that
knowledge of an officer's post-grad credits is a statistically significant predictor of
their exam score at the .05 significance level (p < .05).
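The F-test arithmetic above can be sketched directly from the sums of squares. In this illustration (names mine), the critical value 5.32 is read from an F table exactly as in step 2.

```python
# Sketch of the F-test for H0: R^2 = 0, using the sums of squares from the table.
ss_total, ss_error = 4800.0, 1032.0      # column sums from the computation table
ss_regression = ss_total - ss_error      # 3,768
k, n = 1, 10                             # one predictor, ten officers

f_obs = (ss_regression / k) / (ss_error / (n - 2))   # MS_Reg / MS_Error
r_squared = ss_regression / ss_total
f_expected = 5.32                        # F(alpha = .05; df = 1, 8) from an F table

print(round(f_obs, 1), round(r_squared, 3))          # 29.2 0.785
print("reject H0" if f_obs > f_expected else "fail to reject H0")
```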



The statistical significance of the slope (b) and y-intercept (a) is evaluated in regression analysis
by conducting t-tests. Similar to other types of hypothesis testing, these particular t-tests are
evaluated through the following steps:


1. The null hypothesis is established (e.g., Ho: b = 0; Ho: a = 0).



2. Sample estimates of the slope and y-intercept are converted into standard scores (t-
scores in this case). The specific conversion formulas are the following:


t = (b - Ho) / SEb and t = (a - Ho) / SEa,

where Ho = the hypothesized values of b and a, and
SE = the estimated standard error of b and a.

3. Expected critical values of the t-statistic are established (based on the alpha level and
degrees of freedom) that would lead to the rejection of the null hypothesis.

4. Observed and expected t-values are compared to determine whether the sample data
leads us to reject or not reject the null hypothesis. We reject the null hypothesis when
the observed t-value exceeds the expected t-value.

In the case of bivariate regression analysis, statistical tests of hypotheses about the coefficient
of determination (Ho: R2 = 0) and the unstandardized regression coefficient (Ho: b = 0) will
provide identical results. This is true because the R2 can only be 0 when the slope of the
regression equation is 0. Likewise, if the slope is statistically significant, it means that
knowledge of X explains a significant proportion of variation in Y (i.e., Ho: R2 = 0 must be
rejected). This equivalence of tests of the slope and R2 is only true in bivariate regression
analysis; it doesn’t hold true when multiple independent variables are included in the analysis.
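For illustration, here is a sketch of the t-test for the slope using the chapter's example. The standard-error formula SE_b = sqrt(MS_Error / Σ(x − x̄)²) is an assumption on my part — the text references SE_b but does not show how it is computed — so treat this as an illustrative standard result rather than the text's own calculation.

```python
import math

# Sketch of a t-test for H0: b = 0 on the chapter's example data.
# ASSUMPTION: SE_b = sqrt(MS_Error / sum((x - x_bar)^2)); this formula is not
# shown in the text.
ss_error, sum_sq_dev_x, n = 1032.0, 242.0, 10   # values from the earlier table
b, b_null = 3.9, 0.0                            # text's rounded slope; H0 value

se_b = math.sqrt((ss_error / (n - 2)) / sum_sq_dev_x)
t_obs = (b - b_null) / se_b
t_expected = 2.306                              # two-tailed t(.05, df = n - 2 = 8)

print(round(t_obs, 2), abs(t_obs) > t_expected)  # 5.34 True
```

Note that with the unrounded slope (≈ 3.95), the squared t-value matches the observed F-ratio (29.2), consistent with the equivalence of the two tests described above.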

Practice Applications for Learning Objective 4 (LO4):


Answer the following questions about hypothesis tests of the coefficient of determination, the
unstandardized regression coefficient (b) and the y-intercept term:

Q1. For an F-test of the null hypothesis about the coefficient of determination (Ho: R² =0) in a
bivariate regression analysis, what is the critical value of the expected F-ratio when alpha =
.05, dfreg = 1 and dfError = 12?
A. Fexpected = 4.41
B. Fexpected = 4.60
C. Fexpected = 4.75
D. Fexpected = 4.96


Q2. If MSreg =100 and MSError = 25, what is the value of the observed F-ratio (Fobs-ratio)?
A. Fobs = 2.0
B. Fobs = 2.5
C. Fobs = 3.0
D. Fobs = 4.0

Q3. A study of the influence of national unemployment rates on rates of property crime in 185
nations of the world reveals an obtained F-ratio of 2.3 (Fobs = 2.3). The expected F-value for
testing the null hypothesis of R² = 0 is 3.94 (Fexp = 3.94). Based on this information, do you
reject or not reject the null hypothesis?
A. reject the null hypothesis
B. do not reject the null hypothesis

Q4. Based on a random sample of 80 countries of the world, the coefficient of determination
indicates that 2% of the variation in nation’s property crime rates is explained by their
unemployment rate. If these sample data were used to test the null hypothesis that R² = 0,
what conclusion should you reach from this formal test?
A. reject the null hypothesis
B. do not reject the null hypothesis

Q5. Based on a random sample of 50 cities, the coefficient of determination indicates that 65%
of the variation in the number of juvenile arrests is explained by the city’s school dropout
rate. If these sample data were used to test the null hypothesis that R² = 0, what conclusion
should you reach from this formal test?
A. reject the null hypothesis
B. do not reject the null hypothesis

Q6. In a random sample of 16 convicted sex offenders, the following regression equation is
found to represent the influence of prior arrests (X) on years of imprisonment (Y):
Y = .5 + 1.2 X.
In our test of the null hypothesis that the offender’s prior record has no influence on their
sentence (Ho: b = 0), we find that the observed t-value is 4.5 and the expected t-value
(for df = 15 and α = .05) is 2.131. Based on these t-values, would we reject or not reject
the null hypothesis?
A. reject the null hypothesis
B. do not reject the null hypothesis
Correct Answers: Q1(C), Q2(D), Q3(B), Q4(B), Q5(A), and Q6 (A)



Review

Use these reflection questions and directed activities to help master the material presented in
this chapter.

What?
Can you define the critical concepts covered in this chapter? Describe each of these concepts
in your own words and give at least two examples of each.
• Scatterplots
• Correlation coefficient (r)
• Linear regression analysis
• Linear regression equation (Y = a + bX)
• Y-intercept (a)
• Unstandardized regression coefficient (b)
• Ordinary Least Squares (OLS)
• Coefficient of determination (R²)

How?

Answer the following questions and complete the associated activities.
• How are scatterplots used to assess bivariate relationships? Describe the four pieces of
information provided by these visual tools.
• How is OLS regression used to assess bivariate relationships? Describe each regression
equation component.
• How do data points cluster in a scatterplot when a strong relationship exists? Draw an
example.
• How is the F-table used to test the null hypothesis in a regression analysis? Explain the steps
involved in hypothesis testing.

When?
Answer the following questions.
• When should scatterplots be used (i.e., for what types of variables)?
• When does the correlation coefficient (r) indicate a weak, moderate, or strong relationship?
• When does the coefficient of determination (R2) indicate a weak, moderate, or strong
relationship?
• When should the null hypothesis be rejected when using an OLS regression analysis?


Why?

Think about the material you learned in this chapter and consider the following question.
• Why does knowing this material about how to assess the nature and strength of the
bivariate association between two quantitative variables help you evaluate the
accuracy of criminological research findings and identify distorted claims and misuses
of data by other researchers, government agencies, and the mass media?



Analyzing Criminological Data Chapter 13

CHAPTER 13:
Assessing the Relationships among Multiple Variables

This chapter describes visual and formal approaches to assess the nature and magnitude of the
relationship among multiple independent variables and a dependent variable. These
approaches include (1) multivariate contingency table analysis, (2) partial correlation analysis,
and (3) multiple regression analysis. Understanding the concepts and learning objectives in this
chapter will greatly expand your analytical skills for evaluating both relational and causal
research questions in criminology.


Concepts:
• Multivariate analysis
• Multiple causes
• Statistical control
• Multivariate contingency tables
• Partial correlation
• Multiple regression
• Standardized regression coefficient

Learning Objectives (LO):
LO1: Identify the major principles of multivariate analysis.
LO2: Assess and interpret the impact of multiple variables in a contingency table analysis.
LO3: Apply visual and statistical methods for assessing the partial correlation among multiple variables.
LO4: Evaluate multiple causes and statistical control through a multiple regression analysis.

Introduction

In the previous chapters, we described how to evaluate the nature and statistical significance of
the bivariate association between two variables using various analytical approaches (e.g.,
contingency table analysis, ANOVA, correlation and regression analysis). Most criminological
research is concerned with assessing the nature and magnitude of these basic bivariate
relationships.

However, it is also important in criminological research to assess two additional questions
about these basic bivariate associations (e.g., gender differences in victimization risks; poverty
and crime rates). First, are there other independent variables that influence a dependent
variable and which particular variables are most important in explaining this dependent
variable (e.g., victimization risks, crime rates)? Second, what happens to the bivariate
relationship of primary interest (e.g., gender differences in victimization risks) once


adjustments are made for the impact of these other variables? These two additional concerns
involve the principles of multiple causes and statistical control.

In this chapter, we describe the principles, concepts, and methods for assessing the nature and
impact of these multiple variables in what is called a multivariate statistical analysis. After
describing the basic concepts underlying a multivariate analysis, we illustrate these statistical
procedures through applications of (1) multivariate contingency table analysis, (2) partial
correlation analysis, and (3) multiple regression analysis.

We begin with a discussion of the major concepts and principles in multivariate analysis.


LO1: Identify the major principles of multivariate analysis

The term multivariate statistical analysis refers to a variety of statistical procedures for
assessing the relationship among multiple variables. These variables can be measured in a
variety of ways (e.g., qualitative, quantitative) and can be indicative of either non-causal or
causal relations (e.g., independent variables and dependent variables). Criminologists use
multivariate analyses whenever they are interested in (1) providing a more complete
explanation of the nature of the bivariate relationship between two variables and (2) assessing
the relative importance of multiple variables in explaining a dependent variable.

Multivariate Statistical Analysis: statistical procedures for assessing the relationship among
three or more variables.

Here are some examples of criminological questions that have been examined in previous
research through multivariate statistical analysis:

• Are gender differences in risks of victimization accounted for by differences in the
routine activities and lifestyles of men and women?

• Are racial differences in the length of prison sentences attributable to differences in
prior arrest records and nature of charges for Black and White defendants?

• What happens to the bivariate association between states’ homicide rates and school
dropout rates once adjustments are made for state differences in demographic and
geographical characteristics (e.g., age distribution, median income, region of the
country, level of urbanization)?

• Are particular legal factors (e.g., seriousness of charge, number of charges) more
important than extra-legal factors (e.g., the defendant’s gender, race, age and/or
income) in the likelihood of being criminally convicted or receiving a prison sentence
upon conviction?


• What is the best predictor of a neighborhood’s crime rate (e.g., economic condition,
population mobility/turnover, age distribution, racial heterogeneity, housing structure
[concentration of multi-unit housing])?

• What is the primary risk factor for juvenile criminality (e.g., their mental health
background, peer relations, family structure, educational commitment, substance
use/abuse)?

Although multivariate analysis is used to answer each of these research questions, the
questions vary in whether their underlying purpose is statistical control or the assessment of
multiple causes. These major principles of multivariate analysis are described below.

1. Statistical Control

The classical experimental design is the “gold standard” for making causal inferences in
criminological research because it enables the researcher to (1) establish temporal ordering
(by introducing the X variable [called the treatment] before the dependent variable Y) and (2)
control for other variables that influence X and Y by random assignment of cases to
experimental and control groups. By random assignment to these groups, the influence of
other factors that are associated with either X or Y is equally dispersed across groups, thereby
negating their impact on the basic causal relationship between X and Y. Under these conditions
of the classical experimental design, criminologists are able to make clear and untainted
inferences about the causal relationship between X and Y.

Unfortunately, classical experimental designs are rarely used in criminological research,
primarily because of the ethical issues surrounding random assignment of individuals to
experimental and control groups. As a consequence, researchers must use less effective
methods of controlling for the influence of these “contaminating” factors that adversely affect
the observed bivariate relationship between X and Y. The primary alternative approach to
random assignment in most criminological research is the method of statistical control.

Statistical Control: the statistical method for adjusting for the impact of other variables on
the basic relationship between X and Y.

As the name implies, statistical control involves various statistical procedures to adjust or
control for the influence of other variables on the basic relationship between X and Y. These
adjustments enable us to make statements about the net impact of some variable (X) on Y
after we control for these other variables (Z1, Z2, Z3, etc.).

Tables 13.1 and 13.2 provide a graphic representation of the principle of statistical control in
symbolic notation (X --> Z --> Y) and in a criminological example of explaining gender differences in
victimization risks under routine activity and lifestyle theories of criminal victimization.


Table 13.1:
Visual display of the relationship between X and Y with statistical controls for Z1 and Z2

Z1

X
------ > Y
Z2


Interpretation: The relationship between X and Y changes (as indicated by the dashed arrow line)
once controls are introduced for Z1 and Z2. Because Z1 and Z2 are related to both X and Y, controlling
for these variables will affect the nature and magnitude of the bivariate association between X and Y.




Table 13.2:
Visual display of the relationship between gender and victimization risks with statistical controls for
differences in lifestyles and routine activities associated with gender.


Interpretation: The nature of the relationship between gender and risks of criminal victimization is
altered (as indicated by the narrow straight gray line) once controls are introduced for (1) the effects
of gender differences on lifestyles and routine activity and (2) the impact of these variables on the
risks of criminal victimization. Because differences in lifestyles and routine activities are related to
both gender and victimization risks, controlling for these variables will affect the nature and
magnitude of the bivariate association between gender and victimization.



The specific ways in which statistical controls are applied and calculated in different types of
multivariate analysis (e.g., multivariate contingency tables, partial correlation) will be described
later in this chapter.


2. Multiple Causes

Similar to other aspects of human behavior, criminal behavior and the response to it by criminal
justice officials and agencies are rarely explained by a single variable. Instead, all human
behavior has a diverse etiology, explained by a wide variety of biological, psychological, and
sociological factors.

Multiple Causes: the recognition that human behavior is caused by multiple factors. The goal
of multivariate analysis is to assess the relative importance of different causal factors.

When criminologists use statistical methods to investigate causal relations, their focus is on
identifying the multiple causes of the phenomena in question. These multiple factors may be
proximate causes (i.e., immediate antecedents to the dependent variable) or distal causes (i.e.,
causal factors that affect the dependent variable more indirectly through the proximate
causes). For example, in the previous example of gender differences in victimization (see Table
13.2), routine activities and lifestyles are considered the proximate cause of criminal
victimization, whereas gender is the distal cause whose effect on victimization is transmitted in
large part through the effect of gender on routine activities and lifestyles.

Tables 13.3 and 13.4 provide a visual display of two criminological examples of research
questions involving multiple causes. In each substantive area, previous research has evaluated
(1) the relative importance of the different causal factors and (2) the extent to which controlling
for them influences the nature and magnitude of the effect of the other variables.

Table 13.3:
Visual display of the multiple causes (risk factors) for chronic juvenile offending


Interpretation: The risk factors on the left are some of the multiple causes of chronic juvenile
offending. In a multivariate analysis, the researcher uses statistical procedures to (1) identify the
relative importance of each risk factor in explaining the likelihood of a juvenile being a chronic offender
and (2) assess the net effect of each variable after controlling for the other variables in the analysis.



Table 13.4:
Visual display of the effects of multiple measures of social disorganization on neighborhood crime
rates controlling for the level of collective efficacy in the community.


Interpretation: This visual representation suggests that the net impact of measures of social
disorganization (i.e., population mobility, low socio-economic opportunity, and racial heterogeneity)
on neighborhood crime rates is totally eliminated once controls are introduced for the
neighborhood’s collective efficacy. Thus, the causal effect of social disorganization on crime rates is
explained by its influence on the neighborhood’s collective efficacy.

Practice Applications for Learning Objective 1 (LO1):


Answer the following questions about the basic principles of multivariate analysis:

Q1. What are the primary reasons why researchers conduct multivariate analyses of
criminological data?
A. to assess how statistical controls for other variables influence the bivariate
relationship of primary interest to the researcher.
B. to evaluate which of the multiple causes are most important in explaining the
criminological research question.
C. both A and B are true.


Q2. What is the proximate cause of adult criminality in the following causal chain?

Poor School Performance --> School Dropout --> Juvenile Misconduct --> Adult Criminality
A. Poor school performance
B. School dropout
C. Juvenile misconduct

Q3. What is the most distal cause of adult criminality in the following causal chain?

Poor School Performance --> School Dropout --> Juvenile Misconduct --> Adult Criminality
A. Poor school performance
B. School dropout
C. Juvenile misconduct

Q4. The diagram below shows that the impact of higher family income on decreasing one's risks
of burglary victimization is almost entirely eliminated (as indicated by the gray arrow) after
adjustments are made for the influence of (1) family income on taking more security
measures and (2) security measures on reducing burglary risks. Which of the following
statements is a possible interpretation of the effect of security measures in this diagram?

Security
Measures

Burglary
Family
Risks
Income


A. Statistical control for security measures has no noticeable impact on the basic
relationship between family income and burglary risks.
B. Statistical control for security measures dramatically increases the influence of high
family income on the risks of burglary.
C. The location of your home is the primary multiple cause of the risks of burglary.
D. Statistical controls for security measures indicate that much of the influence of high
family income on lower risk of victimization is due to the fact that higher income families
have more security measures and these security measures reduce burglary risks.

Correct Answers: Q1(C), Q2(C), Q3(A), and Q4(D)




LO2: Assess and interpret the impact of multiple variables in a contingency table
analysis

Chapter 10 focused on constructing and interpreting contingency tables that represent the
bivariate relationships between two qualitative variables. Extending these tabular analyses to
explore the relationship between two or more independent variables and a dependent variable
is easily accomplished. This extension to multiple qualitative variables is called Multivariate
Contingency Table Analysis.

Multivariate Contingency Table Analysis: the analysis of the cross tabulation of three or more
qualitative variables.

As true of all multivariate analyses, there are two primary reasons why we conduct
multivariate contingency table analyses: (1) to assess the relative importance of different
variables in predicting or explaining a dependent variable and (2) to assess what happens to the
basic bivariate relationship between two variables after we control or adjust for other variables.

To illustrate how these goals are achieved in multivariate contingency table analysis, let’s look
at an example of gender differences in arrest outcomes. Table 13.5 provides a bivariate table
of column percentages to assess gender differences in these arrest outcomes (i.e., is the
arrested person convicted or not?). The subtables (Tables 13.6) represent a multivariate
analysis that evaluates (1) whether controlling for type of offense modifies the observed
bivariate relationship between gender and arrest outcome and (2) whether gender or type of
offense is the more important variable in predicting the risks of conviction.

Table 13.5:
Gender differences in arrest outcomes

Arrest
Outcome Men Women Total

Not 40 % 40 % 40 %
Convicted (n=200)

Convicted 60 % 60 % 60 %
(n=200)

Total 100 % 100% 100%


(n=200) (n=200) (N=400)




Table 13.6: Gender differences in arrest outcomes controlling for type of offense

Misdemeanors Felonies

Arrest Arrest
Outcome Men Women Total Outcome Men Women Total

Not Not
Convicted 30 % 10 % 20 % Convicted 50 % 70 % 60 %
(n=100) (n=100)
Convicted 70 % 90 % 80 % Convicted 50 % 30 % 40 %
(n=100) (n=100)
Total 100 % 100% 100% Total 100 % 100% 100%
(n=100) (n=100) (N=200) (n=100) (n=100) (N=200)





Here are the answers to the research questions that underlie the data in Tables 13.5 and 13.6:

• Are there gender differences in arrest outcomes? Answer: No and Yes. Why both?

o According to Table 13.5, there are no gender differences because 60 % of the
defendants are convicted upon arrest and these percentages are identical for
both men and women.

o According to Table 13.6, there are gender differences in arrest outcomes when
separate analyses control for differences among misdemeanor and felony
offenses. For misdemeanor offenses, women are more likely than men to be
convicted upon arrest (90% vs. 70%). For felony offenses, women are less likely
than men to be convicted (30% vs. 50%). Thus, controlling for type of offense
alters the nature of the basic bivariate relationship between gender and arrest
outcomes.

o Although the initial bivariate table shows no gender differences in arrest
outcomes, the multivariate analysis indicates that this is not true for all types of
offenses. In fact, depending on the type of arrested offense (felony,
misdemeanor), men may have either more or less risks of conviction than
women.



• Are there differences by the type of offense in arrest outcomes? Answer: Yes. Why?

o The column marginal percentages in the offense-specific tables (Table 13.6)
indicate that misdemeanor arrestees are far more likely than felony arrestees to
be convicted of their offenses (80% vs. 40%).

o Among both men and women, misdemeanor arrestees are more likely to be
convicted than felony arrestees. However, the differences in conviction risks for
misdemeanors and felonies are far greater among women (90% vs. 30%) than
men (70% vs. 50%).

• Which independent variable, gender or type of offense, is most important in accounting
for differences in arrest outcomes? Answer: Type of Offense. Why?

o There is a 20 percentage point difference between men and women in their
conviction risks within each type of offense. A similar percentage point
difference in conviction risks is found between misdemeanors and felonies
among men (70% vs. 50%). However, these offense differences within gender
increase to a 60 percentage point difference when comparisons are made among
women (90% of women misdemeanants were convicted vs. 30% women felons).

o Given that offense differences within gender groups are greater than gender
differences within offense types, the type of offense is more important in
accounting for conviction risks than gender. The importance of offense type is
especially pronounced for the conviction risks of women.
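The percentage comparisons above can be reproduced programmatically. Below is a minimal sketch (the dictionary layout and function name are ours) that computes column percentages within each offense type from the cell counts in Table 13.6, and shows that collapsing over offense type recovers the bivariate percentages in Table 13.5:

```python
# Cell counts from Table 13.6: counts[offense][gender] = (not convicted, convicted)
counts = {
    "misdemeanor": {"men": (30, 70), "women": (10, 90)},
    "felony":      {"men": (50, 50), "women": (70, 30)},
}

def column_percentages(table):
    """Convert each gender's (not convicted, convicted) counts to column percentages."""
    return {gender: (100 * no / (no + yes), 100 * yes / (no + yes))
            for gender, (no, yes) in table.items()}

# Percentages within each offense type (i.e., controlling for type of offense)
for offense, table in counts.items():
    print(offense, column_percentages(table))

# Collapsing over offense type reproduces the bivariate Table 13.5
combined = {
    gender: (sum(counts[o][gender][0] for o in counts),
             sum(counts[o][gender][1] for o in counts))
    for gender in ("men", "women")
}
print("combined", column_percentages(combined))
# Both genders show 40% not convicted / 60% convicted overall,
# even though gender differences emerge within each offense type.
```

This makes the controlling logic concrete: the subgroup tables and the collapsed table are computed from the same underlying cell counts.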

Here is another example of constructing and interpreting a multivariate contingency table
analysis (see Table 13.7). In this example, the research questions involve (1) assessing the
nature of political party differences in support for gun control, (2) whether controlling for
region of the country alters the nature of this bivariate relationship between political party
affiliation and gun control attitude, and (3) determining whether political party or geographical
region is more important in explaining public support for gun control.












Table 13.7:
Political party differences in support for gun control

Gun Control
Attitude Democrat Republican Total

Oppose gun 25 % 37 % 31 %
control (n=248)

Support gun 75 % 63 % 69 %
control (n=552)
Total 100 % 100% 100%
(n=400) (n=400) (N=800)



Political party differences in gun control attitudes controlling for geographical region

North South

Gun Control Gun Control


Attitude Democrat Republican Total Attitude Democrat Republican Total

Oppose gun 10% 22 % 16 % Oppose gun 40 % 52 % 46 %
control (n=64) control (n=184)

Support gun 90 % 78 % 84 % Support gun 60 % 48 % 54 %


Control (n=336) Control (n=216)
Total 100 % 100% 100% Total 100 % 100% 100%
(n=200) (n=200) (N=400) (n=200) (n=200) (N=400)


Here are the answers to the research questions that underlie the data in Table 13.7:

• Are there political party differences in support for gun control? Answer: Yes. Why?

o Democrats are more supportive of gun control than Republicans (75% vs. 63%).

o Democrats are more supportive of gun control than Republicans among residents
of both Northern and Southern states. Thus, controlling for geographical
location does not alter the nature of the basic bivariate relationship between
political party affiliation and gun control attitudes.

• Are there regional differences in support for gun control? Answer: Yes. Why?

o The column marginal percentages in the region-specific tables indicate that
Northern residents are far more supportive of gun control than Southern
residents (84% vs. 54%).

o Among both Democrats and Republicans, Northern residents are far more
supportive of gun control than Southern residents. Gun control support is 30
percentage points higher for Northern than Southern residents for both
Democrats (90% vs. 60%) and Republicans (78% vs. 48%). Thus, controlling for
political party affiliation does not alter the nature of the basic bivariate
relationship between geographical region and attitudes toward gun control.

• Which independent variable, political party or geographical region, is most important in
explaining attitudes toward gun control? Answer: Geographical Region. Why?

o There is a 12 percentage point difference between Democrats and Republicans in
their support for gun control, and the magnitude of this difference is similar for
both Northern and Southern residents. However, there is a 30 percentage point
difference between Northern and Southern residents in their gun control
support, and this percentage difference is maintained among both Democrats
and Republicans.

o Given that regional differences within political parties (30 percentage points) are
greater than political party differences within regions (12 percentage points),
geographical location is more important in accounting for gun control attitudes
than one’s political party affiliation.

The formal test of statistical significance in a multivariate contingency table analysis is the chi-
square test. As previously described in Chapter 10, the null hypothesis in these tests is that the
two variables are statistically independent (i.e., there is no relationship between the variables).
By conducting a series of chi-square tests for each bivariate table, the researcher is able to


assess whether the observed differences across categories are of a magnitude to achieve
statistical significance.
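As a sketch of how these subtable tests might be run in software (assuming the SciPy library is available; `correction=False` disables the Yates continuity correction so the statistic matches the standard hand-computed chi-square), using the counts from Table 13.6:

```python
from scipy.stats import chi2_contingency

# Rows = arrest outcome (not convicted, convicted); columns = (men, women)
misdemeanors = [[30, 10], [70, 90]]  # counts from Table 13.6
felonies     = [[50, 70], [50, 30]]

for label, table in (("misdemeanors", misdemeanors), ("felonies", felonies)):
    chi2, p, df, expected = chi2_contingency(table, correction=False)
    print(f"{label}: chi-square = {chi2:.2f}, df = {df}, p = {p:.4f}")
# Both subtables yield p-values below .05, so the gender difference in
# conviction risk is statistically significant within each offense type.
```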

The logic of table construction and interpretation in multivariate contingency table analysis is
easily extended beyond two independent variables. However, when the number of
independent variables and categories of them increase, the number of cells in the contingency
table increases dramatically. For example, a contingency table with 4 independent variables and
1 dependent variable (with 3 categories for each variable) has 3^5 = 243 distinct cells.
With this many cells, interpretation of a table becomes far more complex and definitive
statistical conclusions are more problematic because of the small number of cases within each
cell of the table. When cell frequencies are small (i.e., less than 5 cases), formal chi-square
tests are distorted and potentially misleading. Alternative statistical approaches for analyzing
multiple causes of qualitative dependent variables (e.g., logistic regression analysis) are
available and discussed in more advanced texts.

Practice Applications for Learning Objective 2 (LO2):


Answer the following questions about interpreting the results of a multivariate contingency
table analysis:

Q1. In the following multivariate contingency tables, what is the proper interpretation of the
influence of race on risks of a prison sentence controlling for prior arrest?
No Prior Arrest Prior Arrest
Prison Prison
Sentence? Black White Total Sentence? Black White Total
No No
70 % 75 % 73 % 20 % 25 % 23 %
Yes 30 % 25 % 27 % Yes 80 % 75 % 77 %

Total 100 % 100% 100% Total 100 % 100% 100%
(n=200) (n=200) (N=400) (n=200) (n=200) (N=400)

A. Blacks are far more likely than whites to be given a prison sentence, especially when they have a
prior arrest.

B. Blacks are slightly more likely than whites to be given a prison sentence, and this is true for both
those with and without prior arrests.

C. Controlling for prior arrests, whites are more likely to be given a prison sentence than blacks.


Q2. In the following multivariate contingency tables, what is the proper interpretation of the
influence of prior arrest on risks of a prison sentence controlling for the defendant's race?
No Prior Arrest Prior Arrest
Prison Prison
Sentence? Black White Total Sentence? Black White Total
No No
70 % 75 % 73 % 20 % 25 % 23 %
Yes 30 % 25 % 27 % Yes 80 % 75 % 77 %

Total 100 % 100% 100% Total 100 % 100% 100%
(n=200) (n=200) (N=400) (n=200) (n=200) (N=400)

A. For black but not white defendants, persons with a prior arrest are far more likely to get a prison
sentence than those without a prior arrest.

B. For white but not black defendants, persons with a prior arrest are far more likely to get a prison
sentence than those without a prior arrest.

C. Among both black and white defendants, persons with prior arrests are far more likely to be
given a prison sentence than those without a prior arrest.

Q3. Based on the following multivariate contingency tables, which independent variable, the
defendant's race or prior arrest record, is the best predictor of the risks of a prison sentence?
No Prior Arrest Prior Arrest
Prison Prison
Sentence? Black White Total Sentence? Black White Total
No No
70 % 75 % 73 % 20 % 25 % 23 %
Yes 30 % 25 % 27 % Yes 80 % 75 % 77 %

Total 100 % 100% 100% Total 100 % 100% 100%
(n=200) (n=200) (N=400) (n=200) (n=200) (N=400)


A. Prior arrest record

B. Defendant's race

C. Both prior arrest record and the defendant's race are equally important in predicting risks
of imprisonment



Q4. For each of these bivariate contingency tables, what is the likely result of the chi-square tests
of the null hypothesis that the defendant's race and risks of a prison sentence are unrelated
(i.e., these two variables are statistically independent)?
No Prior Arrest Prior Arrest
Prison Prison
Sentence? Black White Total Sentence? Black White Total
No No
70% 72 % 71 % 20% 55 % 37 %
Yes 30 % 28 % 29 % Yes 80 % 45 % 63 %

Total 100 % 100% 100% Total 100 % 100% 100%


(n=200) (n=200) (N=400) (n=200) (n=200) (N=400)

A. Do not reject the null hypothesis in both contingency tables

B. Reject the null hypothesis in both contingency tables

C. Reject the null hypothesis for the table for no prior arrests, but do not reject the null hypothesis
for the table for prior arrests.

D. Do not reject the null hypothesis for the table for no prior arrests, but reject the null hypothesis
for the table for prior arrests.


Correct Answers: Q1(B), Q2(C), Q3(A), and Q4(D)






LO3: Apply visual and statistical methods for assessing the partial correlation
among multiple variables

A basic way to illustrate how statistical control is applied in the multivariate analysis of two or
more quantitative variables involves the computation of the partial correlation coefficient.
Similar to the bivariate correlation (r), the partial correlation coefficient (symbolized as rxy.z)
represents the linear association between two variables (X and Y), but it assesses this
relationship after controlling for the influence of other variables (e.g., Z1, Z2, Z3,…etc.) on both X
and Y.



The essential elements for computing the partial correlation coefficient between X and Y
controlling for Z (rxy.z) are given in the following formula:

rxy.z = (rxy - rxz·ryz) / [√(1 - rxz²) · √(1 - ryz²)], where:

rxy is the correlation between X and Y,
rxz is the correlation between X and Z, and
ryz is the correlation between Y and Z.

Partial Correlation Coefficient: a standardized measure of the relationship between two
quantitative variables after controlling for the influence of other variables.

A close examination of this computing formula reveals how the partial correlation coefficient
actually removes or controls for the influence of variable Z on the correlation between X and Y.
Here are several things to notice in this formula:

1. The numerator of this formula shows how the confounding effects of Z on both X and Y
are removed by subtracting the product of the correlations rxz and ryz from the
bivariate correlation of X and Y (rxy).

2. The denominator of this formula shows how the variation in X and Y that is explained by
Z is removed so that the remaining variation is only that shared by both X and Y.

3. The resulting variation after purging the influence of Z from both X and Y represents the
correlation between X and Y that is unaffected by Z (i.e., the correlation between X and
Y controlling for Z).
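The formula translates directly into code. This minimal sketch (the function name `partial_r` is ours) implements it and reproduces example values shown later in Table 13.9:

```python
from math import sqrt

def partial_r(r_xy, r_xz, r_yz):
    """First-order partial correlation between X and Y, controlling for Z."""
    return (r_xy - r_xz * r_yz) / (sqrt(1 - r_xz ** 2) * sqrt(1 - r_yz ** 2))

# Condition 3 in Table 13.9: rXY = .50, rXZ = .60, rYZ = .80
print(round(partial_r(0.50, 0.60, 0.80), 2))  # 0.04
# Condition 4 in Table 13.9: Z unrelated to X but strongly related to Y
print(round(partial_r(0.50, 0.00, 0.80), 2))  # 0.83
```

Checking the remaining conditions the same way confirms that a Z unrelated to both X and Y (rxz = ryz = 0) leaves the correlation unchanged at .50.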

The use of Venn diagrams is a visual method for demonstrating how controlling for other
variables influences the bivariate correlation between two variables. Table 13.8 provides two
Venn diagrams that visually represent how controlling for variable Z affects the bivariate
correlation between X and Y.












Table 13.8:
Venn diagrams of bivariate correlation (rxy) and partial correlation (rxy.z)

Bivariate Correlation (rxy ≈ .40) Partial Correlation (rxy.z ≈ .20)





(Venn diagrams: in both panels, circles X and Y overlap; in the partial correlation panel,
circle Z also overlaps both X and Y.)




Interpretation: The correlation between X and Y is represented by the overlapping areas in the
Venn diagrams. In the bivariate correlation diagram, X and Y share about 40% of their areas
jointly, thus the correlation between X and Y is around .40. In the partial correlation diagram,
the same overlapping areas of X and Y exist, but about half of this shared area is also shared
with Z. When the partial correlation is computed, this shared area of Z is partitioned out and the
remaining overlapping area of X and Y is the basis for computing the correlation between X and Y
controlling for Z. In this example, the partial correlation is about one-half of the size of the
bivariate correlation (rxy.z ≈ .20 versus rxy ≈ .40) because the remaining area of overlap between
X and Y is about one-half of the original area of overlap. By removing the shared area with Z, the
bivariate correlation between X and Y is reduced. This suggests that about one-half of the
observed covariation between X and Y is explained by Z's relationship to both X and Y.


The magnitude of differences between the bivariate correlation (rXY) and the partial correlation
(rXY.Z) depend entirely on the strength of correlation between the "control" variables (Z's) and
the major variables of interest (X and Y). Table 13.9 shows how the values of the partial
correlation (rXY.Z) change for different correlations among the control variable (Z) and the
primary variables (X and Y).

The following general conclusions about bivariate and partial correlations are derivable from
the data presented in Table 13.9:

• Controlling for other variables has no effect on the bivariate relationship when the other
variable(s) are not related at all to the primary variables of interest (i.e., rXZ = rYZ =0).
Under these conditions, the numerical value of the bivariate and partial correlations will
be identical (i.e., rXY = rXY.123…), where ".123…" refers to various control variables (Z1, Z2,
Z3,….etc).


• As the magnitude of the bivariate correlation among the control variables and both
primary variables (X, Y) increases, the partial correlation between X and Y controlling for
Z will decrease (i.e., rXY > rXY.Z when rXZ and rYZ are not equal to 0).

• When a control variable (Z) is unrelated to variable X but strongly correlated to Y, the
partial correlation will be larger than the bivariate correlation (i.e., rXY.Z > rXY).

Thus, whether or not controlling for other variables influences the observed correlation
between variable X and variable Y depends on the nature of the joint correlation between the
control variables (Z) and the primary variables of interest (X and Y).

Table 13.9: Illustration of how the strength of the relationship between the control variable (Z)
and both primary variables (X and Y) affects the value of the partial correlation of XY controlling
for Z (rXY.Z)

Conditions (each accompanied in the table by a Venn diagram of rXY and rXY.Z):

1. Z is uncorrelated with both X and Y:

   Ex. rXY = .50; rXZ = .00; rYZ = .00 --> rXY.Z = .50

   Conclusion: Controlling for Z has no effect on the bivariate correlation between X and Y.

2. Z is moderately correlated with both X and Y:

   Ex. rXY = .50; rXZ = .20; rYZ = .20 --> rXY.Z = .48

   Conclusion: Controlling for Z slightly decreases the bivariate correlation between X and Y.





3. Z is strongly correlated with both X and Y:

   Ex. rXY = .50; rXZ = .60; rYZ = .80 --> rXY.Z = .04

   Conclusion: Controlling for Z greatly decreases the bivariate correlation between X and Y.

4. Z is uncorrelated with X but strongly correlated with Y:

   Ex. rXY = .50; rXZ = .00; rYZ = .80 --> rXY.Z = .83

   Conclusion: Controlling for Z greatly increases the bivariate correlation between X and Y.


The extension of partial correlation to two or more control variables is relatively
straightforward, but computing these correlations by hand is cumbersome. Fortunately, all
major statistical software packages for analyzing quantitative data (e.g., SPSS, SAS, STATA)
include options in their correlation procedures for calculating the partial correlation when
there are multiple control variables.
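As a cross-check on the values reported in Table 13.9, the first-order partial correlation can be computed directly from the three bivariate correlations using the standard formula rXY.Z = (rXY - rXZ rYZ) / sqrt[(1 - rXZ^2)(1 - rYZ^2)]. A minimal sketch in Python (the function name is ours; the chapter's own analyses use packages such as SPSS):

```python
import math

def partial_corr(r_xy, r_xz, r_yz):
    """First-order partial correlation r(XY.Z): the correlation between
    X and Y after removing the linear influence of Z from both."""
    return (r_xy - r_xz * r_yz) / math.sqrt((1 - r_xz**2) * (1 - r_yz**2))

# Condition 3 in Table 13.9: Z strongly correlated with both X and Y
print(round(partial_corr(0.50, 0.60, 0.80), 2))   # 0.04 -- control greatly shrinks r

# Condition 4: Z uncorrelated with X but strongly correlated with Y
print(round(partial_corr(0.50, 0.00, 0.80), 2))   # 0.83 -- control inflates r
```

With both control correlations set to zero, the function simply returns the original bivariate correlation, which is Condition 1 in the table.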

Practice Applications for Learning Objective 3 (LO3):


Answer the following questions about the statistical control for the influence of other variables
through the computation of the partial correlation coefficient:

Q1. If the bivariate correlation between variables X and Y decreases when you control for
another variable (Z), what does this partial correlation (rXY.Z) tell you about variable Z's
relationship with these other variables?
A. it tells you that variable Z is unrelated to both X and Y.
B. it tells you that variable Z is related to Y but not X.
C. it tells you that variable Z is related to both X and Y


Q2. What does this Venn diagram show about the effect of controlling for Z on the correlation
between X and Y?


A. The bivariate correlation between X and Y will be the same numerical value as the
partial correlation coefficient between X and Y controlling for Z, suggesting that Z is
not correlated with either X or Y.
B. The bivariate correlation between X and Y will be slightly larger in numerical value than
the partial correlation coefficient between X and Y controlling for Z, suggesting that Z
is moderately correlated with both X and Y.
C. The bivariate correlation between X and Y will be substantially larger in numerical
value than the partial correlation coefficient between X and Y controlling for Z,
suggesting that Z is strongly correlated with both X and Y.
D. The bivariate correlation between X and Y will be substantially smaller in numerical
value than the partial correlation coefficient between X and Y controlling for Z,
suggesting that Z is strongly correlated with Y but not X.

Q3. What does this Venn diagram show about the effect of controlling for Z on the correlation
between X and Y?


A. The bivariate correlation between X and Y will be the same numerical value as the
partial correlation coefficient between X and Y controlling for Z, suggesting that Z is
not correlated with either X or Y.
B. The bivariate correlation between X and Y will be slightly larger in numerical value than
the partial correlation coefficient between X and Y controlling for Z, suggesting that Z
is moderately correlated with both X and Y.
C. The bivariate correlation between X and Y will be substantially larger in numerical
value than the partial correlation coefficient between X and Y controlling for Z,
suggesting that Z is strongly correlated with both X and Y.
D. The bivariate correlation between X and Y will be substantially smaller in numerical
value than the partial correlation coefficient between X and Y controlling for Z,
suggesting that Z is strongly correlated with Y but not X.



Q4. What does this Venn diagram show about the effect of controlling for Z on the correlation
between X and Y?


A. The bivariate correlation between X and Y will be the same numerical value as the
partial correlation coefficient between X and Y controlling for Z, suggesting that Z is
not correlated with either X or Y.
B. The bivariate correlation between X and Y will be slightly larger in numerical value than
the partial correlation coefficient between X and Y controlling for Z, suggesting that Z
is moderately correlated with both X and Y.
C. The bivariate correlation between X and Y will be substantially larger in numerical
value than the partial correlation coefficient between X and Y controlling for Z,
suggesting that Z is strongly correlated with both X and Y.
D. The bivariate correlation between X and Y will be substantially smaller in numerical
value than the partial correlation coefficient between X and Y controlling for Z,
suggesting that Z is strongly correlated with Y but not X.

Q5. What does this Venn diagram show about the effect of controlling for Z on the correlation
between X and Y?


A. The bivariate correlation between X and Y will be the same numerical value as the
partial correlation coefficient between X and Y controlling for Z, suggesting that Z is
not correlated with either X or Y.
B. The bivariate correlation between X and Y will be slightly larger in numerical value than
the partial correlation coefficient between X and Y controlling for Z, suggesting that Z
is moderately correlated with both X and Y.
C. The bivariate correlation between X and Y will be substantially larger in numerical
value than the partial correlation coefficient between X and Y controlling for Z,
suggesting that Z is strongly correlated with both X and Y.
D. The bivariate correlation between X and Y will be substantially smaller in numerical
value than the partial correlation coefficient between X and Y controlling for Z,
suggesting that Z is strongly correlated with Y but not X.

Correct Answers: Q1(C), Q2(A), Q3(B), Q4(C) and Q5(D)



LO4: Evaluate multiple causes and statistical control through a multiple
regression analysis

Multivariate regression analysis extends bivariate regression by including more than one
independent variable; the term multiple regression analysis is commonly used to describe this
statistical approach. The primary goals of multiple regression analysis are (1) to evaluate the
relative importance of various independent variables in explaining a quantitative dependent
variable and (2) to assess the net impact of particular variables once statistical controls are
introduced for other variables. Multiple regression shares with other multivariate methods this
emphasis on multiple causes and statistical controls.

How statistical controls are calculated and interpreted within multiple regression analysis is
similar to their application in computing the partial correlation coefficient. In particular, for
both the partial correlation coefficient and its regression counterpart (i.e., the unstandardized
partial regression coefficient), the "contaminating" effects of the control variables are purged
from the covariation of Y and X, thereby allowing for a clearer assessment of the net influence
of X on Y.

Multiple Regression Analysis: A statistical method for assessing the net impact of
multiple independent variables on a quantitative dependent variable. It is designed to
identify the net effect of each variable on the dependent variable (after controlling for
the influence of the other variables) and to evaluate the relative importance of these
variables in explaining this dependent variable.
1. The General Form of the Multiple Regression Equation

Multiple regression analysis in criminology and other substantive areas is designed to assess the
impact of multiple independent variables on a dependent variable. The general form of the
multiple regression equation in the population is given by the following formula:

Y = α + β1X1 + β2X2 + β3X3 + ... + βnXn, where:

α = the Y-intercept term (i.e., the value of Y when all independent
variables are equal to 0 [X1 = X2 = X3 = ... = Xn = 0]) and

β1, β2, β3, ..., βn = the unstandardized partial regression coefficients that
indicate the net effect of a unit change in a particular X
variable on the value of Y after controlling for the influences
of all other X variables in the regression equation.

Similar to a bivariate regression analysis, the population values of the y-intercept term (α) and
regression coefficients (β1, β2, β3, ..., βn) are estimated by their sample values in the following
formula:



Y = a + b1X1 + b2X2 + b3X3 + ..... + bnXn + e, where:

a = the sample estimate of the y-intercept term,

b1....bn = sample estimates of the unstandardized partial
regression coefficients, and

e = the error in predicting Y on the basis of the X variables
included in the regression model.

The primary difference between bivariate and multivariate regression analysis involves the
computing formulas and the interpretation of the OLS regression estimates of the y-intercept
(a) and slopes (b's). Table 13.10 summarizes these differences between bivariate regression
and a multiple regression analysis with only two independent variables.

Unstandardized Partial Regression Coefficient (bj): the effect on Y of a unit change in a
particular independent variable (e.g., Xj) after controlling for the influence of all of the
other independent variables in the regression model.

Table 13.10: Differences between bivariate and multiple regression estimates

Bivariate Regression: Y = a + bX

   Y-intercept: a = Ȳ - b(X̄)
   Slope:       b = cov(X,Y) / var(X)

   Interpretation:
   a = value of Y when X is 0.
   b = the effect of a 1-unit change in X on the value of Y.

Multiple Regression: Y = a + b1.2X1 + b2.1X2

   Y-intercept: a = Ȳ - b1.2(X̄1) - b2.1(X̄2)
   Slopes:
   b1.2 = [cov(X1,Y) var(X2) - cov(X2,Y) cov(X1,X2)] / [var(X1) var(X2) - cov(X1,X2)^2]
   b2.1 = [cov(X2,Y) var(X1) - cov(X1,Y) cov(X1,X2)] / [var(X1) var(X2) - cov(X1,X2)^2]

   Interpretation:
   a = value of Y when X1 = 0 and X2 = 0.
   b1.2 = effect of a 1-unit change in X1 on Y controlling for the value of X2.
   b2.1 = effect of a 1-unit change in X2 on Y controlling for the value of X1.


A closer examination of Table 13.10 indicates how the computation of the partial regression
coefficient for each independent variable is designed to remove the influences of other
independent variables from each other and the dependent variable. In particular, notice how


the computing formulas in this table for b1 and b2 (1) remove the shared covariation among the
independent variables X1 and X2, (2) remove the shared covariation between the other
independent variable and the dependent variable, and thus (3) isolate the net impact of each
independent variable on the dependent variable controlling for the other independent variable.

Under these statistical adjustments, the unstandardized partial regression coefficients in a
multiple regression analysis involving two independent variables (i.e., Y = a + b1X1 + b2X2)
are interpretable in the following ways:

• b1 is the effect of X1 on Y once the effects of X2 on both X1 and Y are removed. Thus, it
  represents the net effect of a unit change in X1 on Y after statistically controlling for the
  influences of X2.

• b2 is the effect of X2 on Y once the effects of X1 on both X2 and Y are removed. Thus, it
  represents the net effect of a unit change in X2 on Y after statistically controlling for the
  influences of X1.

Both partial correlation and partial regression coefficients measure the bivariate
association between two variables after statistical controls are used to remove the
influence of other variables on this basic relationship.
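The computing formulas for b1.2 and b2.1 in Table 13.10 can be applied directly. The sketch below (pure Python, with made-up data) builds the two slopes from the variances and covariances and then checks the defining OLS property that the residuals are uncorrelated with each predictor:

```python
def mean(v):
    return sum(v) / len(v)

def cov(a, b):
    """Covariance of two equal-length lists (population form, /n);
    cov(a, a) is the variance of a."""
    ma, mb = mean(a), mean(b)
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / len(a)

# Hypothetical data: y = outcome, x1 and x2 = two correlated predictors
x1 = [1, 2, 3, 4, 5, 6]
x2 = [2, 1, 4, 3, 6, 5]
y  = [3, 4, 8, 9, 14, 15]

# Slopes from the Table 13.10 computing formulas
den = cov(x1, x1) * cov(x2, x2) - cov(x1, x2) ** 2
b12 = (cov(x1, y) * cov(x2, x2) - cov(x2, y) * cov(x1, x2)) / den
b21 = (cov(x2, y) * cov(x1, x1) - cov(x1, y) * cov(x1, x2)) / den
a   = mean(y) - b12 * mean(x1) - b21 * mean(x2)
print(round(b12, 3), round(b21, 3))   # 1.875 0.875

# OLS check: residuals have zero covariance with each predictor
resid = [yi - (a + b12 * u + b21 * v) for yi, u, v in zip(y, x1, x2)]
print(abs(cov(resid, x1)) < 1e-9 and abs(cov(resid, x2)) < 1e-9)   # True
```

The zero residual covariances are exactly what "purging" the shared covariation means: whatever X1 and X2 can linearly explain has been removed from the prediction errors.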

2. Evaluating Multiple Causes and the Overall Explanatory Power in Multiple Regression

One of the primary goals of multiple regression analysis is to assess the relative importance of
multiple causes. Through the mechanism of statistical control, OLS regression generates
estimates of the net effect of each independent variable on a dependent variable after
adjusting for the impact of the other variables. Criminologists interpret these unstandardized
partial regression coefficients (b1, b2, ..., bn) to assess how a unit change in a particular
independent variable influences the dependent variable, net of the other variables.

When evaluating the relative importance of particular variables, however, a comparison of the
numerical values of these unstandardized coefficients is not very useful because their size
depends on the actual units of measurement. For example, the numerical effect of a unit
change in educational attainment on the salaries of police officers, and any conclusions about
the relative importance of this variable, will be quite different when educational attainment is
measured in days of education, credit hours, or years of formal education.

Standardized Partial Regression Coefficient (Bj): the effect of a standard unit change in
a particular independent variable (e.g., Xj) on the dependent variable after controlling
for the influence of all of the other independent variables in the estimated regression
model. These standardized partial regression coefficients are also commonly referred
to as "beta weights".

As a result of these problems with comparing variables in their original units, questions about
the relative importance of particular variables in a multiple regression analysis are usually
evaluated on the basis of the numerical values of the standardized partial regression
coefficients. These standardized coefficients are also called "beta weights" (symbolized as
B1, B2, ..., Bn) and are similar to correlation coefficients in that they range from -1.0 to +1.0.

Based on the relative size of their standardized partial regression coefficients, you should be
able to identify the most important independent variable in predicting Y in the following
regression equations:

• Y = .40 X1 - .25 X2 + .33 X3 (i.e., B1 = .40; B2 = -.25; B3 = .33) --> Most Important = X1 (B1 = .40)
• Y = .30 X1 - .60 X2 + .19 X3 (i.e., B1 = .30; B2 = -.60; B3 = .19) --> Most Important = X2 (B2 = -.60)
• Y = .20 X1 - .35 X2 + .50 X3 (i.e., B1 = .20; B2 = -.35; B3 = .50) --> Most Important = X3 (B3 = .50)
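Beta weights can be obtained from the unstandardized coefficients by rescaling them into standard-deviation units (Bj = bj · sXj / sY, a standard conversion not derived in this chapter). A minimal sketch, with invented values:

```python
def beta_weight(b, sd_x, sd_y):
    """Standardized partial regression coefficient ('beta weight'):
    the unstandardized slope b rescaled by the ratio of the predictor's
    standard deviation to the outcome's standard deviation."""
    return b * sd_x / sd_y

# Hypothetical example: a slope of 2.0 units of Y per unit of X, where
# X has a standard deviation of 3 and Y a standard deviation of 12
print(beta_weight(2.0, 3.0, 12.0))   # 0.5
```

Because every beta weight is expressed in the same standard-deviation units, their absolute sizes can be compared directly, which is what the three example equations above rely on.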

Coefficient of Determination (R2): a measure of the proportion of variation in a
dependent variable that is explained by the independent variables included in the
regression model.

The overall explanatory power of a multiple regression equation is often assessed by the value
of the coefficient of determination (R2). Similar to its bivariate counterpart, the coefficient of
determination measures the proportion of variation in a dependent variable that is explained
or accounted for by knowledge of all of the independent variables included in the analysis. It is
also similar to the correlation ratio (η2) in ANOVA and represents the ratio of the variation in Y
that is explained by the regression model to the total variation in Y. This coefficient of
determination (R2) in a multiple regression analysis is calculated on the basis of the following
formula:

R2 = SSRegression / SSTotal = ∑(Y' - Ȳ)2 / ∑(Y - Ȳ)2

where Y' is the predicted value of Y under the estimated regression
equation, Ȳ is the mean value of Y, and Y is an individual observed score.

The test of the statistical significance of the coefficient of determination in multiple regression
involves the F-distribution. The null hypothesis underlying this test is that the independent
variables do not help explain the total variation in the dependent variable (i.e., Ho: R2Y.X1...Xn = 0).
The critical value of the expected F-ratio for testing this null hypothesis is based on the
significance level (e.g., α = .05) and its degrees of freedom (i.e., k and n - k - 1, where k = the
number of independent variables and n = the total sample size). As is true of other types of
hypothesis testing, an observed F-ratio that exceeds the expected F-ratio will lead to the
rejection of the null hypothesis because such a difference between these two values is a rare
outcome if the null hypothesis were true.
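The observed F-ratio for this test can be computed directly from R2 and the degrees of freedom as F = (R2/k) / [(1 - R2)/(n - k - 1)], a standard formula. A quick sketch, using the figures from the city robbery example later in this section (R2 = .371, k = 5 predictors, n = 342 cities):

```python
def f_for_r2(r2, k, n):
    """Observed F-ratio for testing Ho: R^2 = 0, with k independent
    variables and n cases (degrees of freedom: k and n - k - 1)."""
    return (r2 / k) / ((1 - r2) / (n - k - 1))

F = f_for_r2(0.371, 5, 342)
print(round(F, 1))   # 39.6 -- far above the .05 critical value (roughly 2.2)
```

An observed F near 40 against a critical value near 2 makes the sample result an extremely rare outcome under the null hypothesis, so Ho: R2 = 0 is rejected.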

3. Example of Multiple Regression Analysis

Due to potential errors from the complexity of hand calculations, computer software (e.g.,
SPSS, SAS, STATA) is widely used to (1) provide sample estimates of the various coefficients in
multiple regression (i.e., y-intercept, b1..n, B1..n, R2) and (2) test null hypotheses about their


statistical significance. This computer software easily extends multiple regression analysis to
include a large number of independent variables and to impose multiple statistical controls to
assess the net effect of each variable in the analysis.

Table 13.11 provides an example of the results of a multiple regression analysis of the
predictors of robbery rates in 342 cities. This multiple regression analysis was conducted using
SPSS software. For illustration purposes, this regression analysis focuses on evaluating the
following research questions:

1. What is the nature and strength of the bivariate relationship between a city’s
prevalence of single-parent families and its robbery rates?

2. How is this basic relationship between family structure and robbery rates influenced by
statistically controlling for other structural causes of crime (i.e., a city’s median family
income, unemployment rate, racial heterogeneity, population mobility)?

3. What is the most important variable in predicting a city’s robbery rate?

4. How much variation in the robbery rates across cities is explained by this entire set of
independent variables? Is the magnitude of the coefficient of determination sufficient
to reject the null hypothesis (Ho: R2=0)?

Table 13.11: Unstandardized (b) and standardized (B) partial regression coefficients
in predicting robbery rates based on city characteristics

                                 Unstandardized   Unstandardized   Standardized
Independent Variable             Model 1 (bYX)    Model 2 (bYX.Xn) Betas (BYX.Xn)

Y-intercept (a)                     2228.1 *         3038.0 *
X1 (single-parent families)         2276.6 *         -719.4           -.091
X2 (median family income)                              .026 *          .139
X3 (unemployment rate)                             13085.2 *           .526
X4 (racial heterogeneity)                           3164.3 *           .199
X5 (population mobility)                            2328.0 *           .222

R2                                    .084 **          .371 **

Notes:
* Observed t-value is statistically significant at p < .05.
** Observed F-value is statistically significant at p < .05.

Here are the answers to these research questions based on the information in Table 13.11:


1. The unstandardized regression coefficient for family structure (X1) in Model 1 indicates
that each additional percentage increase in a city’s proportion of households involving
single-parent families increases the robbery rate by 2277 per 100,000 residents. This
bivariate regression coefficient is statistically significant (probability of the observed
t-value is < .05), leading to the rejection of the null hypothesis that Ho: b1=0.

2. As shown by the comparison of Model 1 and Model 2, the net impact of family
structure on robbery rates drops dramatically after controlling for other structural
causes of crime in U.S. cities. The unstandardized partial regression coefficient for
single-parent families changes from a value of 2277 per 100,000 to -719 per 100,000
after controlling for these other factors. In fact, the impact of single-parent families on
robbery rates is not statistically significant once we take into account the fact that the
prevalence of these families in a city is strongly related to the city’s racial
heterogeneity, median family income, and unemployment rates. Once these other
variables are considered, the net impact of family structure on robbery rates is
statistically insignificant.

3. As indicated by the standardized partial regression coefficients (i.e., beta weights) in
the last column of Table 13.11, the most important variable in predicting a city’s
robbery rate is its unemployment rate (B = .53). Based on the relative size of their
standardized coefficients, the next two most important predictors of robbery rates are
the city’s population mobility (B = .22) and its racial heterogeneity (B= .20).

4. The combined regression model that includes these five predictors explains about 37%
of the variation in robbery rates. The impact of these variables in predicting robbery
rates is statistically significant (Ho: R2 = 0 is rejected at the .05 level of
significance).

Based on this multiple regression analysis, the predicted value of a city’s robbery rate can be
determined by plugging in values for each independent variable in the following regression
equation:

Y’ = 3038.0 – 719.4 X1 + .026 X2 + 13085.2 X3 + 3164.3 X4 + 2328.0 X5

For example, this regression equation predicts a robbery rate of 9,109 per 100,000 for a city
with the following characteristics:

• X1 = .50 (50% of its families involve single-parents)
• X2= 50,000 (median family income of $50,000)
• X3 = .20 (a 20% unemployment rate)
• X4 = .50 (50% of its population is black and 50% non-black = maximum racial heterogeneity)
• X5 = .40 (40% of population has changed/moved their residence in the last 5 years)

Y’ = 3038 – 719.4(.50) +.026(50,000) + 13085.2(.20)+ 3164.3(.50) + 2328(.40) = 9,109
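This arithmetic is easy to verify programmatically. A quick check of the predicted robbery rate, plugging the city's characteristics into the estimated coefficients from Table 13.11:

```python
# Estimated coefficients from the robbery-rate regression (Table 13.11)
coef = {"x1": -719.4, "x2": 0.026, "x3": 13085.2, "x4": 3164.3, "x5": 2328.0}
intercept = 3038.0

# The hypothetical city described in the text
city = {"x1": 0.50, "x2": 50_000, "x3": 0.20, "x4": 0.50, "x5": 0.40}

y_hat = intercept + sum(coef[k] * city[k] for k in city)
print(round(y_hat))   # 9109 robberies per 100,000 residents
```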



As is true of bivariate regression analysis, the discrepancy between the observed and expected
values of Y in a multiple regression analysis is indicative of errors in predicting Y on the basis of
the independent variables. By exploring the nature of this residual or error variance,
criminologists may be able to uncover patterns which may indicate non-linear trends or the
presence of other variables that should be included in subsequent multivariate analyses.

4. More Advanced Topics in Multiple Regression Analysis

Many criminological applications of multiple regression analysis involve more than two
independent variables, use multiple levels of measurement of these variables, and estimate
models with OLS and other methods that do not necessarily involve linear functions. These
extensions and alternative estimation procedures are discussed extensively in statistical
textbooks that focus on multivariate analysis. For the current purposes, however, we just want
to make you aware of the versatility of multiple regression analysis for various research
questions. These extensions of multiple regression analysis include the following:

• Nominal-level independent variables may be included in any regression analysis
(including both bivariate and multiple regression). These non-quantitative independent
variables are coded in binary form (0, 1) to create "dummy variables" that represent
the differences between pairs of categories (e.g., gender [0 = female; 1 = male]). When
these dummy variables are included in a multiple regression model that predicts a
quantitative dependent variable, both multiple regression analysis and ANOVA with
multiple independent variables will yield the same results.

• The effects of an independent variable on a dependent variable may be non-linear (i.e.,
the impact of a change in X on the value of Y depends on the level of X). Under these
conditions, a transformation of the particular X variable is performed (e.g., X' = log(X) or
X' = X2) and the multiple regression analysis is then conducted using these transformed
variables as the predictor variables.

• The joint or combined effects of two or more independent variables may be estimated
by incorporating their combined influences into a multiplicative term in a regression
model. For example, both the number of charges (X1) and number of prior arrests (X2)
may influence the length of a defendant’s prison sentence, but this sentence may be
especially severe for defendants with a high number of both current charges and prior
arrests. In this case, the estimated regression model would be Y = a + b1(Xcharges) +
b2(Xpriors) + b3(Xcharges x Xpriors), where the value of b3 indicates the net multiplicative
effect of the number of charges and the number of prior arrests on the length of prison
sentences.

• Multiple regression analysis using OLS estimation is often applied even when the
dependent variable is measured as a binary variable (e.g., victim of a crime [no=0;
yes=1]). However, when using binary measures of the dependent variable, the method


of logistic regression analysis is the more appropriate technique for estimating the net
effects of the independent variables in a multivariate analysis.
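Two of the extensions above, dummy variables and multiplicative interaction terms, can be illustrated with small sketches (pure Python; all data and coefficients below are invented for illustration):

```python
def mean(v):
    return sum(v) / len(v)

# 1) Dummy variables: with a single 0/1 dummy as the predictor, the OLS
#    slope equals the difference between the two group means, and the
#    intercept equals the mean of the category coded 0.
gender = [0, 0, 0, 1, 1, 1]                  # 0 = female, 1 = male (made-up)
score  = [10.0, 12.0, 14.0, 20.0, 22.0, 24.0]

mg, ms = mean(gender), mean(score)
b = sum((g - mg) * (s - ms) for g, s in zip(gender, score)) / \
    sum((g - mg) ** 2 for g in gender)
a = ms - b * mg
print(a, b)   # 12.0 10.0 -> female mean = 12, male mean = 12 + 10 = 22

# 2) Interaction terms: with a multiplicative term, the effect of one
#    extra charge depends on the number of prior arrests.
a2, b1, b2, b3 = 6.0, 2.0, 1.5, 0.8          # invented coefficients, months

def sentence(charges, priors):
    return a2 + b1 * charges + b2 * priors + b3 * (charges * priors)

print(sentence(3, 0) - sentence(2, 0))   # 2.0: one extra charge, no priors
print(sentence(3, 5) - sentence(2, 5))   # 6.0: same extra charge with 5 priors
```

The second part shows why b3 is called a multiplicative effect: the marginal effect of a charge is b1 + b3 × priors, so it grows with the number of prior arrests.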

Practice Applications for Learning Objective 4 (LO4):


Answer the following questions about multiple regression analysis:

Q1. What is the proper interpretation of the unstandardized partial regression coefficient of
30 for variable X1 in the following regression equation:
Y = 10 + 30 X1 + 40 X2 - 15 X3 + 20 X4
A. A unit change in X1 results in a 30 unit increase in Y.
B. A unit change in X1 results in a 30 unit increase in Y once controls are
introduced for X2.
C. A unit change in X1 results in a 30 unit increase in Y once controls are
introduced for X2, X3, and X4.

Q2. To identify the most important independent variable in predicting the dependent variable
in a multiple regression analysis, the researcher should examine which of the following
measures?
A. the variable with the largest unstandardized partial regression coefficient
B. the variable with the largest standardized partial regression coefficient
C. the y-intercept term
D. the coefficient of determination


Q3. A study of 10 biological, psychological, and sociological risk factors for predicting the
length of an offender’s criminal record yields a coefficient of determination of .75
(R2Y.X1...X10 = .75). What is the proper interpretation of this coefficient in this example?

A. 75% of the variation in these risk factors is explained by the person’s
criminal record.

B. 75% of the variation in an offender’s criminal record is explained by their
biological characteristics.

C. 75% of the variation in an offender’s criminal record is explained by these 10
biological, psychological, and sociological risk factors.




Q4. Based on the following multiple regression equation, what is the predicted sentence
length in months for a 20 year old (X1 =20) who has 5 prior arrests (X2= 5) and has lived in
the community for 10 years (X3 = 10): YMonths = 3.5 + .2 X1 + 4.1 X2 - .3 X3.
A. 3.5 months
B. 7.5 months
C. 10.8 months
D. 25.0 months

Correct Answers: Q1(C), Q2(B), Q3(C), and Q4(D)

Review

Use these reflection questions and directed activities to help master the material presented in
this chapter.

What?
Can you define the critical concepts covered in this chapter? Describe each of these concepts
in your own words and give at least two examples of each.

• Multivariate analysis
• Multiple causes
• Statistical control
• Multivariate contingency tables
• Partial correlation
• Multiple regression
• Standardized regression coefficient

How?

Answer the following questions and complete the associated activities.
• How does a multivariate analysis differ from a bivariate analysis? Describe the key
differences.
• How does a partial correlation coefficient “control for” the other variables in the analysis?
Use the elements in the formula for rXY.Z to illustrate this principle.


When?
Answer the following question:

• When should a multivariate analysis be used (i.e., for what type of research question)?

Why?

Think about the material you learned in this chapter and consider the following question.
• Why does knowing this material about multivariate analysis help you evaluate the
accuracy of criminological research findings and identify distorted claims and misuses
of data by other researchers, government agencies, and the mass media?



Analyzing Criminological Data Appendix A


GLOSSARY OF MAJOR TERMS IN ANALYZING CRIMINOLOGICAL DATA

Alpha Level (α): The probability of making a Type I error (i.e., rejecting a true null
hypothesis). It is also called the significance level for a test statistic. Common alpha
levels in criminological research are .05 and .01.

Analysis of Variance (ANOVA): A statistical technique for assessing the amount of
variability in a dependent variable that is explained by different categories of an
independent variable(s). For this type of analysis, the dependent variable is measured
on an interval/ratio scale and the independent variable is a categorical variable
(nominal/ordinal). The key idea of ANOVA is to break down the total variation in the
dependent variable into two components: (1) between-group variation and (2) within-
group variation. ANOVA allows the researcher to determine how much variation in a
dependent variable is accounted for by knowledge of the independent variable.

Alternative Hypothesis (Ha): A rival claim about an assumed population parameter or
differences in the assumed population parameters across groups. The alternative
hypothesis is what the researcher expects to find when testing the null hypothesis. It is
also called the research or substantive hypothesis. The alternative hypothesis is derived
from existing theory, previous research, and/or sound logical reasoning.

Attributes: Classification of a variable’s characteristics. For example, for the variable
“gender”, the attributes are male and female.

Bar Chart: A visual data display in which the length of a bar represents the frequency of
a single variable’s attributes. The category with the longest bar is the most frequently
occurring category.

Between-Group Variation (SSBetween): The variation in a dependent variable that is
represented by differences between the group means and the grand mean.

Binomial Distribution: A sampling distribution that represents the probability of
obtaining particular outcomes for a dichotomous variable within a given number of
trials or cases.

Bivariate Association/Analysis: The degree to which two variables are related or
associated with each other. These analyses of two variables focus on their joint
occurrence and/or association. Examples of bivariate analyses include contingency table
analysis of two variables, bivariate correlation, one-way ANOVA, and the linear
regression analysis of Y on X.

Box Plots: A visual display of data dispersion that shows the median value, range, and
interquartile range of the distribution of scores.


Chi-Square Distribution: The sampling distribution used in contingency table analysis to
define rare and common outcomes when the null hypothesis is true. The shape of this
distribution depends on the degrees of freedom. In the analysis of a contingency table,
the degrees of freedom is found by multiplying the number of rows minus one (rows-1)
and the columns minus one (columns -1).

Chi-Square Test (χ2): A hypothesis test of statistical significance used to evaluate the
relationship between qualitative variables. In a contingency table analysis, it is the
formal statistical test of independence between two categorical variables. The critical
values of this test depend on the significance level ( α ) and the degrees of freedom (df).

Coefficient of Determination (R2): A measure of the strength of the causal relationship
between two quantitative variables, representing the proportion of variation in a
dependent variable that is explained by an independent variable(s). It also measures the
proportional reduction in error in predicting a dependent variable on the basis of the
independent variable rather than solely based on the mean of the dependent variable.

Column Percentage Table: A contingency table that presents the percentages of cases
within each column that are observed within each row of the table.

Confidence Interval: A range of values with a particular probability of capturing the true
population value (symbolized as CI). Confidence intervals include a point estimate of
the population parameter and a margin of error (i.e., +/- some value or the “give and
take” component) around this point estimate for the population mean or proportion.

Confidence Level: The probability that a confidence interval will capture the true
population parameter (symbolized as CL). Common confidence levels in criminological
research involve z-scores or t-scores whose combined probability of occurrence is 68%,
80%, 90%, 95%, 98% or 99%.

Contingency Table: A table that represents the cross tabulation of two or more
qualitative variables. A contingency table of rows and columns represents the
relationship between two categorical variables. The size of this cross-tabulation of
variables is based on the number of categories of the row variable and the number of
categories of the column variable. It is called a contingency table because the value on
some variable is contingent upon the value on some other variable.

Correlation Coefficient (r): A summary measure of the direction and strength of the
linear association between two quantitative variables. It is a standardized measure that
ranges in value from -1.00 to +1.00. It is the most commonly used measure of statistical
association in criminological research.
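
The computation can be sketched directly from the definition; the variable names and data below are hypothetical, not from the text:

```python
import math

def pearson_r(x, y):
    """Pearson correlation: the covariation of x and y divided by the
    product of their variations, yielding a value between -1.00 and +1.00."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical data: prior arrests (x) and months sentenced (y) for five cases.
r = pearson_r([1, 2, 3, 4, 5], [2, 4, 5, 4, 5])
```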

Correlation Ratio (η2): A statistical measure of the proportion of variation in a
dependent variable that is explained by the categories of the independent variable. It is
also called eta square (η2). It is interpreted as the proportion of error reduction when
the means of each category of the independent variable is used to predict scores on the
dependent variable rather than just using the grand mean for these predictions. It is
also interpreted as the proportion of variation in the dependent variable that is
accounted for by the particular categories of the independent variable(s).

Critical Values: The standard scores that mark the boundary of the zone(s) of rejection
in testing a null hypothesis. These critical values for any sampling distribution (z, t, chi-
square, F, binomial) define what constitutes a rare or common outcome under a null
hypothesis and the particular decision rule used by the researcher. The actual values
are determined by the type of sampling distribution (e.g., t- or normal distribution), the
alpha level of the test, and the nature of the alternative hypothesis (e.g., if it is non-
directional, there are two critical values; if the alternative hypothesis is directional (< or
>), there is only one critical value). These critical value(s) are defined as “critical”
because they establish the boundary of the zone of rejection for rejecting the null
hypothesis. The confidence level in confidence intervals provides a related type of
“critical value”.

Cumulative Distributions: A data summary column that provides a “running total” for
another data summary column. It is a frequency or percentage distribution of a variable
measured on an ordinal/interval/ratio scale that shows the number or percentage of
observations that are above or below a particular category.

Data: Pieces of information that are collected to test hypotheses and answer research
questions.

Data Reduction: Organizing data into a simplified and useful form. The primary reason
that criminologists compute descriptive statistics (e.g., means, medians, standard
deviations) is for purposes of data reduction.

Decision Rules: Objective standards used to evaluate the relative accuracy of the null
and alternative hypotheses. These decision rules are often based on the type of
sampling distribution (e.g., z- or t-distribution), alpha level (e.g., .05, .01) or confidence
level (e.g., 90%, 95%), and the nature of the alternative hypothesis (e.g., is it directional
[<, >] or non-directional [≠]).

Degrees of Freedom (df): The number of independent observations or scores that are
“free” to vary when estimating population values from a sample.

Dependent Variable (DV): A variable whose values are assumed to be determined by an
independent variable(s). This is the outcome that is the primary concern in
criminological research. It is the “effect” variable in a cause-effect chain. For example,
in the X à Y causal chain, Y is the dependent variable. It is called the “dependent
variable” because its outcome depends on some other variable.

Descriptive Statistics: Statistics that summarize descriptive information about the
variables in a study. These summary statistics are sample values that are used for
purposes of data description and reduction. Common descriptive statistics include
various measures of central tendency or location (e.g., means, medians, modes),
dispersion and variation (e.g., standard deviation, variances), and association (e.g.,
correlations, regression coefficients).

Ecological Fallacy: Making claims about individuals when using groups as the unit of
analysis. For example, if low income neighborhoods have higher amounts of violent
crime than high income neighborhoods, it is an ecological fallacy to claim that poor
people are more violent than rich people because (1) you haven’t made observations to
compare individuals who vary on income and (2) it is possible that it is rich people in
poor neighborhoods who are committing the most crime in them.

2
Eta Squared (η ): The dominant measure of the strength of the relationship between a
dependent and independent variable in the analysis of variance (ANOVA). It is also
called the correlation ratio. It is interpreted as the proportion of error reduction when
the means of each category of the independent variable is used to predict scores on the
dependent variable rather than just using the grand mean for these predictions. It is
also interpreted as the proportion of variation in the dependent variable that is
accounted for by the particular categories of the independent variable(s).

Exhaustive: Attributes that represent all possible responses or outcomes. For example,
for the variable “gun ownership”, the exhaustive list of all possible response attributes
includes the categories “no” and “yes”.

F-Distribution: The sampling distribution used in ANOVA to define rare and common
outcomes when testing the null hypothesis of equal means across groups. This sampling
distribution of probabilities is used to test hypotheses about the nature of variation in
group differences on a quantitative dependent variable.

F-Ratio or F-test: The ratio of the average sum of squared differences in the between-
and within-group variation. The size of this observed ratio forms the basis for testing
hypotheses about mean differences using the F-sampling distribution. Critical values of
the F-sampling distribution are used to reach decisions about rejecting or not rejecting
hypotheses within the ANOVA framework. It is also used to test the significance of the
coefficient of determination (R2) in a regression analysis.

Frequency Distribution: A data summary column that presents the number of times
each variable attribute appears in a dataset.

“Garbage In, Garbage Out” (GIGO): An important principle acknowledging that
inaccurate data will lead to false conclusions. Major sources of inaccurate data in
criminological research include poor measures of concepts and bad sampling designs.
This basic principle suggests that statistical analysis is only as good as the data used in
the analysis. If you have garbage coming into the earlier stages of data analysis, you are
guaranteed to have garbage come out.

Histogram: A visual data display in which a continuous series of connecting bars vary in
length to represent the frequency of quantitative variable attributes. There is often no
blank space between the categories of a histogram to indicate that the variable has a
continuous distribution. A histogram is a visual equivalent to a bar chart for qualitative
variables, except that the bar chart often has space between the bars to
emphasize that the variable is not continuous.

Homogeneous: Attributes that classify similar things in the same categories. For
example, for the variable “type of evidence”, the separate categories of “expert
testimony”, “eyewitness testimony”, “gun residue”, and “DNA” are fairly homogeneous.
In contrast, the income categories of “< $10,000”, “$10,000 to $5 million”, and “> $5
million” are not homogeneous, especially given the range of diverse incomes in the
“$10,000 to $5 million” category.

Hypothesis: A prediction concerning the answer to a research question. The two major
types of hypotheses in criminological research involve the null hypothesis (symbolized
Ho) and the alternative hypothesis (symbolized Ha).

Hypothesis Testing: A statistical approach for evaluating claims about population
parameters. It involves the following process: (1) establish a null hypothesis and
decision rules for rejecting the null hypothesis, (2) draw random sample(s) and
compute sample statistics, (3) convert these sample statistics to standardized scores,
(4) compare the observed standardized scores with the expected standardized scores
based on the decision rules established, and (5) make a decision to reject or not reject
this null hypothesis based on these comparisons.
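
Steps (3) through (5) can be sketched for a one-sample test, assuming a known population standard deviation and hypothetical values:

```python
import math

def one_sample_z_test(sample_mean, mu0, sigma, n, z_critical=1.96):
    """Standardize the sample mean and compare it with the two-tailed
    critical value for alpha = .05; True means reject H0."""
    z = (sample_mean - mu0) / (sigma / math.sqrt(n))
    return z, abs(z) > z_critical

# Hypothetical example: sample mean of 105 vs. a null value of 100.
z, reject = one_sample_z_test(105, 100, 15, 36)  # z = 2.0, reject = True
```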

Independent Variable (IV): A variable that is used to explain or predict differences in a
dependent variable. It is the variable that causes or influences another variable. It is
called the “treatment” variable in experimental designs. In the simple causal chain X à
Y, X is the independent variable. It is the variable that precedes the outcome variable in
time.

Inferential Statistics: These are the results of statistical methods that are used to
estimate population values from the observations and analyses of a sample. Hypothesis
testing and developing confidence intervals are two common situations in which
inferential statistics are used.

Interquartile Range: A measure of dispersion that indicates the difference between the
25th percentile (1st quartile) and 75th percentile (3rd quartile) in a data distribution
(symbolized as IQR).
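
A small sketch of this measure in Python, using the median-of-halves convention for the quartiles (statistical packages use slightly different quartile rules) and hypothetical data:

```python
import statistics

def interquartile_range(data):
    """Difference between the 3rd quartile (75th percentile) and the
    1st quartile (25th percentile), via medians of the lower and upper halves."""
    s = sorted(data)
    mid = len(s) // 2
    lower_half = s[:mid]
    upper_half = s[mid + (len(s) % 2):]  # skip the median for odd-sized data
    return statistics.median(upper_half) - statistics.median(lower_half)

# Hypothetical arrest counts for eight precincts.
iqr = interquartile_range([1, 2, 3, 4, 5, 6, 7, 8])  # 6.5 - 2.5 = 4.0
```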

Interval Measurement: One of the types of levels of quantitative measurement
involving the properties of differences in magnitude and equal distance between the
response categories. For example, an interval-level measure of a person’s age could be
expressed in individual years (coded in years [1, 2, 3, 4, 5, etc…] or in equal 10-year wide
intervals [0-9, 10-19, 20-29, etc…]).

Joint Frequency: The number of cases that possess a particular combination of variable
attributes in a contingency table.

Kurtosis: A measure of the degree of peakedness in a data distribution. Distributions
with less dispersion/variability have more kurtosis. Leptokurtic distributions are
represented by a tight clustering of cases around the mean; these distributions have
sharper peaks than normal distributions and are both tall and thin. In
contrast, platykurtic distributions are symmetrical distributions with low levels of
peakedness. These distributions are wider and flatter than normal distributions and
cases are only loosely clustered around the mean.

Level of Measurement: The type of measurement scale associated with a variable’s
attributes. The four levels of measurement of a variable’s attributes are represented by
nominal, ordinal, interval, and ratio measures.

Linear Regression Analysis: A statistical procedure that is designed to assess the effect
of an independent variable (X) on a dependent variable (Y).

Linear Regression Equation: A summary specification of the linear relationship between
an independent variable (X) and a dependent variable (Y). The symbolic form of this
linear equation is Y = a + b X.

Linear Relationship: A functional relationship between two variables that is defined by a
straight line. The standard expression in the bivariate regression equation Y= a + b X is
an example of the specification of a linear relationship between variable X and Y.

Line Graph: A visual data display in which a line is used to connect a series of related
data points. The vertical axis of this graph often represents the frequency of particular
outcomes and the horizontal axis is indicative of another quantitative variable.

Marginal Frequencies: The frequency distribution for each variable’s attributes in a
contingency table. The row marginal frequencies represent the univariate frequency for
the categories of the row variable in a contingency table. The column marginal
frequencies represent the univariate frequency for the categories of the column variable
in a contingency table.

Margin of Error: The expected amount of error in a confidence interval (symbolized as
ME). It is the product of the z-score for the confidence level and the standard error.
This margin of error is the “+ or –” element that is shown in many polls, and it is also
the “give and take” component in statements about confidence intervals (e.g., “the mean
income is $55,000, give or take $7,000”). In a confidence interval for a population mean
( x̄ ± zCL SE ), the “± zCL SE” component is the margin of error.
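
As a quick sketch (hypothetical values; the z-score of 1.96 corresponds to a 95% confidence level):

```python
import math

def margin_of_error(stdev, n, z=1.96):
    """ME = z-score for the confidence level times the standard error of the mean."""
    standard_error = stdev / math.sqrt(n)
    return z * standard_error

# Hypothetical income data: s = $21,000, n = 36, 95% confidence.
me = margin_of_error(21000, 36)  # 1.96 * 3500, about 6860
```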

Mean: An average based on every score in a data distribution (symbolized as x̄). It is
computed by (1) summing up all scores and (2) dividing by the number of scores. It is
the most often used summary measure of central tendency and typicality in
criminological research.
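
The two-step computation is direct (hypothetical data):

```python
# Hypothetical number of prior offenses for five cases.
scores = [3, 7, 5, 9, 6]
mean = sum(scores) / len(scores)  # (1) sum all scores, (2) divide by N
print(mean)  # 6.0
```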

Mean Sum of Squares Between Groups (MSBetween): The average variation between the
group means in the analysis of variance (ANOVA). It is found by taking the total
between-group variation and dividing it by its degrees of freedom (k-1).

Mean Sum of Squares Within Groups (MSWithin): The average variation of individual
values from their group means in the analysis of variance (ANOVA). It is found by
taking the total within-group variation and dividing it by its degrees of freedom (n- k).

Measures of Central Tendency: Statistics that describe the center point of a data
distribution. The three most common measures of central tendency are the mode,
median, and mean.

Measures of Dispersion: Statistics that describe the degree to which cases are spread
across variable attributes. The most common types of measures of dispersion include
the variation ratio, range, interquartile range, standard deviation, and the variance.

Median: A value that represents the middle score in a data distribution (symbolized as
Md). It is the middle score in an ordered array of scores such that 50% of the scores fall
above it and 50% fall below it.

Multivariate Statistical Analysis: Statistical procedures for assessing the relationship
among and between three or more variables. Examples of multivariate analysis include
multivariate contingency table analysis, partial correlation, and multiple regression
analysis.


Mutually Exclusive: Attributes that offer only one category for each possible response
or outcome. For example, for the variable “number of times victimized by crime”, a
mutually exclusive classification of its attributes would include the following categories:
0, 1, 2, 3 or more. These categories are mutually exclusive because they don’t overlap.

Mode: The most frequently occurring score or category in a distribution (symbolized as
Mo). The mode may or may not be representative of the majority of cases in a
distribution. It can be found for both qualitative (nominal) and quantitative variables
(ordinal, interval, ratio measures).

Modal Interval: In a frequency distribution for a quantitative variable grouped into
categories, it is the interval that contains the largest number of observations.

Multivariate Analysis: A type of analysis based on two or more independent variables
that are within the same statistical analysis. Examples include multiple regression
analysis and 3-way ANOVA in which the effects of two or more independent variables
are assessed in terms of their impact on a dependent variable.

Multiple Causes: The recognition that human behavior is caused by multiple factors.
The goal of multivariate analysis is to assess the relative importance of different causal
factors.

Multiple Regression Analysis: A statistical method for assessing the net impact of
multiple independent variables on a quantitative dependent variable. It is designed to
identify the net effects of each variable on the dependent variable (after controlling for
the influence of the other variables) and to evaluate the relative importance of these
variables in explaining variation in this dependent variable.

Multivariate Contingency Table Analysis: The analysis of the cross tabulation of three
or more qualitative variables.

N (the Number of observations): The number of scores from which statistics are
calculated. When used to describe the number of observations in a sample or
population, it is often stated as “N =” some value.

Nominal Measurement: A level of measurement of qualitative attributes that reflect
the measurement of differences in type, kind or form. Numbers are often assigned to
nominal measures, but the numbers don’t refer to distance – they just represent
differences between categories. Examples include one’s race (coded 1= Black, 2 =
White, 3 = Other), gender (M or F), or type of academic major (Accounting, Biology,
Chemistry, etc.).

Nonprobability Sampling: A sampling method in which entities are not randomly
selected from a population. Sample elements may be selected because of their
convenience, availability, or on the basis of some other nonprobability criterion.

Normal Distribution: The most commonly used sampling distribution in statistical
analysis. It is also called the standard normal probability distribution or, more simply,
the “normal curve”. When a variable is normally distributed, it has well-defined
properties (i.e., it is symmetrical around its mean, bell-shaped, unimodal, and a constant
proportion of cases fall within given standard deviations from the mean).

Null Hypothesis (Ho): A claim about the assumed value of a population parameter or
differences in the assumed population parameters across groups. The null hypothesis
may involve the value of a specific population parameter (e.g. the population mean, the
population proportion) or a difference between two or more population parameters
(e.g., whether two population means are identical).

Observations: The act of gathering information. Primary data collection involves the
direct observation of people, objects, or events, whereas secondary data collection
involves the compilation and review of data already collected by criminal justice
officials, government agencies, or other sources.

One-Tailed or Two-Tailed Tests: Hypothesis tests based on areas in the tails of sampling
distributions that are consistent with the alternative hypothesis (Ha). These terms
distinguish between situations in which the alternative hypothesis is either directional
(i.e., one-tailed) or non-directional (i.e., two-tailed). In hypothesis testing, decision rules
are applied to define rare outcomes that lead to the rejection of the null hypothesis and
the directionality of the alternative hypothesis tells us whether there are one or two
tails of sampling distributions that define these rare outcomes. In a one-tailed test, the
region of the comparison distribution that establishes when the null hypothesis would
be rejected is all on one side (i.e., tail) of the distribution. In a two-tailed test, the region
is spread out in the tails of both sides of the sampling distribution.

Operationalization: The process of defining concepts so that they can be observed and
measured.

Ordinal Measurement: The type of level of measurement of quantitative variables in
which categories can be arranged in order (from low to high or high to low), but the
actual distance between them is either unknown or unequal. Examples include
frequency of smoking marijuana [coded “never,” “rarely”, “sometimes”, and “always”]
or the rating of your golf skills [coded low, medium, and high].

Ordinary Least Squares (OLS): A procedure used in regression analysis that generates
numerical estimates of the y-intercept (a) and slope (b) which minimizes the sum of the
squared errors in predicting Y from X.
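
The closed-form bivariate estimates can be sketched as follows (the data are hypothetical and chosen to follow an exact linear pattern):

```python
def ols_fit(x, y):
    """Closed-form OLS estimates of the slope (b) and y-intercept (a)
    that minimize the sum of squared prediction errors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    return a, b

# Hypothetical data following Y = 1 + 2X exactly.
a, b = ols_fit([1, 2, 3, 4], [3, 5, 7, 9])  # a = 1.0, b = 2.0
```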

Outliers: A case value that is numerically distant from the majority of case values.
These influential points and deviant observations may dramatically alter and distort the
calculated values of many descriptive statistics. After we identify outliers and influential
points in a data file, we try to understand why the particular case is an exceptional
observation, and then usually delete it from our analysis.

Partial Correlation Coefficient: A standardized measure of the relationship between
two quantitative variables after controlling for the influence of other variables.

Partitioning Variance: The separation of the total variance in a dependent variable into
two distinct parts: (1) variability between groups in their mean ratings on the dependent
variable and (2) variability within groups around their group means. This is the basic
statistical principle that underlies the analysis of variance (ANOVA).

Percentage Distribution: A data summary column based on a standardized frequency
distribution ranging from 0 to 100.

Percentile: A numerical point in a cumulative percentage distribution, between 0 and
100, with a specific percentage of cases that fall at or below it. For example, the 80th
percentile indicates that 80% of the cases fall at or below this point in a distribution.
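
A sketch of the idea with hypothetical scores:

```python
def percentile_rank(data, value):
    """Percentage of cases in the distribution at or below a given value."""
    return 100 * sum(1 for v in data if v <= value) / len(data)

# Hypothetical test scores: 8 of 10 cases fall at or below 80.
rank = percentile_rank([55, 60, 62, 68, 70, 71, 75, 80, 88, 95], 80)  # 80.0
```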

Pie Chart: A visual data display in which each “slice” represents the frequency of a
single variable attribute relative to the frequencies of all other attributes. All categories
of a variable are represented by the entire pie. Pie charts are most useful when you
have 5 or fewer categories. They become hard to read and of limited value when there
are more than 10 categories/slices.

Point Estimate: A sample statistic that represents the single best estimate of the
unknown population parameter.

Population: A group that contains all entities (people, places, events, agencies) we want
to learn about. Criminologists often use the process of statistical inferences to estimate
the characteristics of populations by taking random samples from them.

Population Parameters: Measures of a variable’s characteristics that are derived by
observing an entire population. These population parameters are specific numbers and
other types of statistical information that we are trying to estimate from a sample of
data. These values are unknown (that is why we drawn random samples to estimate
them) and they are represented by Greek symbols (e.g., a, s ). Sample statistics are
used to estimate the values of these population parameters.

Probability Sampling: A sampling method in which entities are randomly selected and
each entity has an equal and known chance of being selected from a population.

Qualitative and Quantitative Data: Observations of descriptive characteristics that
cannot be rank-ordered (qualitative data) compared to observations of rank-ordered
categories or numerical values based on measurement (quantitative data).

Qualitative Variables: Types of nominal variables (e.g., gender, race, place of
residence) that differ in their qualities, type, kind, and/or other distinct attributes.

Quantitative Variables: The measurement of characteristics that vary in their quantity,
magnitude, intensity, and/or duration. The words “how often…” or “how much…” are
immediate signals that the researcher is talking about the measurement of a
quantitative variable. Ordinal, interval, and ratio variables are quantitative variables.

Random Samples: The fundamental/essential type of sample that is always assumed in
statistical inference. Probability theory is unable to provide us with guidance about the
accuracy of our estimates of population parameters or to test statistical hypotheses about
these population parameters from a sample of data unless that sample is randomly
selected from that population.

Range: A measure of dispersion that indicates the difference between the largest and
smallest case values in a distribution (symbolized as R). Because these two extreme
points are often exceptional cases, intermediate ranges (like the interquartile range) are
often used to further gauge the dispersion of scores.

Ratio Measurement: The highest level of measurement of a quantitative variable. It has
the specific properties of a non-arbitrary zero point and equal distance between the
categories. When expressed in equal width intervals, variables such as age, years of
formal education, income, and number of traffic tickets are examples of ratio measures
because the value “0” is meaningful.

Raw Data: Unanalyzed data. It is the state of data that is often entered into statistical
packages (e.g., SPSS, SAS, STATA, Excel) for subsequent analysis.

Reductionist Fallacy: Making claims about groups when using individuals as the unit of
analysis. This error in inference occurs when data is collected on individuals but the
researcher makes claims about the properties of groups or geographical units (e.g.,
neighborhoods, states) for which they are members.

Reliability: The extent to which a variable, as measured, will produce consistent results
when repeated measures are taken.

Regression Coefficient: The slope in the regression equation, symbolized as b (when
describing unstandardized regression coefficients) or β (when referring to standardized
regression coefficients). It represents the change in Y for a unit change in X in the
following general linear equation: Y = a + bX.

Regression Equation: A mathematical function that represents the formal statement of
the relationship between a dependent variable (Y) and an independent variable (X).
The linear form of this regression equation is Y = a + b X.

Research Questions: Questions that guide investigations into events or processes and
help us to solve problems.

Row Percentage Table: A contingency table that presents the percentages of cases
within each row that are observed within each column of the table.

Sample: A subset of entities drawn from a larger group that we want to learn about.
These sample observations are often used to draw conclusions about a large population
of cases/observations.

Sampling Bias: Discrepancies between sample and population characteristics that result
from using a nonrandom or nonprobability sampling method. Common sources of
sampling bias include (1) using outdated lists to select sample observations, (2) growing
rates of non-response from survey respondents, and (3) population mobility and the
decline in land lines that introduce bias in telephone survey respondents.

Sampling Distributions: A hypothetical distribution that represents all possible
outcomes of a sample statistic based on an infinite number of samples. Sampling
distributions are used for comparative purposes to establish the likely outcomes under
particular conditions (e.g., the null hypothesis is true). For purposes of statistical
inference, knowledge of the nature of the sampling distribution allows the researcher to
make informed decisions about whether the observed sample values are a rare or
common outcome under a null hypothesis and the expected sampling distribution of
possible outcomes that derives from it. The most commonly used sampling
distributions in criminological research are the standard normal (z) probability
distribution, the t-distribution, the chi-square (χ2) distribution, and the F-distribution.

Sampling Error: The degree of inaccuracy in findings that results from using data from
random samples instead of an entire population. More specifically, sampling error
derives from the discrepancy between sample statistics and population parameters due
solely to random chance. The amount of sampling error is often unknown (because we
typically don’t know the value of the population parameter). However, based on
statistical theory, the magnitude of sampling error can be reduced by taking larger
random samples from the respective population.

Sample Statistics: Measures of a variable’s characteristics that are derived from a
sample of observations. These sample values of particular descriptive statistics are used
in inferential statistics to infer population values. The most commonly computed
sample statistics are means and standard deviations. We use sample statistics to
estimate population values. Latin letters are used to symbolize sample statistics (e.g., s
= standard deviation).

Scatterplot or Scatter Diagram: A visual display of the relationship between two
quantitative variables. The data points on this graph are the joint intersection of the Y
and X scores for each individual or case. The scatterplot provides an immediate visual
image of the general direction and relative magnitude of association between these two
quantitative variables when they are plotted graphically as data points in Cartesian
coordinates (x, y).

Significance Level (α): The specific probability level associated with outcomes defined
as rare under a null hypothesis. The significance level is also called the alpha (α) level of
a statistical test. It represents the probability of making a Type I Error in statistical
inference. The researcher decides what particular significance level to establish for any
given problem. The most commonly used significance levels are .05 and .01.

Skewed data: A data distribution characterized by asymmetry in the spread of values
across a variable’s attributes. A distribution is positively skewed (also called right
skewed) when the values cluster toward the bottom end of the scale and drift off
toward the other end or tail of the distribution (i.e., the right tail). In contrast, a data
distribution is negatively skewed (also called left skewed) when the values cluster
toward the top end of the scale and drift off toward the other end or tail of the
distribution (i.e., the left tail).

Slope (b): Also called the unstandardized regression coefficient, the slope represents
the numerical change in the value of Y for a unit change in X. For example, in the
regression equation (Y = 10 + 5 X), the slope is 5 and it means that a unit change in X
results in a 5-unit change in Y.
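
The example equation can be expressed as a prediction function:

```python
# The example equation Y = 10 + 5X as a prediction function.
def predict_y(x, a=10, b=5):
    return a + b * x

# Each one-unit increase in X raises the predicted Y by the slope (5).
print(predict_y(3))                 # 25
print(predict_y(4) - predict_y(3))  # 5
```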

Standard Error of the Estimate: The standard error is technically the standard deviation
of a sampling distribution. It is a measure of sampling error (i.e., the discrepancy
between sample statistics and population parameters that is attributable solely to errors
from random sampling).

Standard Deviation: A measure of dispersion that indicates the average deviation of
scores from the mean (symbolized as s for the sample standard deviation and σ for the
population standard deviation). It is computed as the square root of the average of the
squared deviations from the mean. Along with the mean and sample size, the standard
deviation is a descriptive statistic that is widely used in criminological research. The
larger the standard deviation, the greater the spread of scores around the mean.
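
The sample computation can be sketched as follows (hypothetical data; the n − 1 denominator is the usual sample formula):

```python
import math
import statistics

def sample_stdev(data):
    """Square root of the average squared deviation from the mean,
    using n - 1 in the denominator for a sample."""
    n = len(data)
    mean = sum(data) / n
    return math.sqrt(sum((x - mean) ** 2 for x in data) / (n - 1))

# Hypothetical sentence lengths; matches the library implementation.
s = sample_stdev([2, 4, 4, 4, 5, 5, 7, 9])
```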

Standard Normal (z) Distribution: A hypothetical sampling distribution of probabilities
that represents all possible outcomes of a sample statistic for large samples. Under this
sampling distribution, a specific proportion of cases fall within given standard deviations
(z-scores) from the mean. For example, in the normal distribution, about 68% of the
cases will fall within +1.00 and -1.00 standard deviations (z-scores) from the mean and
95% of the cases will fall within ±1.96 z-scores from the mean.
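
These proportions can be checked with the error function, which is related to the normal curve:

```python
import math

def area_within(z):
    """Proportion of the standard normal distribution within +/- z
    standard deviations of the mean, via the error function."""
    return math.erf(z / math.sqrt(2))

within_1sd = area_within(1.00)   # about 0.6827 (the 68% rule)
within_196 = area_within(1.96)  # about 0.95
```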

Standardized Partial Regression Coefficient (βj): A summary measure of the effect of
a standard unit change in a particular independent variable (e.g., Xj) on the dependent
variable after controlling for the influence of all of the other independent variables in the
estimated regression model. These standardized partial regression coefficients are also
commonly referred to as “beta weights”.

Standardized Scores: Scores that are adjusted to a common metric so that they can be
easily interpreted and compared across variables and samples. Common standardized
scores in criminological research include percentages, z-scores, t-scores, chi-square
scores, and F-ratios.

Statistical Control: A statistical method for adjusting for the impact of other variables
on the basic relationship between X and Y.

Statistical Independence: The null hypothesis tested in contingency table analysis that
assumes no relationship between the variables.

Statistical Inference: The process in which sample data are used to make claims about
the populations from which the samples were drawn and to test hypotheses.

Statistical Significance: A hypothesis test outcome that indicates a true difference
between the observed and hypothesized values that is not likely to occur based on
chance alone. Statistical significance is affected by sample size, so some caution is
necessary in interpreting the substantive meaning of a “statistically significant” result
when the sample size is large (N > 1,000) or small (N < 30).

Substantive Significance: The degree to which statistical results are substantively
meaningful for criminologists or the general public. The results of hypothesis tests may
be statistically significant (because they are based on a very large sample [N > 1,000]) but
substantively insignificant (because the observed differences are small). In contrast,
some sample results are substantively significant (even though they are not statistically
significant due to the sample size) because they suggest important differences to be
considered in future studies.

t-Distribution: Hypothetical sampling distributions of probabilities that represent the
distribution of all possible outcomes of a sample statistic. It is similar to the normal
curve (the standard normal [z] distribution) except that its shape is affected by sample
size. As the sample size increases, the t- and z-distributions converge toward the same
probability distribution. The t-distribution is used in constructing confidence intervals
and hypothesis testing when the population standard deviation is unknown and the
sample size is small (N < 50).

Total Percentage Table: A contingency table that presents the percentage of the total
number of observed cases within each cell.

Total Variation (SSTotal): The total amount of variation in a dependent variable,
represented by the sum of the squared deviations of individual scores from the overall
mean across all observations.

t-Tests: A particular type of statistical test of population parameters from sample data
that uses the t-distribution as the comparative basis for parameter estimation and
hypothesis testing.

Type I Error: The particular type of error in statistical inference that derives from the
decision to reject a true null hypothesis. This is also called the alpha level ( α ) or
significance level of a test.

Type II Error: The particular type of error in statistical inference that derives from the
decision to accept a false null hypothesis (i.e., failing to reject the null hypothesis when,
in fact, it is false). By minimizing the possibility of making a Type II Error, you increase
the probability of a Type I Error and vice versa.

Unit of Analysis: The entity analyzed in a research study. Major units of analysis in
criminological research include individuals, groups, crime events, organizations, and
various geographical units (e.g., city blocks, neighborhoods, cities, counties, states,
nations).

Univariate Analysis: The examination of data distributed across a single variable’s
attributes. It is a type of analysis that focuses on the distribution of one variable.
Common statistics used in univariate analyses include various descriptive statistics like
means, medians, modes, standard deviations, and variances.

Unstandardized Partial Regression Coefficients (bj..n): The effect on Y of a unit change
in a particular independent variable (e.g., Xj) after controlling for the influence of all of
the other independent variables (Xn) in the regression model.

Unstandardized Regression Coefficients (b): The slope in a regression equation when it
is expressed in the original units of a variable’s measurement. It measures the impact of
one or more independent variables (X) on a dependent variable. These coefficients are
considered “unstandardized” because they are expressed in the same original units as
the independent and dependent variables.

329


Validity: The degree to which a variable, as measured, accurately captures the concept
it intends to measure.

Variable: A characteristic that differs across people, objects, places, or events.

Variance: A measure of dispersion that indicates the average squared distance between
each case and the mean (symbolized by s² for the sample variance and σ² for the
population variance). The larger the variance, the greater the spread of scores around
the mean.

Variation Ratio: A measure of dispersion, ranging from 0 to 1, that indicates the
proportion of cases outside the modal category (symbolized as VR).
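A minimal sketch of this calculation, using hypothetical plea data:

```python
from collections import Counter

def variation_ratio(values):
    """VR: the proportion of cases falling outside the modal category."""
    counts = Counter(values)
    return 1 - max(counts.values()) / len(values)

# hypothetical pleas entered in five cases; the mode is "guilty"
pleas = ["guilty", "guilty", "guilty", "not guilty", "no contest"]
print(variation_ratio(pleas))  # 1 - 3/5 = 0.4
```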

Within-Group Variation (SSWithin): The variation in a dependent variable that is
represented by differences between each individual score from its group mean. This
source of variation in ANOVA-type analyses is similar to the “residual error variance” in
predicting Y from X in a linear regression analysis.
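The relationship among these sums of squares (SSTotal = SSBetween + SSWithin) can be illustrated with a short sketch on hypothetical group data:

```python
def ss_partition(groups):
    """Decompose total variation: SS_total = SS_between + SS_within."""
    all_scores = [x for g in groups for x in g]
    grand_mean = sum(all_scores) / len(all_scores)
    ss_total = sum((x - grand_mean) ** 2 for x in all_scores)
    ss_within = 0.0
    for g in groups:
        group_mean = sum(g) / len(g)
        ss_within += sum((x - group_mean) ** 2 for x in g)
    return ss_total, ss_total - ss_within, ss_within

# hypothetical sentence lengths (in months) imposed by three courts
sst, ssb, ssw = ss_partition([[2, 4, 6], [5, 7, 9], [8, 10, 12]])
print(sst, ssb, ssw)  # 78.0 54.0 24.0
```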

Y-Intercept (a): The point on a graph where the regression line crosses the y-axis. It is
the estimated value of “a” in the following linear regression equation: Y = a + b X. For
this regression equation, the y-intercept (a) is the value of Y when X=0.
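As a minimal sketch (with hypothetical data), the least-squares estimates of the slope b and the y-intercept a can be computed directly from their textbook formulas:

```python
def least_squares(xs, ys):
    """Estimate the slope (b) and y-intercept (a) in Y = a + bX."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx  # predicted value of Y when X = 0
    return a, b

# hypothetical data: prior arrests (X) and sentence length in months (Y)
a, b = least_squares([0, 1, 2, 3, 4], [6, 9, 11, 15, 19])
print(a, b)  # 5.6 3.2
```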

Zone of Rejection: The area or areas under a sampling distribution that represents rare
outcomes if the null hypothesis (Ho) is true. This zone of rejection identifies the
particular sample outcomes that will lead to the rejection of the null hypothesis in favor
of the alternative hypothesis. There are two zones of rejection in the tails of the
sampling distribution when the alternative hypothesis is non-directional. One zone of
rejection (i.e., in either the left [lower] or right [upper] tail of the distribution) is
identified when the alternative hypothesis specifies a particular direction.

Z-scores: Standardized scores that represent the number of standard deviations an
individual score is from the sample mean. Raw scores are converted to these
standardized z-scores, which have a mean of 0 and a standard deviation of 1, by the
following formula: Z = (x – mean of x) / standard deviation. Z-scores are also called
normal deviates to designate their use for normal distributions. Positive z-scores
identify scores that are greater than the mean and negative z-scores identify scores
that are below the mean.
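Applying the formula above to a small set of hypothetical raw scores:

```python
def z_scores(values):
    """Convert raw scores to z-scores: z = (x - mean of x) / sd."""
    n = len(values)
    mean = sum(values) / n
    sd = (sum((x - mean) ** 2 for x in values) / n) ** 0.5  # population formula
    return [(x - mean) / sd for x in values]

# hypothetical raw scores; the resulting z-scores have mean 0 and sd 1
zs = z_scores([2, 4, 6, 8])
print([round(z, 2) for z in zs])  # [-1.34, -0.45, 0.45, 1.34]
```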

Z-Tests: Particular types of tests of statistical significance and parameter estimation
that use the normal probability distribution for comparative purposes in defining rare
and common outcomes.

330
Analyzing Criminological Data Appendix B

Table of Area Under the Normal Curve (Select Values)

Column A Column B Column C Column A Column B Column C

Z-Score z to Mean Area > z Z-Score z to Mean Area > z

0.00 .0000 .5000 1.95 .4744 .0256
.10 .0398 .4602 1.96 .4750 .0250
.15 .0596 .4404 2.00 .4772 .0228
.20 .0793 .4207 2.05 .4798 .0202
.25 .0987 .4013 2.10 .4821 .0179
.30 .1179 .3821 2.15 .4842 .0158
.35 .1368 .3632 2.20 .4861 .0139
.40 .1554 .3446 2.25 .4878 .0122
.45 .1736 .3264 2.30 .4893 .0107
.50 .1915 .3085 2.33 .4901 .0099
.55 .2088 .2912 2.35 .4906 .0094
.60 .2257 .2743 2.40 .4918 .0082
.65 .2422 .2578 2.45 .4929 .0071
.70 .2580 .2420 2.50 .4938 .0062
.75 .2734 .2266 2.55 .4946 .0054
.80 .2881 .2119 2.58 .4951 .0049
.85 .3023 .1977 2.60 .4953 .0047
.90 .3159 .1841 2.65 .4960 .0040
.95 .3289 .1711 2.70 .4965 .0035
1.00 .3413 .1587 2.75 .4970 .0030
1.05 .3531 .1469 2.80 .4974 .0026
1.10 .3643 .1357 2.85 .4978 .0022
1.15 .3749 .1251 2.90 .4981 .0019
1.20 .3849 .1151 2.95 .4984 .0016
1.25 .3944 .1056 3.00 .4987 .0013
1.28 .3997 .1003 3.10 .4990 .0010
1.30 .4032 .0968 3.20 .4993 .0007
1.35 .4115 .0885 3.30 .4995 .0005
1.40 .4192 .0808 3.40 .4997 .0003
1.45 .4265 .0735 3.50 .4998 .0002
1.50 .4332 .0668 3.60 .49984 .00016
1.55 .4394 .0606 3.70 .49989 .00011
1.60 .4452 .0548 3.80 .49993 .00007
1.64 .4495 .0505 3.90 .49995 .00005
1.65 .4505 .0495 4.00 .49997 .00003
1.70 .4554 .0446 >4.01 ≈ .50000 ≈ .00000
1.75 .4599 .0401
1.80 .4641 .0359
1.85 .4678 .0322
1.90 .4713 .0287 Widely Used Z’s/Areas Are Highlighted

331

Critical Values of t-Distribution

One-Tailed Test
α = .10 .05 .025 .01 .005
------------------------------
Two-Tailed Test
α = .20 .10 .05 .02 .01
--------------------------------------------------------------------
df

1. 3.078 6.314 12.706 31.821 63.657
2. 1.886 2.920 4.303 6.965 9.925
3. 1.638 2.353 3.182 4.541 5.841
4. 1.533 2.132 2.776 3.747 4.604
5. 1.476 2.015 2.571 3.365 4.032
6. 1.440 1.943 2.447 3.143 3.707
7. 1.415 1.895 2.365 2.998 3.499
8. 1.397 1.860 2.306 2.896 3.355
9. 1.383 1.833 2.262 2.821 3.250
10. 1.372 1.812 2.228 2.764 3.169
11. 1.363 1.796 2.201 2.718 3.106
12. 1.356 1.782 2.179 2.681 3.055
13. 1.350 1.771 2.160 2.650 3.012
14. 1.345 1.761 2.145 2.624 2.977
15. 1.341 1.753 2.131 2.602 2.947
16. 1.337 1.746 2.120 2.583 2.921
17. 1.333 1.740 2.110 2.567 2.898
18. 1.330 1.734 2.101 2.552 2.878
19. 1.328 1.729 2.093 2.539 2.861
20. 1.325 1.725 2.086 2.528 2.845
21. 1.323 1.721 2.080 2.518 2.831
22. 1.321 1.717 2.074 2.508 2.819
23. 1.319 1.714 2.069 2.500 2.807
24. 1.318 1.711 2.064 2.492 2.797
25. 1.316 1.708 2.060 2.485 2.787
26. 1.315 1.706 2.056 2.479 2.779
27. 1.314 1.703 2.052 2.473 2.771
28. 1.313 1.701 2.048 2.467 2.763
29. 1.311 1.699 2.045 2.462 2.756
30. 1.310 1.697 2.042 2.457 2.750
31. 1.309 1.696 2.040 2.453 2.744
32. 1.309 1.694 2.037 2.449 2.738
33. 1.308 1.692 2.035 2.445 2.733
34. 1.307 1.691 2.032 2.441 2.728
35. 1.306 1.690 2.030 2.438 2.724
36. 1.306 1.688 2.028 2.434 2.719
37. 1.305 1.687 2.026 2.431 2.715
38. 1.304 1.686 2.024 2.429 2.712
39. 1.304 1.685 2.023 2.426 2.708


Critical Values of t-Distribution (continued)

One-Tailed Test
α = .10 .05 .025 .01 .005
------------------------------
Two-Tailed Test
α = .20 .10 .05 .02 .01
--------------------------------------------------------------------
df

40. 1.303 1.684 2.021 2.423 2.704
41. 1.303 1.683 2.020 2.421 2.701
42. 1.302 1.682 2.018 2.418 2.698
43. 1.302 1.681 2.017 2.416 2.695
44. 1.301 1.680 2.015 2.414 2.692
45. 1.301 1.679 2.014 2.412 2.690
46. 1.300 1.679 2.013 2.410 2.687
47. 1.300 1.678 2.012 2.408 2.685
48. 1.299 1.677 2.011 2.407 2.682
49. 1.299 1.677 2.010 2.405 2.680
50. 1.299 1.676 2.009 2.403 2.678
51. 1.298 1.675 2.008 2.402 2.676
52. 1.298 1.675 2.007 2.400 2.674
53. 1.298 1.674 2.006 2.399 2.672
54. 1.297 1.674 2.005 2.397 2.670
55. 1.297 1.673 2.004 2.396 2.668
56. 1.297 1.673 2.003 2.395 2.667
57. 1.297 1.672 2.002 2.394 2.665
58. 1.296 1.672 2.002 2.392 2.663
59. 1.296 1.671 2.001 2.391 2.662
60. 1.296 1.671 2.000 2.390 2.660
70. 1.294 1.667 1.994 2.381 2.648
80. 1.292 1.664 1.990 2.374 2.639
90. 1.291 1.662 1.987 2.368 2.632
100. 1.290 1.660 1.984 2.364 2.626
200. 1.284 1.657 1.978 2.355 2.613
∞. 1.282 1.645 1.960 2.326 2.576

332

Critical Values of the Chi-Square Test (χ²)

Significance level ( α )

df .10 .05 .025 .01 .001
-----------------------------------------------------------------------
1 2.706 3.841 5.024 6.635 10.828
2 4.605 5.991 7.378 9.210 13.816
3 6.251 7.815 9.348 11.345 16.266
4 7.779 9.488 11.143 13.277 18.467
5 9.236 11.070 12.833 15.086 20.515
6 10.645 12.592 14.449 16.812 22.458
7 12.017 14.067 16.013 18.475 24.322
8 13.362 15.507 17.535 20.090 26.125
9 14.684 16.919 19.023 21.666 27.877
10 15.987 18.307 20.483 23.209 29.588
11 17.275 19.675 21.920 24.725 31.264
12 18.549 21.026 23.337 26.217 32.910
13 19.812 22.362 24.736 27.688 34.528
14 21.064 23.685 26.119 29.141 36.123
15 22.307 24.996 27.488 30.578 37.697
16 23.542 26.296 28.845 32.000 39.252
17 24.769 27.587 30.191 33.409 40.790
18 25.989 28.869 31.526 34.805 42.312
19 27.204 30.144 32.852 36.191 43.820
20 28.412 31.410 34.170 37.566 45.315
21 29.615 32.671 35.479 38.932 46.797
22 30.813 33.924 36.781 40.289 48.268
23 32.007 35.172 38.076 41.638 49.728
24 33.196 36.415 39.364 42.980 51.179
25 34.382 37.652 40.646 44.314 52.620
26 35.563 38.885 41.923 45.642 54.052
27 36.741 40.113 43.195 46.963 55.476
28 37.916 41.337 44.461 48.278 56.892
29 39.087 42.557 45.722 49.588 58.301
30 40.256 43.773 46.979 50.892 59.703
31 41.422 44.985 48.232 52.191 61.098
32 42.585 46.194 49.480 53.486 62.487
33 43.745 47.400 50.725 54.776 63.870
34 44.903 48.602 51.966 56.061 65.247
35 46.059 49.802 53.203 57.342 66.619

333

Critical Value of F-Test at .05 alpha level

df1 1 2 3 4 5 6 7 8 9 10
df2
1 161 199 216 225 230 234 237 239 241 242
2 18.5 19.0 19.2 19.3 19.3 19.3 19.4 19.4 19.4 19.4
3 10.1 9.55 9.28 9.12 9.01 8.94 8.89 8.85 8.81 8.79
4 7.71 6.94 6.59 6.39 6.26 6.16 6.09 6.04 6.00 5.96
5 6.61 5.79 5.41 5.19 5.05 4.95 4.88 4.82 4.77 4.74
6 5.99 5.143 4.757 4.534 4.387 4.284 4.207 4.147 4.099 4.060
7 5.59 4.737 4.347 4.120 3.972 3.866 3.787 3.726 3.677 3.637
8 5.32 4.459 4.066 3.838 3.687 3.581 3.500 3.438 3.388 3.347
9 5.12 4.256 3.863 3.633 3.482 3.374 3.293 3.230 3.179 3.137
10 4.96 4.103 3.708 3.478 3.326 3.217 3.135 3.072 3.020 2.978
11 4.84 3.982 3.587 3.357 3.204 3.095 3.012 2.948 2.896 2.854
12 4.75 3.885 3.490 3.259 3.106 2.996 2.913 2.849 2.796 2.753
13 4.67 3.806 3.411 3.179 3.025 2.915 2.832 2.767 2.714 2.671
14 4.60 3.739 3.344 3.112 2.958 2.848 2.764 2.699 2.646 2.602
15 4.54 3.682 3.287 3.056 2.901 2.790 2.707 2.641 2.588 2.544
16 4.49 3.634 3.239 3.007 2.852 2.741 2.657 2.591 2.538 2.494
17 4.45 3.592 3.197 2.965 2.810 2.699 2.614 2.548 2.494 2.450
18 4.41 3.555 3.160 2.928 2.773 2.661 2.577 2.510 2.456 2.412
19 4.38 3.522 3.127 2.895 2.740 2.628 2.544 2.477 2.423 2.378
20 4.35 3.493 3.098 2.866 2.711 2.599 2.514 2.447 2.393 2.348
21 4.32 3.467 3.072 2.840 2.685 2.573 2.488 2.420 2.366 2.321
22 4.30 3.443 3.049 2.817 2.661 2.549 2.464 2.397 2.342 2.297
23 4.28 3.422 3.028 2.796 2.640 2.528 2.442 2.375 2.320 2.275
24 4.26 3.403 3.009 2.776 2.621 2.508 2.423 2.355 2.300 2.255
25 4.24 3.385 2.991 2.759 2.603 2.490 2.405 2.337 2.282 2.236
26 4.22 3.369 2.975 2.743 2.587 2.474 2.388 2.321 2.265 2.220
27 4.21 3.354 2.960 2.728 2.572 2.459 2.373 2.305 2.250 2.204
28 4.20 3.340 2.947 2.714 2.558 2.445 2.359 2.291 2.236 2.190
29 4.18 3.328 2.934 2.701 2.545 2.432 2.346 2.278 2.223 2.177
30 4.17 3.316 2.922 2.690 2.534 2.421 2.334 2.266 2.211 2.165
31 4.16 3.305 2.911 2.679 2.523 2.409 2.323 2.255 2.199 2.153
32 4.15 3.295 2.901 2.668 2.512 2.399 2.313 2.244 2.189 2.142
33 4.14 3.285 2.892 2.659 2.503 2.389 2.303 2.235 2.179 2.133
34 4.13 3.276 2.883 2.650 2.494 2.380 2.294 2.225 2.170 2.123
35 4.12 3.267 2.874 2.641 2.485 2.372 2.285 2.217 2.161 2.114
36 4.11 3.259 2.866 2.634 2.477 2.364 2.277 2.209 2.153 2.106
37 4.10 3.252 2.859 2.626 2.470 2.356 2.270 2.201 2.145 2.098
38 4.10 3.245 2.852 2.619 2.463 2.349 2.262 2.194 2.138 2.091
39 4.09 3.238 2.845 2.612 2.456 2.342 2.255 2.187 2.131 2.084
40 4.08 3.232 2.839 2.606 2.449 2.336 2.249 2.180 2.124 2.077
41 4.08 3.226 2.833 2.600 2.443 2.330 2.243 2.174 2.118 2.071
42 4.07 3.220 2.827 2.594 2.438 2.324 2.237 2.168 2.112 2.065
43 4.07 3.214 2.822 2.589 2.432 2.318 2.232 2.163 2.106 2.059
44 4.06 3.209 2.816 2.584 2.427 2.313 2.226 2.157 2.101 2.054
45 4.06 3.204 2.812 2.579 2.422 2.308 2.221 2.152 2.096 2.049

334

Critical Value of F-Test at .05 alpha level (Continued)

df1 1 2 3 4 5 6 7 8 9 10
df2
47 4.05 3.195 2.802 2.570 2.413 2.299 2.212 2.143 2.086 2.039
48 4.04 3.191 2.798 2.565 2.409 2.295 2.207 2.138 2.082 2.035
49 4.04 3.187 2.794 2.561 2.404 2.290 2.203 2.134 2.077 2.030
50 4.03 3.183 2.790 2.557 2.400 2.286 2.199 2.130 2.073 2.026
51 4.03 3.179 2.786 2.553 2.397 2.283 2.195 2.126 2.069 2.022
52 4.03 3.175 2.783 2.550 2.393 2.279 2.192 2.122 2.066 2.018
53 4.02 3.172 2.779 2.546 2.389 2.275 2.188 2.119 2.062 2.015
54 4.02 3.168 2.776 2.543 2.386 2.272 2.185 2.115 2.059 2.011
55 4.02 3.165 2.773 2.540 2.383 2.269 2.181 2.112 2.055 2.008
56 4.01 3.162 2.769 2.537 2.380 2.266 2.178 2.109 2.052 2.005
57 4.01 3.159 2.766 2.534 2.377 2.263 2.175 2.106 2.049 2.001
58 4.01 3.156 2.764 2.531 2.374 2.260 2.172 2.103 2.046 1.998
59 4.00 3.153 2.761 2.528 2.371 2.257 2.169 2.100 2.043 1.995
60 4.00 3.150 2.758 2.525 2.368 2.254 2.167 2.097 2.040 1.993
61 4.00 3.148 2.755 2.523 2.366 2.251 2.164 2.094 2.037 1.990
62 4.00 3.145 2.753 2.520 2.363 2.249 2.161 2.092 2.035 1.987
63 3.99 3.143 2.751 2.518 2.361 2.246 2.159 2.089 2.032 1.985
64 3.99 3.140 2.748 2.515 2.358 2.244 2.156 2.087 2.030 1.982
65 3.99 3.138 2.746 2.513 2.356 2.242 2.154 2.084 2.027 1.980
66 3.99 3.136 2.744 2.511 2.354 2.239 2.152 2.082 2.025 1.977
67 3.98 3.134 2.742 2.509 2.352 2.237 2.150 2.080 2.023 1.975
68 3.98 3.132 2.740 2.507 2.350 2.235 2.148 2.078 2.021 1.973
69 3.98 3.130 2.737 2.505 2.348 2.233 2.145 2.076 2.019 1.971
70 3.98 3.128 2.736 2.503 2.346 2.231 2.143 2.074 2.017 1.969
71 3.98 3.126 2.734 2.501 2.344 2.229 2.142 2.072 2.015 1.967
72 3.97 3.124 2.732 2.499 2.342 2.227 2.140 2.070 2.013 1.965
73 3.97 3.122 2.730 2.497 2.340 2.226 2.138 2.068 2.011 1.963
74 3.97 3.120 2.728 2.495 2.338 2.224 2.136 2.066 2.009 1.961
75 3.97 3.119 2.727 2.494 2.337 2.222 2.134 2.064 2.007 1.959
76 3.97 3.117 2.725 2.492 2.335 2.220 2.133 2.063 2.006 1.958
77 3.96 3.115 2.723 2.490 2.333 2.219 2.131 2.061 2.004 1.956
78 3.96 3.114 2.722 2.489 2.332 2.217 2.129 2.059 2.002 1.954
79 3.96 3.112 2.720 2.487 2.330 2.216 2.128 2.058 2.001 1.953
80 3.96 3.111 2.719 2.486 2.329 2.214 2.126 2.056 1.999 1.951
81 3.96 3.109 2.717 2.484 2.327 2.213 2.125 2.055 1.998 1.950
82 3.96 3.108 2.716 2.483 2.326 2.211 2.123 2.053 1.996 1.948
83 3.96 3.107 2.715 2.482 2.324 2.210 2.122 2.052 1.995 1.947
84 3.95 3.105 2.713 2.480 2.323 2.209 2.121 2.051 1.993 1.945
85 3.95 3.104 2.712 2.479 2.322 2.207 2.119 2.049 1.992 1.944
86 3.95 3.103 2.711 2.478 2.321 2.206 2.118 2.048 1.991 1.943
87 3.95 3.101 2.709 2.476 2.319 2.205 2.117 2.047 1.989 1.941
88 3.95 3.100 2.708 2.475 2.318 2.203 2.115 2.045 1.988 1.940
89 3.95 3.099 2.707 2.474 2.317 2.202 2.114 2.044 1.987 1.939
90 3.95 3.098 2.706 2.473 2.316 2.201 2.113 2.043 1.986 1.938
95 3.94 3.092 2.700 2.467 2.310 2.196 2.108 2.037 1.980 1.932
100 3.94 3.087 2.696 2.463 2.305 2.191 2.103 2.032 1.975 1.927

335
Appendix C: Notetaking Guides (Chapter 1)

Analyzing Criminological Data


NOTE-TAKING GUIDES (Fill out this form while reading the chapter)

Chapter 1: Asking important (and interesting) questions

Concepts Learning Objectives (LO)

- Research question LO1: Create three types of research questions


- Hypothesis
- Data LO2: Produce hypotheses from research questions
- Observations
- Qualitative and quantitative LO3: Explain where data come from and how they are
used to answer criminal justice questions

LO4: Differentiate between qualitative and


quantitative data


LO1: Create three types of research questions

Q1. What is a "research question"? Define it in your own words.


Research question:

Q2. List and define 3 different types of research questions.

Type of Research Question Definition


1. Descriptive What are the characteristics of a problem?
2.
3.

Q3. Give 3 examples of descriptive research questions involving prosecution and pretrial
services.

1. What percent of auto theft charges are dropped or dismissed?


2. _______________________________________________________

3. _______________________________________________________

336

Q4. Give 3 examples of relational research questions involving prosecution and pretrial
services.


1. In what types of cities are prosecutors most likely to dismiss auto
theft charges?

2. _____________________________________________________

3. ______________________________________________________

Q5. Give 3 examples of causal research questions involving prosecution and pretrial services.


1. Does the offender's age influence the likelihood that prosecutors
will dismiss auto theft charges?

2. ______________________________________________________

3. ______________________________________________________

Q6. What is the most common order or sequence for asking/answering the 3 types of
research questions to solve criminological problems?

1. ________________ 2. ______________ 3. ________________

337

LO2: Produce hypotheses from research questions


Q7. What is a "hypothesis"? Define it in the space below.


Hypothesis:

Q8. List two different hypotheses (HD) from the following descriptive research question:


What percent of U.S. auto theft charges are dropped or dismissed?

1. Hypothesis (HD1): 40% of auto theft charges are dropped

2. Hypothesis (HD2): ______________________________________


Q9. List two different hypotheses (HR) from the following relational research question:


In what types of cities are prosecutors most likely to dismiss auto theft charges?

1. Hypothesis (HR1): Prosecutors in Midwestern cities are more likely to dismiss car
theft charges than prosecutors in West-coast cities.

2. Hypothesis (HR2): _________________________________________________


__________________________________________________


Q10. List two different hypotheses (HC) from the following causal research question:


Does the offender's age influence the likelihood that prosecutors will dismiss
auto theft charges?

1. Hypothesis (HC1): Juveniles under 16 years old are more likely to have their car
theft charges dismissed than juveniles over 16 years old.

2. Hypothesis (HC2): __________________________________________________
___________________________________________________

338

LO3: Explain where data come from and how they are used to answer criminal
justice questions

Q11. What are "data" and what are they used for in criminological research?


Definition of data: ____________________________________________

Data are used for what purposes? 1. ____________________________

2. ____________________________

Q12. Define primary data collection and secondary data collection.


Primary data collection:

Secondary data collection:



Q13. Describe 3 ways of conducting primary data collection on juvenile gang activity:


1. Watch juvenile gang members interacting on the street at night.

2. _____________________________________________________

3. _____________________________________________________

Q14. Describe 3 ways of conducting secondary data collection on juvenile gang activity:


1. Compile newspaper stories about juvenile gang activity.

2. _____________________________________________________

3. _____________________________________________________

339

LO4: Differentiate between qualitative and quantitative data

Q15. What is a "quantitative variable" and what are its major properties?


Define quantitative variable: ____________________________________
____________________________________
Properties of quantitative variables:

1. ______________________________________________

2. ______________________________________________

3. ______________________________________________

Q16. What is a "qualitative variable" and what are its major properties?


Define qualitative variable: ____________________________________
____________________________________
Properties of qualitative variables:

1. ______________________________________________

2. ______________________________________________

3. ______________________________________________

Q17. Give examples of qualitative and quantitative variables for the following concepts:


1. Arrest Record:
Quantitative: Number of Prior Arrests (0, 1, 2, 3, 4, 5 or more)
Qualitative: Type of Prior Arrest (Violent, Property, Drug Crime)

2. Burglary Prevention Activities:
Quantitative: ___________________________________________
Qualitative: ___________________________________________

3. Criminal Sentence Given:


Quantitative: ___________________________________________
Qualitative: ___________________________________________

340
Appendix C: Notetaking Guides (Chapter 2)

Analyzing Criminological Data


NOTE-TAKING GUIDES (Fill out this form while reading the chapter)

Chapter 2: Creating measures based on questions

Concepts Learning Objectives (LO)


- GIGO (Garbage In, Garbage Out) LO1: Operationalize concepts into variables
- Operationalization
- Variable LO2: Judge the strength of specific measures
- Validity
- Reliability LO3: Evaluate the measurement properties of a
- Attributes classification
- Exhaustive
- Mutually Exclusive LO4: Classify variables based on level of
- Homogeneous measurement
- Levels of measurement

LO1: Operationalize concepts into variables

Q1. Define "GIGO" and explain why this concept is so important in criminological research.

Define GIGO: ________________________

Why is this concept so important in criminological research?

_____________________________________________________

_____________________________________________________

Q2. What are 4 major sources of GIGO in criminology research?


1. Bad measures of concepts

2.

3.

4.

347

Q3. Define “operationalization”

Operationalization:

Q4. Why is the concept “crime” so difficult to operationalize and measure?

Q5. What are the three initial steps in the process of operationalization?



1. Identify the concepts within the stated hypothesis. (What will you need to
measure to answer your research question?)

2. _______________________________________________________________

3. ______________________________________________________________

Q6. What is a variable?

Q7. Identify four variables in the adjudication and sentencing stages of criminal processing

1. Trial verdict

2. Number of pretrial motions


3. _____________________________________________

4. ______________________________________________

348

LO2: Judge the strength of specific measures


Q8. Define validity and give an example of valid measures of the strength of criminal evidence

Define validity: _______________________________________________


_______________________________________________
Valid measures of the strength of criminal evidence?

1. Presence of matching DNA at crime scene.
2. ______________________________________________________

3. ______________________________________________________

Q9. Define reliability and give an example of a reliable and unreliable measure of the concept
“safety precautions”.


Reliability: _______________________________________________________

Reliable measure of safety precautions: ________________________________

Unreliable measures of safety precautions: ______________________________


Q10. How can someone argue that police reports on crime incidents are not a valid or reliable
measure of the true extent of crime?

Why are police reports not a valid measure of the true amount of crime?

____________________________________________________

_____________________________________________________

Why are police reports not a reliable measure of the true amount of crime?

______________________________________________________

______________________________________________________

349

LO3: Evaluate the measurement properties of a classification


Q11. List the attributes of the following qualitative variables:


Qualitative Variable Attributes

1. Type of defense attorney? Private, Appointed, Public Defender

2. Race/Ethnicity of offender?

3. Weapon used in homicide?



Q12. List the attributes of the following quantitative variables:
Quantitative Variable Attributes

1. Number of felony charges? 0, 1, 2, 3, 4, 5 or more charges

2. Amount of monetary bail?

3. Days between arrest and conviction?


Q13. Define these measurement properties of a variable’s attributes:

Variable’s Attributes Definition of Term

1. Mutually Exclusive?

2. Exhaustive?

3. Homogeneous?

Q14. Provide a mutually exclusive classification of attributes for the following variables:

Variables Mutually Exclusive Attributes

1. Type of Charge? Misdemeanor, Gross Misdemeanor, Felony

2. Number of Prior Arrests?

3. Type of Plea?

4. Case Outcome?

350

Q15. Provide an exhaustive classification of attributes for the following variables:

Variables Exhaustive Classification

1. Type of Offense? Murder, Rape, Robbery, Burglary, Other

2. Length of Probation?

3. Type of Court?

4. Number of Charges?

Q16. Explain why the following variables do NOT involve homogeneous classifications of
attributes:

Variables (Attributes) Why not homogeneous?


1. Victim's age? 0-19 covers juveniles & adults; the
(0-19, 20-50, over 50) categories 20-50 and over 50 are too wide
2. Prison Sentence?
(0-2, 3-5,6 or more years)

3. Type of Crime?
(violent, property, drug offenses)

4. Type of evidence?
(scientific, circumstantial, testimony)

Q17. For each of the following variables, develop a mutually exclusive, exhaustive, and
homogeneous classification of its attributes:

Variables Mutually Exclusive, Exhaustive, & Homogeneous

Method of execution? Lethal injection, hanging, electrocution, other

Type of trial?

Type of court?

Number of victims?

351

LO4: Classify variables based on level of measurement

Q18. Define “level of measurement”

Level of measurement:

Q19. Fill in the process of operationalization and the specific location where decisions are
made about creating variables and the level of measurement:

Concepts Indicators _______________ _______________


Q20. What are two levels of measurement for categorical variables?


1. ________________________

2. _________________________

Q21. What are two levels of measurement for continuous variables?


1. ________________________

2. _________________________

Q22. What are the major properties of each of the following levels of measurement?

Levels of Measurement: Properties of these levels of measurement:

Nominal: Categories can't be ranked or ordered
numerically; allows you to make only qualitative
distinctions between categories.

Ordinal:

Interval:

Ratio:

352

Q23. Provide nominal measures of each of the following variables:

Variables Nominal measures:

Burglar alarm in house? No, Yes

Offender's Gender?

Type of Charge?

Sentence Imposed?

Q24. Provide ordinal measures of each of the following variables:

Variables Ordinal measures:

Level of household security? Low, Medium, High

Amount of bail?

Number of prior arrests?

Extent of evidence in case?


Q25. Provide interval/ratio measures of each of the following variables:

Variables Interval/Ratio measures:

Number of security alarms in house? 0,1,2,3,4,5... etc.

Monetary value of burglary loss?

Number of cases dismissed?

Trial length in days?


353
Appendix C: Notetaking Guides (Chapter 3)

Analyzing Criminological Data


NOTE-TAKING GUIDES (Fill out this form while reading the chapter)

Chapter 3: Collecting Data to Answer Questions

Concepts Learning Objectives (LO)


- Unit of analysis
- Reductionist fallacy LO1: Infer the unit of analysis used in research studies
- Ecological fallacy
- Samples LO2: Critique statistical claims and interpretations of data
- Populations based on the unit of analysis
- Probability sampling
- Nonprobability sampling LO3: Develop probability and nonprobability sampling
- Sampling bias strategies
- Sampling error
LO4: Assess levels of error/bias associated with sampling
procedures


LO1: Infer the unit of analysis used in research studies

Q1. Identify the 5 major stages in the research process:


1. Research Questions and Hypothesis

2. Operationalize Concepts

3. _________________________________________

4. _________________________________________

5. _________________________________________

Q2. GIGO problems are most likely at these 2 stages of the research process:


1. _________________________________________

2. _________________________________________


Q3. Define the term “unit of analysis”:

Q4. What are four different units of analysis in criminological research?

1. Individuals
2. _________________________________________
3. _________________________________________

4. _________________________________________

Q5. Give examples of the following units of analysis in criminological research:

Unit of Analysis Examples

Individuals: 1. Prosecutor’s charging decisions


2. _________________________________
3. _________________________________

Groups and 1. City’s murder rates


Geographical Areas: 2. _________________________________
3. _________________________________

Events: 1. prison riots


2. _________________________________
3. _________________________________

Organizations/Agencies: 1. Police Department’s Traffic Division


2. _________________________________
3. _________________________________


Q6. Identify the unit of analysis from the following hypotheses:

Hypothesis Unit of Analysis

1. Crime analysts in urban areas have higher Crime analysts

salaries than those in rural areas.

2. U.S. rates of imprisonment are higher than the


rates in other countries. _______________________

3. Guns are more common in murders than rapes. _______________________



LO2: Critique statistical claims and interpretations of data based on the unit of
analysis

Q7. Define the term “reductionist fallacy” and describe when it occurs in interpreting
statistical claims.


Reductionist Fallacy: __________________________________________

___________________________________________

When does this error in inference occur in criminological research?

______________________________________________________

_______________________________________________________


Q8. Define the term “ecological fallacy” and describe when it occurs in interpreting statistical
claims.


Ecological Fallacy: ___________________________________________

___________________________________________

When does this error in inference occur in criminological research?

______________________________________________________

_______________________________________________________


Q9. _____________________ is the error in making inferences about individuals on the


basis of data collected on groups.

Q10. ______________________ is the error in making inferences about group differences on


the basis of data from individuals.


Q11. Identify whether the following statistical inferences represent a legitimate claim, a
reductionist fallacy, or an ecological fallacy:

Type of
Unit of Observation, Unit of Analysis, and Hypothesis Statistical
Inference

A study of sex offenders in the U.S. finds that 30% of those


released from prison repeated their crimes. Based on this Reductionist
data, a newspaper reporter claims that the U.S. has higher Fallacy
rates of recidivism (repeat offending) than other countries.

Police departments that have higher proportions of female


officers are found to have lower incidents of excessive-force
_____________
complaints than police departments with fewer female
officers. Based on this information, you conclude that
female officers are less likely to be involved in excessive-
force complaints than male officers.

Previous research finds that homes with dogs are less likely
to be burgled than homes without dogs. Thus, you conclude _____________
that having a dog at home will decrease your
chances of being a victim of a home burglary.


LO3: Develop probability and non-probability sampling strategies


Q12. Define what researchers refer to as the "population" and "sample":

Concept Definition

Population:

Sample:

Q13. Are population or sample data implied by the following research questions?

Research Questions Population


or Sample?

1. The number of executions in the world's 184 countries. Population

2. The parole success of 20 inmates from a state prison. __________

3. The incidents of school violence in 100 U.S. high schools. __________

4. The proportion of homes in 50 countries with firearms. __________

5. The auto theft rate in 25 cities in Midwestern states. __________


Q14. Define and distinguish between "probability sampling" and "non-probability sampling":

Concept Definition and Distinct Features

Probability Sampling:

Non-probability Sampling:

Q15. Explain why criminologists argue that probability-based samples are better than non-
probability samples:


LO4: Assess levels of error/bias associated with sampling procedures


Q16. Define and distinguish between "sampling bias" and "sampling error":

Concept Definition and Distinct Features

Sampling Bias:

Sampling Error:

Q17. What are the major sources of sampling bias in criminological research?

1.

2.

3.

4.

Q18. What is the best way to reduce sampling error in criminological research?

Appendix C: Notetaking Guides (Chapter 4)

Analyzing Criminological Data


NOTE-TAKING GUIDES (Fill out this form while reading the chapter)

Chapter 4: Organizing and Displaying Data

Concepts Learning Objectives (LO)
- Data reduction, Raw data
- Univariate analysis LO1: Produce univariate displays of data distributions
- Frequency distributions from raw data
- Percentage distributions
- Cumulative distributions LO2: Calculate values that describe data distributions
- Bar chart / Pie chart /Histogram /
Line graph LO3: Interpret graphical displays of data distributions
- Percentile
- Skewed data and Outliers LO4: Judge the spread and shape of data distributions


LO1: Produce univariate displays of data distributions from raw data

Q1. Define the concepts "data reduction", "raw data", and "univariate analysis".

Concepts Definition

Data Reduction:
Raw Data:
Univariate Analysis:

Q2. Know the general structure of a dataset in a spreadsheet and fill in the labels that represent
a single case, a single variable, and a single cell in the following dataset:


Q3. What is a univariate frequency distribution and why is it important for purposes of data
reduction?

Concept Definition and Why is it Important?

Univariate Frequency Definition: ___________________________________


Distribution: ___________________________________
Why important? _______________________________
____________________________________

Q4. Define the notation and numerical values used in the following univariate frequency
distribution:

Frequency Distribution Interpret Notation and Numerical Values


Lethal Weapon f
Variable Name = ________________
Gun 30
Knife 15 f = ________________
Blunt Object 10 n = ________________
Other Weapon 5 30 = _________________
n = 60 10 = _________________


Q5. Develop a frequency distribution with proper labels from the following raw data on the
number of charges for 20 criminal defendants:

Raw Data Construct a frequency distribution for this data:


Number of Charges f
Number of Charges: ______ ____
1,2,3,1,1,1,2,1,2,1 ______ ____

2,1,1,3,1,1,2,1,1,2 ______ ____


n = _____
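The Q5 tally can be checked with a short Python sketch (the code and variable names here are an illustrative addition, not part of the workbook):

```python
from collections import Counter

# Raw data from Q5: number of charges for 20 criminal defendants
charges = [1, 2, 3, 1, 1, 1, 2, 1, 2, 1,
           2, 1, 1, 3, 1, 1, 2, 1, 1, 2]

freq = Counter(charges)        # tally each distinct value
n = sum(freq.values())         # total number of cases

for value in sorted(freq):
    print(f"{value} charge(s): f = {freq[value]}")
print(f"n = {n}")
```

Compare the printed frequencies against the distribution you built by hand; the f column must sum to n = 20.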


Q6. Using the following frequency distribution of murder rates in 20 countries, provide two
alternative ways of categorizing these data (i.e., combine/collapse the categories).

Frequency Frequency Distribution Frequency Distribution


Distribution (Alternative Categories #1) (Alternative Categories #2)

Murder Rates f Murder Rates f Murder Rates f


< 1.0 2 ____________ ___ ____________ ___
1.0 - 2.0 3 ____________ ___ ____________ ___
2.1 - 3.0 6 ____________ ___ ____________ ___
3.1 - 5.0 4 ____________ ___
5.1 - 10.0 3 n = 20 n = 20
> 10.0 2
n = 20

Q7. Identify some general/basic rules that should be followed in constructing frequency tables?

1. ___________________________________ 3. _________________________________

2. ___________________________________ 4. _________________________________

Q8. What's wrong with each of these frequency distributions of detective salaries (k=1000's)?

Freq. Dist. #1 Freq. Dist. #2 Freq. Dist. #3

Detective Salary f Detective Salary f Detective Salary f


< $20k 2 < $20k 2 < $20k 2
$20k - $30k 3 $30k - $40k 2 $20k - $22k 3
$25k - $50k 7 $51k - $60k 1 $23k - $37k 6
$45k - $75k 6 $70k - $80k 3 $38k - $76k 4
$70k - $100k 4 $85k - $100k 2 $77k - $113k 3
> $100k 2 > $150k 1 > $114k 2
n = 20 n = 20 n = 20

Problems: Problems: Problems:


1. _________________ 1. _________________ 1. _________________
2. _________________ 2. _________________ 2. _________________


LO2: Calculate values that describe data distributions

Q9. What is a percentage distribution and why is it often more useful than frequency
distribution?

Concept Definition and Usefulness?

Definition: ___________________________________
Percentage Distribution: ___________________________________
Why is it more useful than frequency distribution?
1.__________________________________________
2. __________________________________________
3. __________________________________________

Q10. Convert the following frequency distribution (f) into a percentage distribution (pct):

Execution Method f pct


Electrocution 4 __________
Firing Squad 2 __________
Gas Chamber 14 __________
Hanging 10 __________
Lethal Injection 170 __________
n= 200 Σ = 100 %
Computing Formula: pct = (f/n)100
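The computing formula pct = (f/n)100 can be applied to the Q10 table with a small Python sketch (an illustrative addition for checking your entries):

```python
# Percentage distribution for the Q10 execution-method table: pct = (f / n) * 100
freqs = {"Electrocution": 4, "Firing Squad": 2, "Gas Chamber": 14,
         "Hanging": 10, "Lethal Injection": 170}
n = sum(freqs.values())                       # n = 200

pcts = {method: (f / n) * 100 for method, f in freqs.items()}
for method, pct in pcts.items():
    print(f"{method}: {pct:.1f}%")
print(f"Sum = {sum(pcts.values()):.0f}%")     # the pct column must sum to 100%
```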

Q11. What is a cumulative distribution and what does it tell us?

Concept Definition and Usefulness?

Definition: ___________________________________
Cumulative Distribution: ___________________________________
What does it tell us? ___________________________
1.__________________________________________
2. _________________________________________


Q12. Cumulative distributions can only be constructed and interpreted for what type of
variables?

Q13. Convert a frequency distribution (f) into a cumulative frequency distribution (cf) and then
interpret the results of this cumulative distribution.

Days between arrest


and conviction f cf
< 30 days 10 __________
31 - 60 days 20 __________
61 - 90 days 80 __________
91 - 120 days 50 __________
> 120 days 40 __________
n= 200 n = 200
Number of defendants in this example who were convicted in:
60 days or less? _____________
90 days or less? ______________

Q14. Convert a percent distribution (pct) into a cumulative percent distribution (cpct) and then
interpret the results of this cumulative distribution.

Days between arrest


and conviction pct cpct
< 30 days 5 __________
31 - 60 days 10 __________
61 - 90 days 40 __________
91 - 120 days 25 __________
> 120 days 20 __________
Σ = 100 % Σ = 100 %
Percent of defendants in this example who were convicted in:
60 days or less? _____________
90 days or less? ______________
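Both cumulative columns in Q13 and Q14 are running totals, which a few lines of Python make explicit (an illustrative sketch, not part of the text):

```python
from itertools import accumulate

# Cumulative frequency (cf) and cumulative percent (cpct) for Q13/Q14
labels = ["< 30 days", "31-60 days", "61-90 days", "91-120 days", "> 120 days"]
f = [10, 20, 80, 50, 40]
n = sum(f)                                    # n = 200

cf = list(accumulate(f))                      # running totals of f
cpct = [100 * c / n for c in cf]              # running totals as percents

for lab, c, cp in zip(labels, cf, cpct):
    print(f"{lab}: cf = {c}, cpct = {cp:.0f}%")
# Defendants convicted in 60 days or less: cf[1] -> 30 cases (15%)
```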


Q15. What is the proper way to interpret frequency distribution tables for qualitative and
quantitative variables?

Type of Variables Proper Interpretation of the frequency/percent distributions

Qualitative
Variables:

Quantitative
Variables:


LO3: Interpret graphical displays of data distributions

Q16. What is a bar chart and what do the bars represent?

Concept Definition and Characteristics

Definition: ____________________________________________
____________________________________________
Bar Chart
Bars Represent? _______________________________________

_____________________________________________

Q17. Convert a frequency distribution into a bar chart:

Frequency Distribution:

Sentence Type f
Fine 30
Probation 20
Jail 10
Prison 15
Death 5
n = 80

Bar Chart: [bar chart with y-axis running 0–35 and five bars labeled A–E]

Label Categories? Match each letter in the bar chart with a Sentence Type:
A = ______________
B = ______________
C = ______________
D = ______________
E = ______________

Interpretation: 1. Most common sentence is ____________________________
2. Second most common sentence is _____________________
3. Least likely sentence type is _____________________

Q16. What is a pie chart and what do the pie slices represent?

Concept Definition and Characteristics

Definition: ____________________________________________
Pie Chart _____________________________________________

Pie Slices Represent? ____________________________________


______________________________________________


Q17. Convert a percent distribution into a pie chart:

Percent Distribution:

Case Outcome pct
Dismissed 10
Found Not Guilty 15
Guilty Plea 45
Judge Guilty Verdict 25
Jury Guilty Verdict 5
Σ = 100%

Pie Chart: [pie chart with five slices labeled A–E]

Label Categories? Match each letter in the pie chart with a Case Outcome:
A = ______________
B = ______________
C = ______________
D = ______________
E = ______________

Interpretation: 1. Most common case outcome is ____________________________


2. Second most common case outcome is _____________________
3. Least likely case outcome is _____________________

Q18. Visual methods for qualitative and quantitative variables:

Type of Variables Visual Graphic Method


1. ________________________________
Qualitative Variables:
2. ________________________________

1. ________________________________
Quantitative Variables:
2. ________________________________

Q19. What is a histogram and what do the connected bars represent?

Concept Definition and Characteristics

Definition: ________________________________________
Histogram _________________________________________

Connected Bars Represent: ___________________________


__________________________________________


Q20. Convert a frequency distribution into a histogram:

Frequency Distribution:

Prior Arrests f
0 prior arrests 50
1 prior arrests 30
2 prior arrests 15
3 prior arrests 10
4 or more arrests 5
n = 100

Histogram: [histogram with y-axis running 0–60 and five connected bars labeled A–E]

Label Categories? Match each letter in the histogram with a number of Prior Arrests:
A = ______________
B = ______________
C = ______________
D = ______________
E = ______________

Interpretation: 1. Most common number of prior arrests is _______________________


2. Second most common number of prior arrests is _______________
3. Least common number of prior arrests is _____________________

Q21. What is a line graph and what does the line represent?

Concept Definition and Characteristics

Definition: ________________________________________
Line Graph _________________________________________

Line Represents: ___________________________


__________________________________________

Q22. General conclusions about when to use data tables (frequency/percent distributions) or
visual methods (bar/pie charts, histograms, line graphs).

User Characteristic or Purpose Data Tables or Visual Methods

1. Statisticians and other Academic Audience? Method = ________________


2. Practitioners and the General Public? Method = ________________
3. Fast and easier interpretations? Method = _________________
4. Precision and accuracy of facts displayed? Method = _________________


LO4: Judge the spread and shape of data distributions



Q23. Define a "symmetrical data distribution" and "percentiles":

Concept Definition

Symmetrical Data Definition:______________________________________
Distribution _______________________________________

Percentile Definition:______________________________________
_______________________________________

Q24. Identify the particular spread pattern in the following frequency distribution tables:

Frequency Distribution Table Equal or Symmetrical Spread



Bail Amount f

≤ $1,000 400
$1,000 - $1,999 400
$2,000 - $2,999 400
Spread = ________________
$3,000 - $4,999 400

≥ $5,000 400
n = 2,000


Bail Amount f

≤ $1,000 300
$1,000 - $1,999 400
$2,000 - $2,999 600
Spread = _______________
$3,000 - $4,999 400
≥ $5,000 300
n = 2,000

Q25. How is the cumulative percentage (cpct) column in a frequency distribution table used to
identify whether the spread of scores across categories is symmetrical?


Q26. Know the following pattern of the relative frequency of cases in the categories of a
frequency distribution table when you have a symmetrical distribution:

1. The cases will cluster around the center categories of the distribution.

2. The relative frequency of cases decreases as you move away from this
central cluster.

Q27. Identify the particular spread pattern in the following frequency distribution tables:

Frequency Distribution Table Symmetrical or Skewed Spread



Bail Amount f

≤ $1,000 100
$1,000 - $1,999 400
$2,000 - $2,999 1000
Spread = ________________
$3,000 - $4,999 400

≥ $5,000 100
n = 2,000


Bail Amount f

≤ $1,000 600
$1,000 - $1,999 500
$2,000 - $2,999 400
Spread = _______________
$3,000 - $4,999 300
≥ $5,000 200
n = 2,000


Bail Amount f

≤ $1,000 50
$1,000 - $1,999 250
$2,000 - $2,999 400
Spread = _______________
$3,000 - $4,999 500
≥ $5,000 800
n = 2,000


Q28. Know that the primary graphic method for assessing the shape of the distribution of a
quantitative variable involves the construction of a histogram.

Q29. What is the meaning of the term "skewed data"?

Concept Definition
________________________________
Skewed Data:
________________________________

Q30. What are the two types of skewed distributions and what are their characteristics?

Types of Skewed Characteristics


Distributions
1. _________________ _______________________________________
2. _________________ _______________________________________

Q31. Identify the particular shape of the data distribution in the following histograms:

Histograms Symmetrical or Skewed Shape

[Histogram 1: bail amounts from < $1k to > $5k; y-axis 0–1500]
Shape = ________________

[Histogram 2: bail amounts from < $1k to > $5k; y-axis 0–1000]
Shape = _______________

[Histogram 3: bail amounts from < $1k to > $5k; y-axis 0–1000]
Shape = _______________


Q32. Identify the particular type of skewness in the following histograms:

Histograms Positive (right) or Negative (left) Skew?

[Histogram 1: bail amounts from < $1k to > $5k; y-axis 0–1000]
Type of Skew = _______________

[Histogram 2: bail amounts from < $1k to > $5k; y-axis 0–1000]
Type of Skew = _______________

Q33. What is an "outlier" and what effect does it have on the skew of a distribution?

Concept Definition and Effect on Skew/Shape of Data Distribution

Definition: ________________________________________
Outlier _________________________________________

Effect on Shape/Skew of Distribution:


1. ________________________________________
2. __________________________________________
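Q33's effect is easy to see numerically. The sketch below (hypothetical scores, an illustrative addition to the workbook) adds one extreme score and compares the mean and median: the mean is dragged toward the outlier, stretching the tail and skewing the distribution, while the median barely moves.

```python
from statistics import median

# How a single extreme score drags the mean toward the tail -- producing a
# positive (right) skew -- while the median is largely unaffected
scores = [4, 5, 5, 6, 5]            # hypothetical, roughly symmetrical scores
with_outlier = scores + [60]        # add one extreme high score (an outlier)

mean = lambda xs: sum(xs) / len(xs)
print(mean(scores), median(scores))                        # 5.0 5
print(round(mean(with_outlier), 2), median(with_outlier))  # 14.17 5.0
```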

Appendix C: Notetaking Guides (Chapter 5)

Analyzing Criminological Data


NOTE-TAKING GUIDES (Fill out this form while reading the chapter)

Chapter 5: Describing the "Typical" Case

Concepts Learning Objectives (LO)
- Descriptive statistics
- Measures of central tendency LO1: Calculate measures of central tendency
- Mode
- Median LO2: Select the most appropriate measure of central
- Mean tendency

LO3: Judge the accuracy of central tendency measures
based on data distributions

LO4: Infer a distribution’s shape based on multiple
measures of central tendency


LO1: Calculate measures of central tendency

Q1. Define "descriptive statistics" and why are they important in criminological studies:

Concept Definition and Why is it Important?

Definition: _______________________________________
Descriptive Statistics _______________________________________
Why important? __________________________________
________________________________________

Q2. Define the concept "measures of central tendency":

Concept Definition

Measures of Central Tendency Definition: _____________________________


_______________________________________
_______________________________________


Q3. List the three most common measures of central tendency:

3 common measures of central tendency?

1. __________________________________
2. ___________________________________
3. ___________________________________

Q4. Define the "mode":

Q5. "Compute" the numerical value of the mode for the following numbers of pretrial motions in 20
criminal trials:

Number of Pretrial Motions:


1,0,1,0,1,2,3,2,1,1 Mode = ________
1,2,3,2,1,5,1,15,1,1
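Your Q5 answer can be checked with `collections.Counter` (an illustrative Python addition, not part of the workbook):

```python
from collections import Counter

# Mode of the Q5 pretrial-motion counts: the most frequently occurring value
motions = [1, 0, 1, 0, 1, 2, 3, 2, 1, 1,
           1, 2, 3, 2, 1, 5, 1, 15, 1, 1]

mode_value, mode_freq = Counter(motions).most_common(1)[0]
print(f"Mode = {mode_value} (appears {mode_freq} times)")
```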

Q6. Identify the modal category and the modal interval in the following frequency tables:

Modal Category:

Weapon f
Gun 30
Knife 15
Blunt Object 10
Other Weapon 5
n = 60

Modal Category = ____________

Modal Interval:

Officer Salaries f
< $20k 10
$20k - $30k 25
$30.1k - $50k 45
$50.1k - $100k 15
> $100k 5
n = 100

Modal Interval = ____________



Q7. Level of measurement necessary for computing mode?


Q8. Define the median and list its major properties:

Concept Definition and Major Properties?

Definition: _______________________________________
Median _______________________________________
Major Properties? __________________________________
________________________________________

Q9. List the three-step process to compute the median position (MP) and median (Md) in a data
distribution:

Steps in Computing the Median Position and Median?

1. _________________________________________________
2. _________________________________________________
3. _________________________________________________

Q10. Identify the median position (MP) and median (Md) from the distribution of raw scores

Raw Scores ----> Rank-Ordered Scores Median Position (n+1)/2 Median

{0,8,4} --------> ________________ __________________ __________

{0,8,4,6} -----> _________________ __________________ __________

{0,8,4,6,7,2} -----> _________________ __________________ __________
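The three-step process from Q9 (rank-order the scores, find the median position MP = (n+1)/2, then read off the value) can be written directly in Python — a sketch for checking the Q10 rows, not part of the text:

```python
def median(scores):
    """Median via Q9's steps: rank-order, compute MP = (n+1)/2, read off the value."""
    ranked = sorted(scores)                # step 1: rank-order the scores
    n = len(ranked)
    mp = (n + 1) / 2                       # step 2: the median position
    if mp.is_integer():
        return ranked[int(mp) - 1]         # odd n: the single middle score
    lo, hi = ranked[int(mp) - 1], ranked[int(mp)]
    return (lo + hi) / 2                   # even n: average the two middle scores

print(median([0, 8, 4]))           # 4
print(median([0, 8, 4, 6]))        # 5.0
print(median([0, 8, 4, 6, 7, 2]))  # 5.0
```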



Q11. Calculate the median position (MP) and median category from rank-ordered categorical data
(i.e., ordinal data):

Frequency Distribution MP (n+1)/2 Median Category

Days from Arrest


to Conviction f cf cpct
< 30 days 10 10 5

31-60 days 20 30 15 __________ __________
61-90 days 80 110 55
91-120 days 50 160 80
> 120 days 40 200 100

n=200 100%


Q12. Define the mean (x̄) and each symbol in its computing formula (x̄ = Σx / n)

Concept Definition and Interpret Symbols in Computing Formula (x̄ = Σx / n)

Definition: _______________________________________
Mean _______________________________________
Interpret symbols in the mean’s computing formula (x̄ = Σx / n):
1. Σx = _________________________
2. n = ___________________________

Q13. Compute means from a data distributions of raw scores

Raw Scores Mean (x̄ = Σx / n)

5,8,4,2,6,5,5,4,6,5 __________

10,12,11,15,12,10,14,12,13,11 __________
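Both Q13 means follow directly from x̄ = Σx / n; here is an illustrative Python check (not part of the workbook):

```python
# Mean = (sum of scores) / (number of scores) for the Q13 raw data
set1 = [5, 8, 4, 2, 6, 5, 5, 4, 6, 5]
set2 = [10, 12, 11, 15, 12, 10, 14, 12, 13, 11]

mean1 = sum(set1) / len(set1)
mean2 = sum(set2) / len(set2)
print(mean1, mean2)   # 5.0 12.0
```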

Q14. Compute rates to compare units of different sizes (e.g., crime rates for 2 cities):

Crime Rate
City # of Crimes Population (Crimes ÷ Population) × 100,000

BigCity 4,000 2,000,000 200 per 100,000


SmallCity 200 5000 __________

Conclusion: More crime in BigCity but far higher crime rate in
SmallCity than BigCity
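The Q14 rate formula, rate = (crimes ÷ population) × 100,000, can be checked with a quick Python sketch (an illustrative addition, so you can verify the blank for SmallCity):

```python
# Crime rate per 100,000 residents: rate = (crimes / population) * 100,000
def crime_rate(crimes, population):
    return crimes / population * 100_000

print(round(crime_rate(4_000, 2_000_000)))   # BigCity: 200 per 100,000
print(round(crime_rate(200, 5_000)))         # SmallCity: 4,000 per 100,000
```

This is why rates, not raw counts, are needed to compare units of different sizes: BigCity has 20 times as many crimes but a far lower rate.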

Q15. Compute means from a frequency distribution table

Frequency Distribution Mean (x̄ = Σfx / n)

Prior Arrests (x) f fx


0 30 0
1 20 20
2 15 30 __________
3 10 30
4 5 20
n = 80 ∑fx = 100
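The fx column can be rebuilt in Python (an illustrative check, not from the text). Note that 0 + 20 + 30 + 30 + 20 sums to Σfx = 100, so the mean is 100 / 80 = 1.25:

```python
# Mean from a frequency table (Q15): mean = (sum of f*x) / n
x = [0, 1, 2, 3, 4]          # prior arrests
f = [30, 20, 15, 10, 5]      # frequency of each value

n = sum(f)                                       # 80 cases in total
sum_fx = sum(xi * fi for xi, fi in zip(x, f))    # sum of fx = 100
print(sum_fx / n)                                # 1.25
```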


LO2: Select the most appropriate measure of central tendency

Q16. List the primary strengths and limitations of the mode as a measure of central tendency?

Measure of Central
Tendency Strengths and Limitations

Strengths:
1. __________________________________________
2. __________________________________________
3. __________________________________________
Mode: 4. __________________________________________
Limitations:
1. __________________________________________
2. __________________________________________
3. __________________________________________

Q17. List the primary strengths and limitations of the median as a measure of central tendency?

Measure of Central
Tendency Strengths and Limitations

Strengths:
1. __________________________________________
2. __________________________________________
3. __________________________________________
Median: 4. __________________________________________
Limitations:
1. __________________________________________
2. __________________________________________
3. __________________________________________


Q18. List the primary strengths and limitations of the mean as a measure of central tendency?

Measure of Central
Tendency Strengths and Limitations

Strengths:
1. __________________________________________
2. __________________________________________
3. __________________________________________
Mean: Limitations:
1. __________________________________________
2. __________________________________________
3. __________________________________________

Q19. List three general rules for selecting measures of central tendency:

General rules for selecting measures of central tendency



1. ____________________________________________________________

2. ____________________________________________________________

3. ____________________________________________________________


LO3: Judge the accuracy of central tendency measures based on data distributions

Q20. The goal in selecting a particular measure of central tendency?

Q21. Identify when to use specific measures of central tendency for particular types of data
distributions:

Measure of Central Tendency Used for What Type of Data Distribution

Mode Qualitative or multimodal distributions

Median __________________________________________

Mean __________________________________________

Q22. Identify the location of the mode (Mo), median (Md), and mean (x̄) in the following
distributions:

Symmetrical Distribution Positive Skewed Distribution Negative Skewed Distribution



______ ___ ___ ____ ___ ___ ___

Q23. The mean will seriously overestimate the value of the average case in what type of
distribution?

Q24. The mean will seriously underestimate the value of the average case in what type of
distribution?


Q25. Identify the location of the mode (Mo), median (Md), and mean (x̄) in the following
distribution:

Bimodal Distribution

____ _____ ____


LO4: Infer a distribution’s shape based on multiple measures of central tendency

Q26. Draw the shape of the distribution from the following values of the mode, median, and mean:

Values of Measures of Central Tendency Draw Shape of the Distribution

Mode = 25

Median = 25

Mean = 25

Mode = 25

Median = 25

Mean = 5

Mode = 5

Median = 5

Mean = 25

Mode = 5

Median = 10

Mean = 25

Mode = 5, 25

Median = 15

Mean = 25


Q27. Estimate values of mode, median, and mean from the shape of the data distribution

Shape of Distribution Estimated Value of:

[Histogram 1: bail amounts from < $1k to > $5k; y-axis 0–1500]
Mode = ________
Median = ________
Mean = _________

[Histogram 2: bail amounts from < $1k to > $5k; y-axis 0–1000]
Mode = ________
Median = ________
Mean = _________

[Histogram 3: bail amounts from < $1k to > $5k; y-axis 0–1000]
Mode = ________
Median = ________
Mean = _________


Appendix C: Notetaking Guides (Chapter 6)

Analyzing Criminological Data


NOTE-TAKING GUIDES (Fill out this form while reading the chapter)

Chapter 6: Assessing Differences Among Cases


Concepts Learning Objectives (LO)
- Measures of dispersion
- Variation ratio LO1: Compare relative levels of dispersion within data
- Range distributions
- Interquartile range
- Variance LO2: Calculate appropriate measures of dispersion
- Standard deviation
- Kurtosis LO3: Select the most appropriate measure of dispersion
- Box plots
LO4: Assess visual depictions to compare variability
within and across distributions


LO1: Compare relative levels of dispersion within data distributions

Q1. Define "dispersion" and give three criminological examples of research questions involving
differences in dispersion/variability across groups

Concept Definition and Examples of Dispersion in Criminological Research


Definition: _______________________________________
Dispersion _______________________________________

3 research questions involving dispersion/variability across groups?


1. _________________________________________

2. _________________________________________

3. _________________________________________


Q2. Identify the data distributions with the lowest and highest degree of dispersion

Distribution A Distribution B Distribution C Distribution D

0 10 10 20
20 20 15 21
40 30 20 22
60 40 25 23
80 50 30 24

Distribution with Least Dispersion = ________
Distribution with Most Dispersion = ________
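One quick way to compare the Q2 columns is each distribution's spread from lowest to highest score. This Python sketch is an illustrative addition (the range statistic itself is formally introduced later in this chapter):

```python
# Comparing dispersion across the Q2 distributions via highest minus lowest score
dists = {"A": [0, 20, 40, 60, 80], "B": [10, 20, 30, 40, 50],
         "C": [10, 15, 20, 25, 30], "D": [20, 21, 22, 23, 24]}

ranges = {name: max(v) - min(v) for name, v in dists.items()}
print(ranges)   # D has the smallest spread, A the largest
```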

Q3. How do measures of dispersion increase our ability to more fully describe the typical case or
midpoint of a distribution?


LO2: Calculate appropriate measures of dispersion

Q4. List the 5 most common “measures of dispersion”:

5 Most Common Measures of Dispersion

1. ______________________
2. ______________________
3. _____________________
4. _____________________
5. _____________________

Q5. Define the “variation ratio” and list (1) when it is used and (2) its major properties:

Concept Definition and Characteristics


Definition: _____________________________________________
Variation _____________________________________________
Ratio (VR) 1. When used: _________________________________________
2. Major properties: ____________________________________

Q6. List the steps in computing the variation ratio (VR):

Steps in Computing the Variation Ratio (VR)?

1. _________________________________________________
2. _________________________________________________
3. _________________________________________________

Q7. The larger the value of the variation ratio, the _______ dispersion and the ______ clustering in
the data distribution.

Q8. The smaller the value of the variation ratio, the _______ dispersion and the ______ clustering in
the data distribution.


Q9. Compute the variation ratio (VR) for the following distributions:

Distribution A Distribution B

Type of Sentence f Type of Sentence f

Fine 5 Fine 5

Probation 10 Probation 25
Jail 80 Jail 30
Prison 5 Prison 40
n = 100 n = 100


VR = 1 – p (mode) = __________ VR = 1 – p(mode) = __________


Which distribution has the most dispersion (least clustering): __________
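The workbook's formula VR = 1 − p(mode) applies directly to both Q9 tables; here is an illustrative Python check (not part of the text):

```python
# Variation ratio: VR = 1 - p(mode), where p(mode) is the proportion of
# cases falling in the modal category
def variation_ratio(freqs):
    n = sum(freqs.values())
    p_mode = max(freqs.values()) / n
    return 1 - p_mode

dist_a = {"Fine": 5, "Probation": 10, "Jail": 80, "Prison": 5}
dist_b = {"Fine": 5, "Probation": 25, "Jail": 30, "Prison": 40}
print(round(variation_ratio(dist_a), 2))   # 0.2 -> heavy clustering in "Jail"
print(round(variation_ratio(dist_b), 2))   # 0.6 -> more dispersion
```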


Q10. Define the “range” and list (1) when it is used and (2) its major properties:

Concept Definition and Characteristics


Definition: _____________________________________________
Range (R) _____________________________________________
1. When used: _________________________________________
2. Major properties: ____________________________________

Q11. List the steps in computing the range (R):

Steps in Computing the Range (R)?

1. _________________________________________________

2. _________________________________________________


Q12. Compute the range (R) for the following sets of probation risk scores (x’s) for 10 juveniles

Probation Scores (X) Range ( R = XMax – XMin )

Set 1: 50, 70, 80, 90, 60, 40, 30, 90, 100, 10 _____________

Set 2: 70, 80, 90, 80, 70, 70, 80, 90, 100, 80 _____________

What set of scores has the most variability? ______________
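As a check on Q12, the range is simply the difference between the largest and smallest score (illustrative Python sketch, not part of the text):

```python
# Range R = X_max - X_min for Q12's two sets of probation risk scores.
set1 = [50, 70, 80, 90, 60, 40, 30, 90, 100, 10]
set2 = [70, 80, 90, 80, 70, 70, 80, 90, 100, 80]

r1 = max(set1) - min(set1)  # 100 - 10
r2 = max(set2) - min(set2)  # 100 - 70
print(r1, r2)  # 90 30 -> Set 1 has the greater variability
```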


Q13. Define the “Interquartile Range” and its major properties:

Concept Definition and Characteristics


Definition: _____________________________________________
Interquartile _____________________________________________
Range (IQR)
Major properties: ______________________________________
_____________________________________________

Q14. Why is the interquartile range often referred to as the “midspread” or “middle fifty”?

Q15. List the steps in computing the interquartile range (IQR):

Steps in Computing the Interquartile Range (IQR)?

1. Find the median value to divide the distribution in half

2. _________________________________________________

3. _________________________________________________
_________________________________________________


Q16. Identify the 50th percentile case, 25th percentile case, the 75th percentile case, and the
interquartile range in the following set of scores.

Set of Scores Interquartile Range Calculations (IQR = Q3 – Q1)

10
20 50th percentile score value = ________
35 25th percentile score value = ________
40
55 75th percentile score value = ________
60 Interquartile Range = ________
80
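Q16 can be checked by applying Q15's steps directly. The sketch below uses the "median of each half" rule from Q15; note that statistical software often uses slightly different quartile interpolation rules, so package output may differ a little:

```python
# Quartiles via Q15's steps: find the median, then take the medians of the
# lower and upper halves of the ordered scores.
from statistics import median

scores = sorted([10, 20, 35, 40, 55, 60, 80])
mid = len(scores) // 2
lower = scores[:mid]                                            # below the median
upper = scores[mid + 1:] if len(scores) % 2 else scores[mid:]   # above the median

q1, q2, q3 = median(lower), median(scores), median(upper)
print(q2, q1, q3, q3 - q1)  # 40 20 60 40
```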

Q17. Define the “variance” and its major properties:

Concept Definition and Characteristics


Definition: _____________________________________________
Variance _____________________________________________
Major properties: ______________________________________
_____________________________________________

Q18. List the steps in computing the “sample variance (s2)”:

Steps in Computing the Sample Variance (s2)?

1. Calculate the mean


2. _________________________________________________

3. _________________________________________________
4. _________________________________________________

Q19. Know the meaning of each symbol in the formula for the sample variance (s2)
Formula                            Interpret Symbols:

s2 = Σ(xi − x̄)2 / (n − 1)          (xi − x̄)2 = ____________________________

                                   Σ(xi − x̄)2 = _________________________________________


Q20. Compute the sample variance (s2) for the following data by filling in the table below:

X Xi - x (Xi - x )2 Steps in Computing Formula:


1 _______ _______ 1. Compute x = ∑X /n = _______
3 _______ _______ 2. Take deviations from x
5 _______ _______ 3. Square deviations from x
2 _______ _______ 4. Sum up the squared deviations = _____
1 _______ _______ 5. Plug in values in formula for sample

variance.
Σ(Xi − x̄) = _____        Σ(Xi − x̄)2 = _____

x̄ = ΣX / n = _____       s2 = Σ(Xi − x̄)2 / (n − 1) = _______
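The table in Q20 can be reproduced step by step in code (an illustrative Python sketch, not part of the text):

```python
# Sample variance s^2 = sum((xi - mean)^2) / (n - 1), following Q20's steps.
x = [1, 3, 5, 2, 1]
n = len(x)
mean = sum(x) / n                      # step 1: mean = 12 / 5 = 2.4
devs = [xi - mean for xi in x]         # step 2: deviations from the mean
sq_devs = [d ** 2 for d in devs]       # step 3: squared deviations
ss = sum(sq_devs)                      # step 4: sum of squared deviations
s2 = ss / (n - 1)                      # step 5: divide by n - 1
print(round(ss, 2), round(s2, 2))  # 11.2 2.8
```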

Q21. Visual recognition of small and large sample variances for sets of scores:

Sample Data Relative Size of Variances


Sample #1: 10, 5, 15, 40, 70, 60, 100 Sample with largest variance = ___________

Sample #2: 40, 42, 41, 43, 44, 42, 40 Sample with smallest variance = __________

Sample #3: 30, 60, 40, 50, 40, 40, 50

Q22. Identify the differences between the “sample variance (s2)” and “population variance (σ2 )”

Type of Variance        Formula                        How population variance differs from sample variance?

Sample Variance         s2 = Σ(xi − x̄)2 / (n − 1)      1. __________________________________________

                                                       2. __________________________________________

Population Variance     σ2 = Σ(xi − µ)2 / N            3. __________________________________________


Q23. Define the “standard deviation” and its major properties:

Concept Definition and Characteristics


Definition: _____________________________________________
Standard _____________________________________________
Deviation Major properties: ______________________________________
_____________________________________________

Q24. Know how to calculate the “sample standard deviation (s)” from (1) its general formula and
(2) by using the sample variance.

Two Methods for Calculating the Sample Standard Deviation (s)

1. Computing Formula: s = √[ Σ(xi − x̄)2 / (n − 1) ]

2. Take Square Root of Sample Variance: s = √s2
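Both methods in Q24 give the same number, which a few lines of Python confirm for Q20's data (illustrative sketch):

```python
# Method 1 (direct formula) vs. method 2 (square root of the sample
# variance), using Q20's data: x = 1, 3, 5, 2, 1 with s^2 = 2.8.
import math

x = [1, 3, 5, 2, 1]
mean = sum(x) / len(x)
s2 = sum((xi - mean) ** 2 for xi in x) / (len(x) - 1)

s_formula = math.sqrt(sum((xi - mean) ** 2 for xi in x) / (len(x) - 1))
s_from_s2 = math.sqrt(s2)
print(round(s_formula, 3), round(s_from_s2, 3))  # 1.673 1.673
```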

Q25. Assume that the sample standard deviations of annual income are far larger for female than
for male police officers ( sFemale = $10,000; sMale = $2,000 ). What does this tell you about the
differences in the dispersion/variability/clustering of incomes for the two samples?

Q26. List differences between the standard deviations of the “sample (s)” and “population (σ )”

Type of Standard        Formula                            How the population standard deviation differs from
Deviation                                                  the sample standard deviation?

Sample Standard         s = √[ Σ(xi − x̄)2 / (n − 1) ]      1. __________________________________________
Deviation
                                                           2. __________________________________________

Population Standard     σ = √[ Σ(xi − µ)2 / N ]            3. __________________________________________
Deviation


Q27. Describe two ways in which “outliers” (i.e., extreme scores) influence the size of the variance
and standard deviation.

Two ways outliers influence variances and standard deviations:

1. __________________________________________________

2. __________________________________________________


LO3: Select the most appropriate measure of dispersion

Q28. Identify the strengths and limitations of the variation ratio (VR):

Measure of Dispersion Strengths and Limitations of this Measure

Strengths: _______________________________________
Variation Ratio (VR) ________________________________________

Limitations: _______________________________________
________________________________________

Q29. Identify the strengths and limitations of the range (R):

Measure of Dispersion Strengths and Limitations of this Measure

Strengths: _______________________________________
Range (R) ________________________________________

Limitations: _______________________________________
________________________________________

Q30. Identify the strengths and limitations of the interquartile range (IQR):

Measure of Dispersion Strengths and Limitations of this Measure

Strengths: _______________________________________
Interquartile Range (IQR) ________________________________________

Limitations: _______________________________________
________________________________________


Q31. Identify the strengths and limitations of the variance (s2 or σ2 ):

Measure of Dispersion Strengths and Limitations of this Measure

Strengths: _______________________________________
Variance (s2 or σ2) ________________________________________

Limitations: _______________________________________
________________________________________

Q32. Identify the strengths and limitations of the standard deviation (s or σ ):

Measure of Dispersion Strengths and Limitations of this Measure

Strengths: _______________________________________
Standard Deviation ________________________________________
(s or σ)
Limitations: _______________________________________
________________________________________

Q33. List 4 general rules for selecting the appropriate measure of dispersion

General Rules for Selecting the Appropriate Measure of Dispersion

1. Use the variation ratio to describe nominal (qualitative) variables


2. _______________________________________________________

3. _______________________________________________________
4. _______________________________________________________


LO4: Assess visual depictions to compare variability within and across distributions

Q34. Define “kurtosis” and its major properties:

Concept Definition and Properties


Definition: _____________________________________________
Kurtosis _____________________________________________
Major properties: ______________________________________
_____________________________________________

Q35. List three general levels of kurtosis and identify their primary characteristics:

Levels of Kurtosis Definition and Characteristics


1. ___________________ 1. _______________________________

2. ___________________ 2. _______________________________

3. ___________________ 3. _______________________________

Q36. Draw graphs that indicate the following three general degrees of kurtosis:

Degrees of Kurtosis Draw Graphic Representation:



Mesokurtic


Leptokurtic


Platykurtic


Q37. Define “box plots” and their major properties:

Concept Definition and Properties


Definition: _____________________________________________
Box Plots _____________________________________________
Major properties: ______________________________________
_____________________________________________

Q38. List the 5 major components for reading/constructing a box plot:

Major Components for Reading/Constructing a Box Plot

1. The “box” in the plot displays the middle 50% of cases (the interquartile range)
2. _________________________________________________________________
3. _________________________________________________________________
4. _________________________________________________________________
5. _________________________________________________________________

Q39. Interpret the following box plots of the differences in the length of prison sentences for
offenders with no prior arrests and 1-3 prior arrests:

Interpreting Box Plot Results


1. What group received the longest sentences on average? _________________

2. What group has the largest variation in prison sentences? ________________

3. What group has the smallest variation in prison sentences? _______________


Q40. How are outliers defined and shown in box plots?

2 Ways of Defining Outliers in Box Plots:

1. _________________________________________________________________
2. _________________________________________________________________
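One widely used convention — assumed here, since the chapter's own rule is left for the student to fill in — flags a case as an outlier when it falls more than 1.5 × IQR beyond the box. A Python sketch with hypothetical sentence lengths:

```python
# 1.5 * IQR outlier fences (one common box-plot convention; the chapter may
# define outliers differently).
from statistics import median

def iqr_fences(scores):
    """Return (lower fence, upper fence) using medians of the two halves."""
    s = sorted(scores)
    mid = len(s) // 2
    lower = s[:mid]
    upper = s[mid + 1:] if len(s) % 2 else s[mid:]
    q1, q3 = median(lower), median(upper)
    spread = q3 - q1
    return q1 - 1.5 * spread, q3 + 1.5 * spread

months = [12, 18, 24, 24, 30, 36, 120]   # hypothetical sentence lengths
lo, hi = iqr_fences(months)
print(lo, hi, [v for v in months if v < lo or v > hi])  # 120 is flagged
```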


Appendix C: Notetaking Guides (Chapter 7)

Analyzing Criminological Data


NOTE-TAKING GUIDES (Fill out this form while reading the chapter)

Chapter 7: Using Sampling Distributions to Make Statistical Inferences

Concepts Learning Objectives (LO)

- Statistical Inference LO1: Identify “best practices” for making statistical
- Sampling distributions inferences
- Binomial distribution
- Standard normal (z) distribution LO2: Develop areas of rare and common outcomes
- Student’s t-distribution for different types of sampling distributions
- Chi-square distribution
- F-distribution LO3: Create standardized scores and derive
- Standardized scores probabilities from associated sampling
- Z-scores distributions
- Standard error (SE)
- Degrees of freedom (df) LO4: Demonstrate the impact of the standard error
and degrees of freedom on the shape of
sampling distributions



LO1: Identify “best practices” for making statistical inferences

Q1. Define "statistical inference" and explain why this concept is so important in criminological
research:

Concept Definition and Why Important


Statistical Definition: _____________________________________________
Inference
_____________________________________________

Why is this concept so important? __________________________


______________________________________________

______________________________________________


Q2. Identify 5 best practices to improve the accuracy of statistical inferences from
sample data to avoid the GIGO problem:

5 Ways to Improve Statistical Inferences:

1. Use valid and reliable measures of concepts


2. _____________________________________________
3. _____________________________________________
4. _____________________________________________
5. ____________________________________________

Q3. What is the “gold standard” for making statistical inferences from sample data?


LO2: Develop areas of rare and common outcomes for different types of
sampling distributions


Q4. Define “sampling distribution”:

Concept Definition
Sampling Definition: _____________________________________________
Distribution _____________________________________________

Q5. List two expected outcomes that should occur when taking multiple random
samples and plotting the sample means:


1. The shape of the sampling distribution = ____________________________
2. The sample means will cluster around
what area of the sampling distribution? ____________________________


Q6. Circle and identify areas in the following sampling distributions that represent rare
and common outcomes:

Distribution 1: Circle/identify “rare” and “common” outcomes in this sampling distribution.

Distribution 2: Circle/identify “rare” and “common” outcomes in this sampling distribution.


Q7. List 5 major types of sampling distributions used in statistical inference?

Types of Sampling Distributions

1. Binomial Distribution
2. ____________________________
3. ____________________________
4. ____________________________
5. ____________________________

Q8. Define the "binomial distribution" and list the 2 factors that determine the shape
of this sampling distribution:

Concept Definition and Factors Influencing the Distribution’s Shape



Definition: _____________________________________________
Binomial _____________________________________________
Distribution
Factors Influencing the Distribution’s Shape?
1. __________________________________________________
2. __________________________________________________


Q9. Interpret the following binomial distribution of the number of “successes” in a
random sample of 5 “trials”:

Number of Repeat Offenders among Random Sample of 5 Parolees

[Bar chart: probability (vertical axis, 0 to 0.3) of observing 0, 1, 2, 3, 4,
or 5 repeaters among the sampled parolees]

Most Common Outcome(s) : ________________
Least Common Outcome(s) : ________________
Approximate Probability of getting 2 or fewer repeat offenders: _______
Approximate Probability of getting 4 or 5 repeat offenders: ____________
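The bar heights in a binomial distribution come from the binomial formula. The sketch below assumes p = 0.5 purely for illustration — the figure does not state the underlying probability of reoffending:

```python
# Binomial pmf: P(k successes in n trials) = C(n, k) * p^k * (1-p)^(n-k).
from math import comb

n, p = 5, 0.5   # p = 0.5 is an assumed, illustrative value
pmf = {k: comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)}
for k, prob in pmf.items():
    print(f"P({k} repeaters) = {prob:.4f}")

# Cumulative questions are sums over the relevant outcomes:
print(round(sum(pmf[k] for k in (0, 1, 2)), 4))  # P(2 or fewer)
print(round(pmf[4] + pmf[5], 4))                 # P(4 or 5)
```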

Q10. When sample sizes involve more than _______ cases, the standard normal (z)
distribution replaces the binomial distribution for computing the probability of
dichotomous outcomes (i.e., success/failure, yes/no, support/oppose).

Q11. Define the "standard normal (z) distribution" and list the 3 properties that make it
useful for identifying rare and common outcomes:

Concept Definition and Properties that Identify Rare/Common Outcomes



Definition: _____________________________________________
Standard _____________________________________________
Normal (z)
Distribution Properties of Normal Curve to Identify Rare/Common Outcomes?

1. Value of the mode, median, and mean are the same


2. __________________________________________________

3. __________________________________________________


Q12. Identify the meaning of the lines and numbers in the following graph of the
standard normal (z) distribution (i.e., the “normal curve”)

Symbolic Representation of the Normal Curve


The value of 0 represents ? _______________________
The vertical lines and numbers under them (± 1.00, 2.00, 3.00) represent?
______________________________________________________

Negative values represent? ______________________________________
Positive values represent? ______________________________________
The space between two vertical lines represents: ____________________
_______________________________________________________
The entire graph covers what percent of the cases? ____________
Percent of cases greater than 0? _____ Percent of cases less than 0: _____

Q13. Statistical inferences about sample values that are derivable from populations
with a normal distribution of scores:

Normal Curve and Statistical Inferences about Sample Values

1. Samples should rarely produce statistical values that are more than
2 standard deviations from the mean.

2. _____________________________________________________
_____________________________________________________

3. _____________________________________________________
_____________________________________________________


Q14. Define the "Student’s t-distribution" and list when to use this sampling
distribution:

Concept Definition and When to use this Sampling Distribution



Definition: __________________________________________
Student’s
t-Distribution __________________________________________
When to use t-Distribution?
1. __________________________________________________
2. __________________________________________________

Q15. List the major properties of the t-distribution:

Properties of t-Distribution

1. Sample size influences the level of kurtosis in t-distributions


2. ________________________________________________
3. ________________________________________________
4. ________________________________________________
5. ________________________________________________

Q16. Define the "Degrees of Freedom" and list its computing formula in a single
sample:

Concept Definition and Computing Formula


Definition: __________________________________________
Degrees of __________________________________________
Freedom (df)
Computing formula in one sample: _____________________


Q17. Know that the t-distribution and the standard normal (z) distribution are similar
sampling distributions and are virtually identical in large samples (n > 50)

Q18. Define the "Chi-Square Distribution" and list the characteristics of this sampling
distribution:

Concept Definition and Properties


Definition: _______________________________________
Chi-Square _______________________________________
Distribution ( χ 2 ) Properties:
1. Shape of chi-square is dependent upon degrees of freedom
2. Chi-Square distributions have a positive skew
3. ____________________________________________
4. ____________________________________________

Q19. Circle and label the common and rare outcomes in the following chi-square
distribution


Sampling Distribution Circle and Label Common and Rare Outcomes



Chi-Square
Distribution


Q20. Define the "F-Distribution" and list the characteristics of this sampling
distribution:

Concept Definition and Properties


Definition: _______________________________________
F- Distribution _______________________________________
Properties:
1. F-distribution is positively skewed
2. _____________________________________________
3. _____________________________________________
4. _____________________________________________
5. _____________________________________________

Q21. Circle and label the common and rare outcomes in the following f-distribution


Sampling Circle and Label Common and Rare Outcomes in this
Distribution Sampling Distribution

F- Distribution


LO3: Create standardized scores and derive probabilities associated with


sampling distributions

Q22. Define "standardized scores" and how are they used in sampling distribution:

Concept Definition and Use of Standardized Scores

Definition: _______________________________________
Standardized _______________________________________
Scores
How are they used and why are they useful? ___________
____________________________________________
____________________________________________
____________________________________________

Q23. Fill in the conversion formulas for standard scores in the following sampling
distributions:

Sampling Distribution Name of Standard Conversion Formula


Scores

Binomial Distribution Binomial values __________________


Standard Normal (z)
z-scores __________________
Distribution


t-scores
Student t-Distribution __________________

Chi-Square Distribution χ2-values __________________

F-Distribution F-ratios __________________


Q24. Define "z-scores" and identify the symbols/notation in the conversion formula:

Concept Definition and Elements of Conversion Formula ( z = (xi − x̄) / s )
Definition: _______________________________________
z- scores _______________________________________

Identify Symbols/Notation in Conversion Formula:


z = ____________ xi = ___________________
x̄ = ____________ s = ___________________

Q25. Fill in blanks to convert sample scores (xi) to z-scores:

Applying formula to convert x scores to z-scores ( z = (xi − x̄) / s )

1. xi = 120, x̄ = 100, and s = 10 → z = +2.0

2. xi = 115, x̄ = 100, and s = 10 → z = ________
3. xi = 100, x̄ = 100, and s = 10 → z = ________
4. xi = 90, x̄ = 100, and s = 10 → z = ________
5. xi = 75, x̄ = 100, and s = 10 → z = ________
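The conversions in Q25 are one line of arithmetic each (illustrative Python sketch, not part of the text):

```python
# z = (xi - mean) / s expresses each raw score in standard deviation units.
def z_score(xi, mean, s):
    return (xi - mean) / s

for xi in (120, 115, 100, 90, 75):
    print(xi, z_score(xi, 100, 10))  # 2.0, 1.5, 0.0, -1.0, -2.5
```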

Q26. Fill in the z-scores associated with each of the sample scores (X’s) at the bottom of
the following normal distribution ( x̄ = 100; s = 20 ):

Convert raw scores (x) to z-scores ( x = 100; s = 20 )


X’s = 20 40 60 80 100 120 140 160 180
Z’s = ____ ____ ____ ____ ____ ____ ____ ____ ___


Q27. Converting standardized z-scores to probabilities and percentile ranks requires the
following information:

Necessary information to convert z-scores into probabilities and


percentile ranks

1. A variable’s scores must resemble a normal distribution.


2. Need a “normal curve” table that links z-scores to probabilities.
3. Know how to interpret this table.

Q28. Use this abbreviated Normal Curve Table to answer the following questions:

Areas Under a Normal Curve for Particular Z-Scores

Column A Column B Column C


Z-Score (+ or -) Mean to Z Area > Z
1.00 .3413 .1587
1.28 .3997 .1003
1.64 .4495 .0505
1.96 .4750 .0250
2.00 .4772 .0228
2.33 .4901 .0099
2.58 .4951 .0049
3.00 .4987 .0013

Find % of cases in a normal distribution:

a. Between Mean and Z of +1.00? Answer= 34.13%

b. Between Mean and Z of +1.96 ? Answer=_______

c. Between Mean and Z of -2.33 ? Answer=_______

d. Greater than Z of +1.00? Answer=_______

e. Greater than Z of +1.96? Answer=_______

f. Less than Z of -1.96? Answer= 2.5%

g. Less than Z of -1.00? Answer=______
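The table's entries can be reproduced from the normal cumulative distribution function, Φ(z) = ½(1 + erf(z/√2)). A sketch of how Columns B and C are derived:

```python
# Normal-curve areas from the CDF: Column B is Phi(z) - 0.5 (mean to z),
# Column C is 1 - Phi(z) (area beyond z).
import math

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

for z in (1.00, 1.28, 1.64, 1.96, 2.00, 2.33, 2.58, 3.00):
    print(z, round(phi(z) - 0.5, 4), round(1 - phi(z), 4))
```

By symmetry, the area below −z equals the area above +z, which is how answers like (f) follow from Column C.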


Q29. List steps to find z-scores representing particular percentile ranks (e.g., top 5%)

Steps in Computing Percentile Ranks from Normal Curve Table

1a. Pick Percentile Rank of Interest (e.g., Top 5%)


1b. Convert top 5% into a proportion (.05).
2. ______________________________________________
3. _______________________________________________
4. _______________________________________________

Q30. List steps to find z-scores that represent the middle proportion of cases in a normal
distribution (e.g., Middle 95%)

Steps in finding z-scores representing a particular “middle”


proportion of cases in a normal distribution

1a. Pick “Middle Percentage” of Interest (e.g., Middle 95%)


1b. Divide 95% by 2 (95%/2 = 47.5)
2. ______________________________________________
3. ______________________________________________
4. ______________________________________________
5. ______________________________________________
6. ______________________________________________


LO4: Demonstrate the impact of the standard error and degrees of


freedom on the shape of sampling distribution

Q31. Define the “standard error of the estimate” and what it measures:

Concept Definition and What it Measures

Definition: _______________________________________
Standard Error of _______________________________________
the Estimate (se)
What does it measure? ____________________________
____________________________________________
_____________________________________________

Q32. List 3 ways in which the standard error of the sampling distribution
and a variable’s standard deviation are similar.


Sources of similarity in the standard error and standard deviation

1. ____________________________________________________________
2. ____________________________________________________________
3. ____________________________________________________________



Q33. As the sample size increases, the estimated value of the standard
error ___________ in size.
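This fill-in can be confirmed numerically. Using the standard error of a sample mean, se = s/√n (the s value below is an arbitrary illustration), quadrupling n halves the standard error:

```python
# Standard error of the mean: se = s / sqrt(n); it shrinks as n grows.
import math

s = 20  # assumed sample standard deviation, for illustration only
ses = [s / math.sqrt(n) for n in (25, 100, 400)]
print(ses)  # [4.0, 2.0, 1.0]
```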



Q34. What are the properties of the degrees of freedom for sampling
distributions?

Properties of the Degrees of Freedom for Sampling distributions

1. The shape of most sampling distributions (except __________) is influenced


by sample size
2. For the t-distribution, as the sample size increases, the degrees of freedom
__________ in size.


Appendix C: Notetaking Guides (Chapter 8)

Analyzing Criminological Data


NOTE-TAKING GUIDE (Fill in form while reading chapter)

Chapter 8: Estimating Population Values from Sample Data

Concepts Learning Objectives (LO)

- Sample statistics LO1: Explain how sample statistics are used to estimate
- Population parameters population parameters
- Point estimate
- Confidence intervals LO2: Calculate and interpret confidence intervals from
- Confidence level large random samples
- Margin of error
LO3: Calculate and interpret confidence intervals from
small random samples

LO4: Demonstrate how the width of a confidence interval
is influenced by the confidence level, the sample
size, and the standard error of the estimate



LO1: Explain how sample statistics are used to estimate population
parameters

Q1. Define "statistical inference" and describe when these inferences are made:

Concept Definition and When are these inferences made:


Statistical Definition: _____________________________________________
Inference
_____________________________________________

When do criminologists make statistical inferences? ___________


______________________________________________

______________________________________________


Q2. Define "sample statistics", "population parameters" and describe how these two
concepts are related:

Concepts Definition and How are these concepts related:

Sample Statistics Definition: _______________________________________


_______________________________________
Population Parameters Definition: ______________________________________
______________________________________
How are concepts related? _________________________
_______________________________________
________________________________________

Q3. What Latin characters/symbols represent the following sample statistics?

Sample mean = ________ Sample variance = ________


Sample standard deviation = ________ Sample standard error= ________

Q4. What Greek characters/symbols represent the following population parameters?



Population mean = ________ Population variance = ________

Population standard deviation = ______ Population standard error= _______



Q5. Most criminological research involves computing sample __________ to represent
population _____________.


Q6. Give two reasons why sampling distributions are so important in making statistical
inferences:

1. ________________________________________________

2. _________________________________________________




Q7. If we take one large random sample from a population, the following pattern should
be found between our sample statistics (e.g., x ) and the unknown population
parameter (e.g., μ):

1. The single most likely outcome is that the sample mean ( x̄ ) will have the
same numerical value as the ______________________

2. There is a 68% chance that the sample estimate is within _____ standard
deviations of the true population value.

3. There is a 95% chance that the sample estimate is within _____ standard
deviations of the true population value.

4. There is a 99% chance that the sample estimate is within _____ standard
deviations of the true population value.
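These percentages can be illustrated by simulation (a sketch; the population values, sample size, and seed below are arbitrary choices, not from the text):

```python
# Draw many random samples from a normal population and check how often the
# sample mean falls within 1 and 2 standard errors of the true mean.
import random

random.seed(42)  # fixed seed so the run is reproducible
mu, sigma, n, reps = 100, 15, 50, 2000
se = sigma / n ** 0.5
means = [sum(random.gauss(mu, sigma) for _ in range(n)) / n for _ in range(reps)]

within_1 = sum(abs(m - mu) <= 1 * se for m in means) / reps
within_2 = sum(abs(m - mu) <= 2 * se for m in means) / reps
print(within_1, within_2)  # close to 0.68 and 0.95
```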

Q8. Sampling distributions enable criminologists to use one random sample to derive
the following information:

Information derivable from taking one random sample and knowing


the properties of sampling distributions:


1. A single best estimate of the population parameter (i.e., a “point
estimate”).
2. ____________________________________________________
___________________________________________________


LO2: Calculate and interpret confidence intervals from large sample data


Q9. Define "point estimates", “confidence intervals”, and provide examples of these
types of statistical inference:

Concepts Definitions and Examples


Point Definition: ___________________________________________
Estimates ___________________________________________
Examples: 1. ___________________________________________
2. ___________________________________________

Confidence Definition: ___________________________________________
Intervals ___________________________________________
(CI) Examples: 1. ___________________________________________
2. ___________________________________________

Q10. Properties of confidence intervals (CI) include the following:

Properties of Confidence Intervals (CI):

1. CI’s are centered around the point estimate derived


from the sample statistics.
2. ________________________________________
________________________________________

Q11. Particular information needed from the sample and sampling distribution to
calculate confidence intervals:

Required Information to Calculate Confidence Intervals (CI):

1. The sample was randomly selected from the population.


2. _______________________________________________
3. _______________________________________________


Q12. Define "confidence levels", “standard errors”, “margin of error”, and describe
how these concepts are related to each other:

Concepts Definitions and how the concepts are related


Confidence Definition: ___________________________________________
Levels (CL) ___________________________________________

Standard Definition: ___________________________________________
Errors (se) ___________________________________________

Margin of Definition: ___________________________________________
Error (ME)
___________________________________________
How concepts are related: ________________________________
___________________________________________

Q13. List 2 factors that influence a researcher's selection of a particular confidence


level:

1. _______________________________________________

2. _______________________________________________

Q14. Fill in the blanks below with the appropriate confidence level (CL), "middle"
location in the sampling distribution, and the standardized scores for large samples
(Note: know how these answers are actually derived from the other information):

Converting Confidence Levels into Standardized Scores in Large Samples

Confidence Level (CL) Location in Sampling Distribution Standardized Scores

68 % CL “Middle 68% of Cases” z = _______________

_______________ “Middle 80% of Cases” z = ± 1.28

90 % CL _______________________ z = ± 1.64

95 % CL “Middle 95% of Cases” z = _______________

_______________ _______________ z = ± 2.33

99 % CL “Middle 99% of Cases” z = _______________


Q15. List the computing formula for the standard error (se) and describe how the
sample size influences the size of its estimated value.

Concept Computing Formula and Effect of Sample Size


Computing Formula = _____________________
Standard
Error (se) How does sample size influence the size of the standard error?
___________________________________________

Q16. List the computing formula for the margin of error (ME) and calculate this measure
for data from a large random sample:

Concept Computing Formula and Examples of Calculations


1. Computing Formula: ME = _____________________
Margin of
Error (ME) 2. Calculate ME from the following data in a large sample:
Example 1: 68% CL, se=10 ---> ME = ± (1.00)(10) = ± 10
Example 2: 90% CL, se=10 ---> ME = ± ____________
Example 3: 95% CL, se=10 ---> ME = ± ____________

3. What effect does increasing the confidence level (CL) have on the
numerical value of the margin of error? ______________
_____________________________
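For reference, the relationship ME = ±(zCL)(se) can be sketched in a few lines of Python. This sketch is not part of the guide; the z-scores come from the Q14 conversion table (2.58 for the 99% CL is the common textbook value, which is left blank above):

```python
# Margin of error for large samples: ME = +/- (z for chosen CL) * (standard error).
# The z-scores below transcribe the Q14 table; 2.58 (99% CL) is a common
# textbook value, assumed here rather than taken from the guide.
z_for_cl = {68: 1.00, 80: 1.28, 90: 1.64, 95: 1.96, 98: 2.33, 99: 2.58}

def margin_of_error(cl, se):
    """Return the half-width of the confidence interval: z_CL * se."""
    return z_for_cl[cl] * se

# Example 1 from above: 68% CL, se = 10 -> ME = +/- 10
print(round(margin_of_error(68, 10), 1))  # 10.0
print(round(margin_of_error(90, 10), 1))  # 16.4
print(round(margin_of_error(95, 10), 1))  # 19.6
```

Note how raising the confidence level raises z and therefore widens the margin of error.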

Q17. Based on their symbolic representation, name each of the following types of
confidence intervals for large random samples:

Symbolic Form of
Confidence Interval Type of Confidence Interval for Large Samples
µ ± zCL sµ Confidence Interval for a Population Mean

x ± zCL sex ________________________________________

PPopulation ± zCL sp ________________________________________

pSample ± zCL sep ________________________________________



Q18. List the general steps in the construction and interpretation of confidence
intervals for large random samples:

General steps in constructing/interpreting confidence intervals in large samples



1. Take a large random sample from the population (n > 50).
2. Compute basic sample statistics (x̄, s, se)
3. ____________________________________________________________
4. ____________________________________________________________
5. ____________________________________________________________
6. ____________________________________________________________
7. _________________________________________________________________

Q19. Know how to develop and interpret confidence intervals for population means
and population proportions under the following conditions:


Problem #1: In a large random sample of homicide detectives (n=200), the average
detective investigated 100 homicides over a 10-year period (x̄ = 100). The estimated
standard error is 15. Develop and interpret confidence intervals (CI) for these sample
data.


A. List the appropriate formula for estimating the confidence interval for this unknown
population parameter: ________________________________
B. Calculate the 80% Confidence Interval: 100 ± 1.28 (15) = 80.8 to 119.2

C. Calculate the 90% Confidence Interval: ________________________________

D. Interpret the results of the 90% confidence interval and the sample estimates of the
unknown population parameter:

1. The single best estimate of the mean number of investigations by homicide

detectives in the U.S. is _________.

2. We are 90% confident that the actual population mean falls somewhere between
_________ and ________ homicide investigations per detective.
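The arithmetic behind parts B and C of Problem #1 can be sketched as follows (illustrative only; the 80% line reproduces the answer given in part B, and the 90% interval is my own calculation using z = 1.64 from the Q14 table):

```python
def ci_mean_large(xbar, se, z):
    # Confidence interval for a mean in a large sample: x_bar +/- z_CL * se
    return (xbar - z * se, xbar + z * se)

lo, hi = ci_mean_large(100, 15, 1.28)   # 80% CI (part B)
print(round(lo, 1), round(hi, 1))       # 80.8 119.2
lo, hi = ci_mean_large(100, 15, 1.64)   # 90% CI (part C)
print(round(lo, 1), round(hi, 1))       # 75.4 124.6
```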



Problem #2: A large random sample of homicides (n=100) indicates that 60% of these
killings were solved by an arrest (pArrest = 60%). The estimated standard error is 5%.
Develop and interpret confidence intervals (CI) from these sample data.

A. List the appropriate formula for estimating the confidence interval for this unknown
population parameter: _______________________________

B. Calculate the 95% Confidence Interval: _______________________________

C. Calculate the 98% Confidence Interval: _______________________________

D. Interpret the results of the 98% confidence interval and the sample estimates of the
unknown population parameter:

1. _____________________________________________________________

2. _____________________________________________________________

_____________________________________________________________
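Problem #2 follows the same pattern with a sample percentage. Below is a sketch (not part of the guide; the printed intervals are my own calculations using z = 1.96 and z = 2.33 from the Q14 table):

```python
def ci_prop_large(p, sep, z):
    # Confidence interval for a proportion in a large sample, in
    # percentage points: p_sample +/- z_CL * se_p
    return (p - z * sep, p + z * sep)

lo, hi = ci_prop_large(60, 5, 1.96)   # 95% CI (part B)
print(round(lo, 1), round(hi, 1))     # 50.2 69.8
lo, hi = ci_prop_large(60, 5, 2.33)   # 98% CI (part C)
print(round(lo, 2), round(hi, 2))     # 48.35 71.65
```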


LO3: Calculate and interpret confidence intervals from small sample data

Q20. Recognize the similarity of the process in constructing confidence intervals for
large (n > 50) and small (n < 50) samples.

Steps in constructing confidence intervals in small and large samples



1. Take a random sample from a population.
2. Compute sample statistics.
3. Convert a selected confidence level into standardized scores.
4. ____________________________________________________________
5. ____________________________________________________________

Q21. The primary difference between confidence intervals for large and small samples
is the sampling distribution used to convert confidence levels into
standardized scores. These standardized scores are based on the ____
distribution in small samples and the _______ distribution for large samples.

Q22. Based on their symbolic representation, name each of the following types of
confidence intervals for small random samples:

Symbolic Form of
Confidence Interval Type of Confidence Interval for Small Samples
µ ± tCL sµ Confidence Interval for a Population Mean

x ± tCL sex ________________________________________

PPopulation ± tCL sp ________________________________________

pSample ± tCL sep ________________________________________


Q23. The shape of the t-distribution depends on its degrees of freedom. The degrees of
freedom for the t-distribution are based on the following formula: ____________


Q24. Know how to read a t-distribution table and how to convert the value of alpha (α)
into a confidence level.

How to derive particular t-scores to represent particular confidence levels:



1. Convert the value of alpha (e.g., α = .10) in any t-distribution table into a
confidence level (CL) by applying the formula: CL = 100 – (α × 100).
2. ____________________________________________________________
3. ____________________________________________________________

Q25. Identify the t-value that represents the following confidence levels for particular
degrees of freedom

Degrees of
Confidence Level Freedom (df) t-value
90% CL 15 ± _________
95% CL 25 ± _________
95% CL 40 ± _________
99% CL 10 ± _________

Q26. Know how to develop and interpret confidence intervals for population means
and population proportions under the following conditions:


Problem #1: In a small random sample of auto thieves (n=16), the average age of the
offender was 22 years old (x̄ = 22). The estimated standard error is 2.5. Develop and
interpret confidence intervals (CI) for these sample data.

A. List the appropriate formula for estimating the confidence interval for this unknown
population parameter: ________________________________
B. Calculate the 90% Confidence Interval: 22 ± 1.753 (2.5) = 17.6 to 26.4
C. Calculate the 95% Confidence Interval: ________________________________
D. Interpret the results of the 95% confidence interval and the sample estimates of the
unknown population parameter:

1. The single best estimate of the mean age of auto thieves in the U.S. is _________.

2. We are 95% confident that the actual population mean falls somewhere between
_________ and ________ years of age.
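For small samples the sketch is identical except that t replaces z. The t-values 1.753 and 2.131 below are table values at df = n − 1 = 15; the 90% line reproduces part B:

```python
def ci_mean_small(xbar, se, t):
    # Small-sample confidence interval: x_bar +/- t_CL * se,
    # with t read from a t-table at df = n - 1.
    return (xbar - t * se, xbar + t * se)

lo, hi = ci_mean_small(22, 2.5, 1.753)  # 90% CI, reproduces part B
print(round(lo, 1), round(hi, 1))       # 17.6 26.4
lo, hi = ci_mean_small(22, 2.5, 2.131)  # 95% CI (part C)
print(round(lo, 1), round(hi, 1))       # 16.7 27.3
```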




Problem #2: A small random sample of sex offense cases (n=41) indicates that 25% of
these cases had DNA evidence (pDNA= 25%). The estimated standard error is 4%.
Develop and interpret confidence intervals (CI) from these sample data.

A. List the appropriate formula for estimating the confidence interval for this unknown
population parameter: _______________________________

B. Calculate the 95% Confidence Interval: _______________________________

C. Calculate the 99% Confidence Interval: _______________________________

D. Interpret the results of the 99% confidence interval and the sample estimates of the
unknown population parameter:

1. _____________________________________________________________

2. _____________________________________________________________

_____________________________________________________________


LO4: Demonstrate how the width of a confidence interval is influenced by the
confidence level, the sample size, and the standard error of the estimate
Q27. Identify how the following factors influence the width of any confidence intervals:

Factor Impact on Width of Confidence Interval

Increase confidence level (68% to 95%): Increase confidence level ---> greater width of confidence interval

Increase sample size: ___________________________________________________

Increase standard error (se): ___________________________________________________

Q28. List two reasons why criminologists prefer to use larger random samples than
smaller random samples:

Why are large random samples preferred over small random samples?

1. __________________________________________________
2. __________________________________________________

Q29. The range of the confidence interval will be widest in which of the following situations:

Widest Range of the


Situation Confidence Interval
A. 68% Confidence Level
B. 90% Confidence Level A, B, or C? _________
C. 95% Confidence Level
A. Standard Error = 20
B. Standard Error = 15 A, B, or C? _________
C. Standard Error = 10
A. N = 200
B. N = 100 A, B, or C? _________
C. N = 20


Appendix C: Notetaking Guides (Chapter 9)

Analyzing Criminological Data


NOTE-TAKING GUIDE (Fill in form while reading chapter)

Chapter 9: Testing Claims and Other Hypotheses

Concepts Learning Objectives (LO)

- Hypothesis testing LO1: Recognize and establish various types of null
- Types of hypothesis tests and alternative hypotheses
- Null hypotheses (Ho)
- Alternative hypotheses (Ha) LO2: Explain how decision rules in hypothesis
- One-tail and two-tail tests testing are established and what factors
- Decision rules influence them
- Type I and Type II errors
- Zone of rejection LO3: Apply and interpret hypothesis test results
- Significance level (alpha) about population means and proportions in
- Critical value(s) of test statistics large samples
- Statistical vs. Substantive
Significance LO4: Apply and interpret hypothesis test results
about population means and proportions in
small samples


LO1: Recognize and establish various types of null and alternative
hypotheses

Q1. Identify the basic steps in the process of hypothesis testing (in general terms):
General steps in hypothesis testing

1. Someone makes a claim about a population value.
2. ____________________________________________________________
3. ____________________________________________________________
4. ____________________________________________________________
5. ____________________________________________________________
6. ____________________________________________________________


Q2. Define "hypothesis testing":



Q3. Define "null hypothesis (Ho)" and describe why it is called "null":

Concept Definition and Why is it called "null" hypothesis:

Definition: _____________________________________________
Null _____________________________________________
Hypothesis Why is it called "null"? ___________________________________
(Ho)
_____________________________________________

Q4. Identify 4 distinct types of hypothesis tests and their symbolic representation:


Example of Symbolic
Type of Hypothesis Test Representation of Ho

1. One-Sample Test of a Population Mean Ho: μ = 6.5 years

2. ___________________________________ _______________________

3. ___________________________________ _______________________

4. ___________________________________ _______________________

Q5. Converting verbal claims/statements about population parameters into symbolic


representations of null hypotheses (Ho).

Claims/Statement about Population Parameters Symbolic Form of Ho


The average time between arrest and conviction is 90 days Ho: μDays = 90

The average convicted burglar is 23 years old ________________

70% of U.S. citizens favor the death penalty ________________

The proportion of parolees who return to prison is .40 ________________

There are no gender differences in the average police salary ________________

Drug arrest rates are similar in the U.S. and Canada ________________

The percent of trials that lead to a guilty verdict is the same


for both public and private defense attorneys ________________

No gender differences exist in the risks of being mugged ________________


Q6. Define the "alternative hypothesis (Ha)" and the factors influencing its direction:

Concept Definition and Factors Influencing its Directional Nature:

Definition: _____________________________________________
Alternative _____________________________________________
Hypothesis Factors influencing the directional nature of Ha:
(Ha)
1. Previous research
2. _______________
3. _______________

Q7. Be able to identify the 3 different types of alternative hypothesis (Ha) from the
following types of null hypotheses (Ho)

Types of Null Hypothesis (Ho) Types of Alternative Hypotheses (Ha)


1. Ha: μ ≠ 100
Ho: μ = 100 2. Ha: μ > 100
3. Ha: μ < 100
1. _______________
Ho: Ppop = 75% 2. _______________
3. _______________
1. _______________
Ho: μ1 - μ2 =0 2. _______________
3. _______________
1. _______________
Ho: P1 - P2 =0 2. _______________
3. _______________


Q8. Define "One- and Two-Tailed Tests" and describe when each is used


Concepts Definition and When are they used:

Definition: _______________________________

One- and Two- ____________________________________
Tailed Tests When to use 1-Tailed Test? _________________
____________________________________
When to use 2-Tailed Test? _________________
____________________________________

Q9. Use the knowledge of the nature of the alternative hypothesis (Ha) to identify
whether a one-tailed or two-tailed test is used and visually represent it

Draw appropriate sampling


Alternative distribution with proper
Hypothesis (Ha) 1- or 2- Tailed Test? tail(s) shaded


Ha: μ ≠ 100 2-tailed test



Ha: μ < 100 ____________



Ha: μ > 100 ____________


LO2: Explain how decision rules in hypothesis testing are established and
what factors influence them


Q10. Define "decision rules" and the factors involved in these rules for hypothesis
testing:

Concept Definition and Type of Factors in Decision Rules:

Definition: _______________________________
____________________________________
Decision Rules Type of factors involved in these decision rules?
1. Sampling Distributions

2. _________________
3. _________________
4. _________________

Q11. What are "sampling distributions" and how are they used in hypothesis testing:


Concepts Definition and Use in Hypothesis Testing

Definition: _______________________________

____________________________________
Sampling How are sampling distributions used in
Distributions hypothesis testing? _______________________

____________________________________
____________________________________

Q12. Define "zone(s) of rejection" and how are they used in hypothesis testing:


Concepts Definition and Use in Hypothesis Testing:
Definition: _______________________________
____________________________________

Zones(s) of How are zone(s) of rejection used in hypothesis
Rejection testing? ________________________________
____________________________________

____________________________________


Q13. The number of zones of rejection for testing the null hypothesis and their location
in a sampling distribution depend entirely on ____________________________

Q14. Identify the alternative hypothesis (Ha) implied by the shaded zone(s) of rejection
in the following sampling distributions.


Alternative Hypothesis (Ha)
Zone(s) of rejection (shaded areas) Implied by shaded areas in
for rejecting the null hypothesis (Ho) graph of sampling distribution



Ha: μ1 - μ2 ≠ 0



Ho: μ1 - μ2 = 0


Ha: ________________



Ho: μ1 - μ2 = 0



Ha: ________________




Ho: μ1 - μ2 = 0

Q15. Define the "significance level (α)" in hypothesis testing and identify the most
common values of alpha(α) used in criminological research:

Concepts Definition and Alpha Values in Hypothesis Testing:

Definition: ____________________________________
Significance
_________________________________________
Level (α)
Most common values of alpha in hypothesis testing?
α = ______ α = ______ α = ______


Q16. Define "Type I and Type II Errors" and how researchers reduce the risks of
committing a type I error.

Concepts Definitions and Examples

Definition: _____________________________________
Type I Error _____________________________________
How to reduce the risks of this error in hypothesis testing:
______________________________________

Type II Error Definition: _____________________________________


_____________________________________

Q17. Define "critical values" for hypothesis testing, describe why they are important,
and identify 3 factors that determine their numerical value(s).

Concepts Definitions, Why Important, and Factors Influencing Them

Definition: _____________________________________
Critical _____________________________________
Value(s) Why are they important in hypothesis testing? __________
_____________________________________
_____________________________________
Factors that determine the numerical score of critical value(s):
1. the sampling distribution used.
2. __________________________________________
3. __________________________________________

Q18. When the alternative hypothesis is non-directional (e.g., Ha: P ≠ .75), there will be
_______ critical value(s) and they will be found in ____ tail(s) of the sampling
distribution.

Q19. Identify the location of sample values on a sampling distribution under the
following conditions:

A. Common outcomes if the null hypothesis is true will be found around what
area of the sampling distribution, the middle or tail(s)? ______________

B. Rare outcomes if the null hypothesis is true will be found around what area of
the sampling distribution, the middle or tail(s)? ______________


Q20. Identify the critical value(s) that would lead to rejecting the null hypothesis in
favor of the alternative hypothesis under the following conditions:

Conditions/Properties of the Research Question Critical Value(s):

Ho: μ = 20; Ha: μ ≠ 20; large random samples, α = .05 ---> z = _____________
Ho: P = .70; Ha: P ≠ .70; large random samples, α = .05 ---> z = _____________
Ho: μ1 - μ2 = 0; Ha: μ1 - μ2 ≠ 0; large random samples, α = .05 ---> z = _____________
Ho: P1 - P2 = 0; Ha: P1 - P2 ≠ 0; large random samples, α = .05 ---> z = _____________
Ho: μ = 200; Ha: μ > 200; large random samples, α = .01 ---> z = _____________
Ho: μ = 200; Ha: μ < 200; large random samples, α = .01 ---> z = _____________
Ho: P = 45%; Ha: P > 45%; large random samples, α = .01 ---> z = _____________
Ho: μ1 - μ2 = 0; Ha: μ1 - μ2 < 0; large random samples, α = .01 ---> z = _____________

Q21. Identify how the specific elements of decision rules for testing hypotheses can be
derived/inferred from the following graph of the standard normal (z) distribution:

Information for hypothesis testing derivable from this normal (z) distribution


Information/Characteristics of Test How do we know this from the graph?

1. Null Hypothesis is a test of population mean Ho: μ = 0 at the middle of the graph refers
to the population mean (μ).
2. Large Sample Test It says that we are using the normal (z)
distribution and we use this sampling
distribution when we have large sample(s)

3. The alternative hypothesis is non-directional ________________________________

4. The significance level is set at .05 (α =.05) __________________________________

5. Sample values greater than z of ± 1.96 will __________________________________


lead to rejecting the null hypothesis.


LO3: Apply and interpret hypothesis test results about population means
and proportions in large samples
Q22. List the general steps in hypothesis testing:

General steps in hypothesis testing



1. Develop a null hypothesis (Ho) and an alternative hypothesis (Ha).
2. ____________________________________________________________
3. ____________________________________________________________
4. ____________________________________________________________
5. ____________________________________________________________
6. ____________________________________________________________

Q23. Use the following information to conduct a One-Sample Test of a Population
Mean based on a large random sample:

Background information and sample data: Assume that a national media outlet claims
that the average highway patrol officer gives out 30 tickets per day. Past research and
logical reasoning suggest that this average number is too high. To evaluate the
media’s claim, you draw a large random sample of 100 highway patrol officers and ask
them about their ticketing practices. The following descriptive statistics are computed
from this sample data: x̄ = 20 tickets, s = 25, n = 100. You decide to test the null
hypothesis with α = .05.


1. List in symbol form the null hypothesis (Ho) and the alternative (Ha) based on the
information provided.

Ho: __________________ Ha: __________________


2. Develop critical value(s) to define the zone of rejection based on the directionality of
the Ho and the alpha (α)= .05.
Critical Value(s): ____________

3. Compute sample statistics (see descriptive statistics above) and estimate the level
of sampling error by computing the standard error (se): se = s/√n = ____________

4. Convert the sample mean into standardized (z) scores by using the following
conversion formula: z = (x̄ - μ)/se = ____________

5. Compare the obtained z-score (______) with the expected z-score (______).

Do the following: (1) Shade in the zone(s) of rejection for this problem on the
sampling distribution shown below and (2) label the locations of the obtained and
expected z-scores on this graph.



6. Conclusion: Do you reject or not reject the Null Hypothesis (Ho)? ____________
Why? _____________________________________________________________.

What does the sample data tell us about the accuracy of the Alternative Hypothesis
(Ha)? ____________________________________________________________
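A sketch of the Q23 calculations (not part of the guide; it assumes a one-tailed test with Ha: μ < 30, since the claim is thought to be too high, and uses the critical z of −1.64 that this guide pairs with α = .05 in one tail):

```python
import math

# One-sample z test of a population mean (large sample).
# Assumption: one-tailed test, Ha: mu < 30, alpha = .05, critical z = -1.64.
xbar, mu0, s, n = 20, 30, 25, 100
se = s / math.sqrt(n)            # 25 / 10 = 2.5
z_obt = (xbar - mu0) / se        # (20 - 30) / 2.5 = -4.0
z_crit = -1.64
reject_h0 = z_obt < z_crit       # -4.0 falls deep in the lower zone of rejection
print(se, z_obt, reject_h0)      # 2.5 -4.0 True
```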

Q24. Use the following information to conduct a One-Sample Test of a Population
Proportion based on a large random sample:

Background information and sample data: A national study on parolees suggests that
25% of convicted sex offenders repeat their crimes after imprisonment. However, other
research and theories of sex offender motivation do not provide clear evidence to
predict whether this claim is too low or too high. To evaluate the claim from this
national study, you draw a large random sample of 81 paroled sex offenders and assess
whether or not they have been rearrested (i.e., recidivate). The following descriptive
statistics are computed from this sample data: pSample = 20% recidivated, sep = 4.4, n=
81. You decide to test the null hypothesis with α = .05.


1. List in symbol form the null hypothesis (Ho) and the alternative (Ha) based on the
information provided.
Ho: __________________ Ha: __________________


2. Develop critical value(s) to define the zone of rejection based on the directionality of
the Ho and the alpha (α) = .05.
Critical Value(s): ____________

3. Compute sample statistics (see descriptive statistics above)

4. Convert the sample proportion into standardized (z) scores by using the following
conversion formula: z = (pSample - PPopulation)/sep = ____________

5. Compare the obtained z-score (______) with the expected z-score (______).

Do the following: (1) Shade in the zone(s) of rejection for this problem on the
sampling distribution shown below and (2) label the locations of the obtained and
expected z-scores on this graph.



6. Conclusion: Do you reject or not reject the Null Hypothesis (Ho)? ____________
Why? _____________________________________________________________

What does the sample data tell us about the accuracy of the Alternative Hypothesis
(Ha)? _____________________________________________________________
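A sketch of the Q24 calculations (not part of the guide; a two-tailed test is assumed since no direction is predicted, using the α = .05 stated in the setup with critical values ±1.96):

```python
# One-sample z test of a population proportion (large sample), two-tailed.
# All figures are in percentage points; critical values are +/- 1.96.
p_sample, p_pop, se_p = 20, 25, 4.4
z_obt = (p_sample - p_pop) / se_p
print(round(z_obt, 2))            # -1.14
reject_h0 = abs(z_obt) > 1.96     # -1.14 lies in the "common" middle region
print(reject_h0)                  # False
```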


Q25. Use the following information to conduct a Two-Sample Test of Differences in
Population Means based on large random samples:

Background information and sample data: Are there differences in court processing
time (i.e., the number of days between arrest and conviction) between capital
murder cases and non-capital murder cases? Based on past research and theories about legal
defense strategies, you expect capital murder cases to take longer to process than non-
capital cases. To evaluate the null hypothesis, you draw two large random samples from
court records and compute the average court processing time for each type of murder
case. The following descriptive statistics are computed from these sample data:
x̄capital = 380 days; s²capital = 200; ncapital = 60; x̄non-capital = 300 days; s²non-capital = 190;
nnon-capital = 80; secap-noncap = 5.7. You decide to test the null hypothesis with α = .05.


1. List in symbol form the null hypothesis (Ho) and the alternative (Ha) based on the
information provided.

Ho: ________________________ Ha: ___________________________


2. Develop critical value(s) to define the zone of rejection based on the directionality of
the Ho and the alpha (α)= .05.
Critical Value(s): ____________

3. Compute sample statistics and estimate of the level of sampling error by computing
the standard error [se]). See descriptive statistics above

4. Convert the sample means into standardized (z) scores by using the following
conversion formula: z = (x̄cap - x̄non-cap)/secap-noncap = ____________

5. Compare the obtained z-score (______) with the expected z-score (______).

Do the following: (1) Shade in the zone(s) of rejection for this problem on the
sampling distribution shown below and (2) label the locations of zobt and zexp.



6. Conclusion: Do you reject or not reject the Null Hypothesis (Ho)? ____________
Why? _____________________________________________________________.

What does the sample data tell us about the accuracy of the Alternative Hypothesis
(Ha)? _____________________________________________________________
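A sketch of the Q25 calculations (not part of the guide; one-tailed, since capital cases are expected to take longer, with critical z = +1.64 for α = .05 in the upper tail):

```python
# Two-sample z test of a difference in population means (large samples).
# Assumption: one-tailed, Ha: mu_cap - mu_noncap > 0, critical z = +1.64.
xbar_cap, xbar_noncap, se_diff = 380, 300, 5.7
z_obt = (xbar_cap - xbar_noncap) / se_diff
print(round(z_obt, 2))     # 14.04
print(z_obt > 1.64)        # True -> reject Ho
```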


Q26. Use the following information to conduct a Two-Sample Test of Differences in
Population Proportions based on large random samples:

Background information and sample data: Are there age differences between juveniles
and adults in whether or not they have engaged in shoplifting? Based on past research
and theories about crime over the life course, you expect adults to be far less likely than
juveniles to steal things from stores. To evaluate the null hypothesis, you draw two large
random samples and ask respondents whether or not they have engaged in this crime.
The following descriptive statistics are computed from these sample data: pAdult= .25;
nAdults = 80; pJuv= .30; njuv = 60; seAdu-Juv = .08. You decide to test the null hypothesis
with α = .01.


1. List in symbol form the null hypothesis (Ho) and the alternative (Ha) based on the
information provided.

Ho: ________________________ Ha: ___________________________


2. Develop critical value(s) to define the zone of rejection based on the directionality of
the Ho and the alpha (α)= .01.
Critical Value(s): ____________

3. Compute sample statistics and estimate of the level of sampling error by computing
the standard error [se]). See descriptive statistics above

4. Convert the sample proportions into standardized (z) scores by using the following
conversion formula: z = (pAdult - pJuv)/seAdu-Juv = ____________

5. Compare the obtained z-score (______) with the expected z-score (______).

Do the following: (1) Shade in the zone(s) of rejection for this problem on the
sampling distribution shown below and (2) label the locations of zObt and zExp.



6. Conclusion: Do you reject or not reject the Null Hypothesis (Ho)? ____________
Why? _____________________________________________________________.

What does the sample data tell us about the accuracy of the Alternative Hypothesis
(Ha)? _____________________________________________________________
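A sketch of the Q26 calculations (not part of the guide; one-tailed in the lower tail, since adults are expected to be less likely to shoplift, with critical z = −2.33 for α = .01):

```python
# Two-sample z test of a difference in population proportions (large samples).
# Assumption: one-tailed, Ha: P_adult - P_juv < 0, critical z = -2.33.
p_adult, p_juv, se_diff = 0.25, 0.30, 0.08
z_obt = (p_adult - p_juv) / se_diff
print(round(z_obt, 3))     # -0.625
print(z_obt < -2.33)       # False -> do not reject Ho
```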


LO4: Apply and interpret hypothesis test results about population means
and proportions in small samples

Q27. Identify major difference between hypothesis testing for large and small samples:

Answer: Hypothesis tests for large random samples use the standard normal (z)
distribution as the sampling distribution for defining rare and common
outcomes.

For small random samples, standard scores that identify rare and
common outcomes if the null hypothesis is true are established on the
basis of this sampling distribution: _________________

Q28. Know how to read a t-distribution table and identify the critical t-values that
define the zone(s) of rejection associated with the null hypothesis at different
significance levels (α). An abbreviated t-distribution table is shown below:

Critical t-values for selected values of α and


degrees of freedom (df)

One-Tailed Test:   α = .05    α = .025   α = .005
Two-Tailed Test:   α = .10    α = .05    α = .01

Degrees of
Freedom (df)

10 1.812 2.228 3.169
15 1.753 2.131 2.947
20 1.725 2.086 2.845

25 1.708 2.060 2.787

30 1.697 2.042 2.750


40 1.684 2.021 2.704

50 1.676 2.009 2.678

Q29. Identify the critical values of the t-distribution under the following conditions:

A. Ho: μ = 20, Ha: μ ≠ 20; n = 21, df = 20 (n-1), α = .05 ---> critical value(s) of t = _________
B. Ho: P = 63%, Ha: P > 63%; n = 21, df = 20 (n-1), α = .05 ---> critical value(s) of t = ______
C. Ho: μ1 - μ2 = 0, Ha: μ1 - μ2 < 0; n1 = 26, n2 = 16, df = 40 [(n1-1) + (n2-1)], α = .05
---> critical value(s) of t = ______
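The Q29 lookups can be checked by transcribing the relevant rows of the abbreviated table above (illustrative only; the dictionary simply restates those table entries):

```python
# (df, tails, alpha) -> critical t, transcribed from the abbreviated table in Q28.
t_table = {
    (20, "two", 0.05): 2.086,
    (20, "one", 0.05): 1.725,
    (40, "one", 0.05): 1.684,
}

print(t_table[(20, "two", 0.05)])  # A: Ha is non-directional -> t = +/- 2.086
print(t_table[(20, "one", 0.05)])  # B: Ha: P > 63% -> t = +1.725
print(t_table[(40, "one", 0.05)])  # C: Ha: mu1 - mu2 < 0 -> t = -1.684
```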


Q30. Use the following information to conduct a One-Sample Test of the Population
Mean based on a small random sample (n < 50):

Background information and sample data: A media watchdog group contends that the
average American teenager watches 21 hours of violence on television each week.
Based on past research, you expect the weekly hours of viewing violent television
programs to be lower than this average. To evaluate the null hypothesis, you interview
a small random sample of 26 teenagers and ask them about their television viewing
habits. The following descriptive statistics are computed from these sample data:
x̄Viol.TV = 14 hours per week; s = 10; n = 26. You select a significance level of α = .05.


1. List in symbol form the null hypothesis (Ho) and the alternative (Ha) based on the
information provided.

Ho: ________________________ Ha: ___________________________


2. Develop critical value(s) to define the zone of rejection based on the Ha, α = .05, and
the degrees of freedom (df= n - 1):
Critical Value(s): t = ____________

3. Compute sample statistics (see above) and estimate the level of sampling error by
computing the standard error (se): se = s/√n = ____________

4. Convert the sample mean into standardized (t) scores by using the appropriate
conversion formula: t = (x̄ - μ)/se = _______________ = _______

5. Compare the obtained t-score (________) with the expected critical value(s) of the
t-score (______) for α = .05 and degrees of freedom (df) = ________

Do the following: (1) Shade in the zone(s) of rejection for this problem on the
sampling distribution shown below and (2) label the locations of tobt and texp.



6. Conclusion: Do you reject or not reject the Null Hypothesis (Ho)? ____________
Why? _____________________________________________________________.

What does the sample data tell us about the accuracy of the Alternative Hypothesis
(Ha)? _____________________________________________________________
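A sketch of the Q30 calculations (not part of the guide; one-tailed with Ha: μ < 21, and the critical t of −1.708 is read from the table above at df = 25, α = .05):

```python
import math

# One-sample t test of a population mean (small sample, n = 26).
# Assumption: one-tailed, Ha: mu < 21, alpha = .05, df = 25, critical t = -1.708.
xbar, mu0, s, n = 14, 21, 10, 26
se = s / math.sqrt(n)                 # standard error: s / sqrt(n)
t_obt = (xbar - mu0) / se
print(round(se, 2), round(t_obt, 2))  # 1.96 -3.57
print(t_obt < -1.708)                 # True -> reject Ho
```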


Q31. Use the following information to conduct a One-Sample Test of the Population
Proportion based on a small random sample (n < 50):

Background information and sample data: The FBI's Uniform Crime Reports indicate
that 60% of homicides known to the police are cleared by an arrest. Based on past
research and concerns about the reliability of police data, you question this claim but
don't know if the actual clearance rate is higher or lower than this hypothesized value.
To evaluate the null hypothesis, you take a small random sample of 26 homicides and
calculate the percentage of them that resulted in an arrest. The following descriptive
statistics are computed from these sample data: pSample = 58% resulted in an arrest;
s = 5; n = 26. You select a significance level of α = .01.


1. List in symbol form the null hypothesis (Ho) and the alternative (Ha) based on the
information provided.

Ho: ________________________ Ha: ___________________________


2. Develop critical value(s) to define the zone of rejection based on the Ha, α = .01, and
the degrees of freedom (df= n - 1):
Critical Value(s): t = ____________

3. Compute sample statistics (see above) and estimate the level of sampling error by
computing the standard error (sep): sep = s/√n = ____________

4. Convert the sample proportion into a standardized (t) score by using the appropriate
conversion formula: t = (pSample - PPopulation)/ sep = _______________ = _______

5. Compare the obtained t-score (________) with the expected critical value(s) of the
t-score (______) for α = .01 and degrees of freedom (df) = ________

Do the following: (1) Shade in the zone(s) of rejection for this problem on the
sampling distribution shown below and (2) label the locations of tobt and texp.



6. Conclusion: Do you reject or not reject the Null Hypothesis (Ho)? ____________
Why? _____________________________________________________________.

What does the sample data tell us about the accuracy of the Alternative Hypothesis
(Ha)? _____________________________________________________________
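The arithmetic in this proportion test can be checked the same way. A sketch only; the two-tailed critical value (±2.787 for α = .01, df = 25) is copied from a t-table.

```python
import math

# Sample data from Q31: p = 58%, hypothesized P = 60%, s = 5, n = 26
p_sample, p_pop, s, n = 58.0, 60.0, 5.0, 26

se_p = s / math.sqrt(n)              # standard error of the proportion
t_obt = (p_sample - p_pop) / se_p    # obtained t-score

# Two-tailed critical value from a t-table (alpha = .01, df = 25)
t_crit = 2.787

print(round(se_p, 3))
print(round(t_obt, 2))
print(abs(t_obt) < t_crit)           # True -> do not reject Ho
```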


Q32. Use the following information to conduct a Two-Sample Test of Differences in
Population Means based on large random samples:

Background information and sample data: Are there gender differences in the average
monetary bail assigned for pretrial release of drug offenders? Based on past research,
you expect males to have higher bail amounts than females. To evaluate the null
hypothesis, you draw two large random samples from court records and compute the
average bail amount for each gender. The following descriptive statistics are computed
from these sample data: x̄ male = $5,000; s²male = 7,000; nmale = 100; x̄ female = $4,980;
s²female = 10,000; nfemale = 80; sem-f = 14. You select a significance level of α = .01 to test
the null hypothesis.


1. List in symbol form the null hypothesis (Ho) and the alternative (Ha) based on the
information provided.

Ho: ________________________ Ha: ___________________________


2. Develop critical value(s) to define the zone of rejection based on the Ha, α = .01, and
the degrees of freedom [df= (nm - 1)+ (nf - 1)]:
Critical Value(s): ____________

3. Compute sample statistics and estimate of the level of sampling error by computing
the standard error [sem-f]. See descriptive statistics above.

4. Convert the difference in sample means into a standardized (t) score by using the
following conversion formula: t = (x̄ male - x̄ female) / sem-f = ____________

5. Compare the obtained t-score (________) with the expected critical value(s) of the
t-score (______) for α = .01 and degrees of freedom (df) = ________

Do the following: (1) Shade in the zone(s) of rejection for this problem on the
sampling distribution shown below and (2) label the locations of tobt and texp.



6. Conclusion: Do you reject or not reject the Null Hypothesis (Ho)? ____________
Why? _____________________________________________________________.

What does the sample data tell us about the accuracy of the Alternative Hypothesis
(Ha)? _____________________________________________________________
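A quick check of the two-sample arithmetic, given the standard error supplied in the problem. The one-tailed critical value here (about 2.35 for α = .01 and df = 178) is an approximation read from a t-table, not computed.

```python
# Sample data from Q32: mean bail by gender and the given standard error
x_male, x_female, se_mf = 5000.0, 4980.0, 14.0

t_obt = (x_male - x_female) / se_mf   # obtained t-score

# Approximate one-tailed critical value from a t-table
# (alpha = .01, df = (100-1)+(80-1) = 178); ~2.35 is an assumption
t_crit = 2.35

print(round(t_obt, 2))
print(t_obt > t_crit)                 # False -> do not reject Ho
```

A $20 difference in average bail is tiny relative to the sampling error, so the test fails to reject the null hypothesis.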


Q33. Use the following information to conduct a Two-Sample Test of Differences in
Population Proportions based on small random samples:

Background information and sample data: Are there differences by type of defense
attorney (public vs. private) in the prevalence of guilty pleas in murder cases? Based on
past research and theories about courtroom practices, you expect private attorneys to
be less likely to have their murder defendants plead guilty than public defenders. To
evaluate the null hypothesis, you draw two small random samples and compute the
proportion of murder cases that result in a guilty plea for each type of attorney. The
following descriptive statistics are computed from these sample data: pPrivate = .69 (69%
of their clients plead guilty); nPrivate = 13; pPublic= .84 (84% of their clients plead guilty);
nPublic = 37; sePriv-Pub = .02. You decide to test the null hypothesis with α = .05.

1. List in symbol form the null (Ho) and the alternative (Ha) hypotheses based on the
information provided.

Ho: ________________________ Ha: ___________________________


2. Develop critical value(s) to define the zone of rejection based on the Ha, α = .05, and
the degrees of freedom [df= (npri - 1)+ (npub - 1)]:
Critical Value(s): ____________

3. Compute sample statistics and estimate of the level of sampling error by computing
the standard error [sepri-pub]. See descriptive statistics above

4. Convert the difference in sample proportions into a standardized (t) score by using
the following conversion formula: t = (pPriv - pPub) / sepri-pub = ____________

5. Compare the obtained t-score (________) with the expected critical value(s) of the
t-score (______) for α = .05 and degrees of freedom (df) = ________

Do the following: (1) Shade in the zone(s) of rejection for this problem on the
sampling distribution shown below and (2) label the locations of tObt and tExp.



6. Conclusion: Do you reject or not reject the Null Hypothesis (Ho)? ____________
Why? _____________________________________________________________.
What does the sample data tell us about the accuracy of the Alternative Hypothesis
(Ha)? _____________________________________________________________
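The same verification sketch works for the proportions test. The one-tailed critical value (about −1.677 for α = .05 and df = 48) is an approximation read from a t-table.

```python
# Sample data from Q33: guilty-plea proportions and the given standard error
p_priv, p_pub, se_pp = 0.69, 0.84, 0.02

t_obt = (p_priv - p_pub) / se_pp   # obtained t-score

# Approximate one-tailed (lower-tail) critical value from a t-table
# (alpha = .05, df = (13-1)+(37-1) = 48); ~-1.677 is an assumption
t_crit = -1.677

print(round(t_obt, 1))
print(t_obt < t_crit)              # True -> reject Ho
```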


Appendix C: Notetaking Guides (Chapter 10)

Analyzing Criminological Data


NOTE-TAKING GUIDE (Fill in form while reading the chapter)

Chapter 10: Measuring the Association Between Two Qualitative
Variables

Concepts:
- Bivariate association
- Contingency table
- Joint frequencies
- Marginal frequencies
- Table of total percentages
- Table of row percentages
- Table of column percentages
- Independent variable
- Dependent variable
- Chi-square test
- Statistical independence
- Chi-square distribution

Learning Objectives (LO):
LO1: Construct and interpret a bivariate contingency table
LO2: Construct and interpret percentage contingency tables to assess bivariate relationships
LO3: Visually assess the strength of associations in bivariate contingency tables
LO4: Calculate and interpret observed and expected chi-square values for testing the
hypothesis of statistical independence


LO1: Construct and interpret a bivariate contingency table


Q1. Provide examples of attributes for the following qualitative variables:
Qualitative Variables: Attributes:

Type of Crime? Violent, Property, Drug, Other

Weapon used? _______________________________

Type of Attorney? _______________________________

Victim-Offender Relationship? _______________________________


Q2. Define "bivariate association ":


Q3. What is a "contingency table" and how is this table used in statistics?

Concept Definition and Use:

Definition: _____________________________________________
Contingency _____________________________________________
Table How is this table used? ___________________________________
_____________________________________________

Q4. Define the following structural features of a contingency table: joint (cell)
frequencies, row marginal frequencies, and column marginal frequencies.

Concept Definition

Joint (cell) Definition: _________________________________________


Frequencies _________________________________________

Row Marginal Definition: _________________________________________


Frequencies
_________________________________________
Column Marginal Definition: _________________________________________
Frequencies
_________________________________________

Q5. Identify the joint (cell) frequencies, row marginal frequencies, and column marginal
frequencies in the following contingency table of victimization and gender:


Contingency Table Insert Cell/Marginal Frequencies
Row 1 Marginal Freq (no) = __________
Row 2 Marginal Freq (yes) = _________
Column 1 Marginal Freq (male) = _______
Column 2 Marginal Freq (female) = _______

Cell row1,col1 (no,male) = _______


Cell row1,col2 (no,female) = _______
Cell row2,col1 (yes,male) = ________
Cell row2,col2(yes,female) = _________


Q6. Convert the following raw data into a contingency table:


Raw Data                          Use Raw Data to Fill in Contingency Table

Case #   Offender   Robbery      Contingency Table of the relationship between
 1       Adult      Street       age of offender and robbery type
 2       Juvenile   Street
 3       Adult      Bank                        Adult    Juvenile    Row Total
 4       Adult      Street
 5       Juvenile   Bank         Bank           ____     ____        ________
 6       Adult      Bank
 7       Adult      Bank         Street         ____     ____        ________
 8       Juvenile   Street
 9       Adult      Bank         Column Total   ____     ____        N = ______
10       Adult      Street
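One way to tally the Q6 raw data into joint and marginal frequencies is with Python's `Counter` (a sketch for checking your table, not part of the guide's workflow):

```python
from collections import Counter

# Raw data from Q6: (offender, robbery type) for the 10 cases
cases = [("Adult", "Street"), ("Juvenile", "Street"), ("Adult", "Bank"),
         ("Adult", "Street"), ("Juvenile", "Bank"), ("Adult", "Bank"),
         ("Adult", "Bank"), ("Juvenile", "Street"), ("Adult", "Bank"),
         ("Adult", "Street")]

cells = Counter(cases)                              # joint (cell) frequencies
col_marginals = Counter(off for off, _ in cases)    # column marginals (offender)
row_marginals = Counter(rob for _, rob in cases)    # row marginals (robbery type)

print(cells[("Adult", "Bank")])      # 4
print(cells[("Juvenile", "Bank")])   # 1
print(row_marginals["Street"])       # 5
print(col_marginals["Adult"])        # 7
print(len(cases))                    # N = 10
```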

Q7. The __________________ frequencies and the ___________________ frequencies
represent the univariate frequency distributions of each variable in a contingency table.


LO2: Construct and interpret percentage contingency tables to assess bivariate
relationships

Q8. Define the "total percentage table" and identify the steps in computing this
contingency table:

Concept Definition and How to Create this Table:

Definition: _____________________________________________
Total _____________________________________________
Percentage Steps in creating this table?
Table
1._______________________________________________
2 _______________________________________________

Q9. Use the table of observed cell frequencies to create a table of total percentages:

Table of Observed Cell Frequencies

Conversion formula: Total % cell = [cell frequency/ total N] x 100


Table of Total Percentages

Interpretation of relative prevalence of joint attribute combinations:

1. Most common joint attributes: ____% of cases involve no DNA and no conviction.

2. Least common joint attributes: _________________________________________


Q10. Define the "row percentage table" and list the computing formula for converting
observed cell frequencies into row percentages:

Concept Definition and Computing Formula

Definition: _____________________________________________
Row _____________________________________________
Percentage Computing Formula:
Table
Row Cell % = [ cell frequency / marginal ____ frequency] x 100

Q11. Use the table of observed cell frequencies to create and interpret a row
percentage table:

Table of Observed Cell Frequencies

Conversion formula: Row Cell % = [cell frequency/ marginal row frequency] x 100
Table of Row Percentages


Interpretation of marginal and joint row percentages:

1. Less than 1% (.8%) of cases involve DNA evidence.

2. Convictions are slightly more likely (2.5%) than non-convictions (____) when
DNA evidence is available.


Q12. Define the "column percentage table" and list the computing formula for
converting observed cell frequencies into column percentages:

Concept Definition and Computing Formula

Definition: _____________________________________________
Column _____________________________________________
Percentage Computing Formula:
Table
Column Cell % = [ cell frequency / marginal ______ frequency] x 100

Q13. Use the table of observed cell frequencies to create and interpret a column
percentage table:

Table of Observed Cell Frequencies


Conversion formula: Column Cell % = [cell frequency/ marginal column freq] x 100
Table of Column Percentages


Interpretation of marginal and joint column percentages:

1. ____% of cases resulted in a conviction.

2. Cases with DNA evidence are far more likely to result in a conviction
(____%) than cases without DNA evidence (____%).


Q14. Define "independent variable (IV)” and “dependent variable (DV)”



Concept Definition:

Independent Definition: _________________________________

Variable (IV) _________________________________

Dependent Definition: _________________________________
Variable (DV) _________________________________

Q15. Identify the independent and dependent variables in the following causal research
questions involving qualitative variables:


Independent (IV) and

Causal Research Question Dependent Variables (DV)

Does having a security alarm (yes, no) reduce IV: _____________________
one’s risks of residential burglary (yes, no)? DV: ____________________

Are trial verdicts (guilty, not guilty) influenced IV: _____________________
by the type of legal counsel (private, public)?
DV: ____________________

Q16. For causal analysis of contingency tables, how many tables are interpreted? ____

Q17. List the “golden rule” for calculating and interpreting percentage tables in causal
analysis of contingency tables:


“Golden Rule” for calculating/interpreting
percentage tables in causal analysis:
_________________________________________
_________________________________________
_________________________________________

Q18. When the independent variable is the “column variable” and the dependent
variable is the “row variable”, the proper percentage table to calculate/interpret
is: ____________________


Q19. When the independent variable is the “row variable” and the dependent variable
is the “column variable”, the proper table to calculate/interpret is: ________

Q20. Identify the proper percentage table to interpret in the following examples:


Table of Observed Cell Frequencies

Carry Firearms? (DV) Gender (IV)
Male Female Total Row
No 90 40 130
Yes 10 10 20
Total Column 100 50 150

Proper table to interpret, row or column percentages? _____________



Table of Observed Cell Frequencies

Gender (IV) Carry Firearms? (DV)

No Yes Total Row

Male 90 10 100
Female 40 10 50
Total Column 130 20 150

Proper table to interpret, row or column percentages? _____________
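Applying the golden rule to the first Q20 table (gender is the column IV, so column percentages are the proper ones to compute) can be sketched as:

```python
# Observed frequencies from Q20 (IV = gender in columns, DV = carrying in rows)
table = {"Male": {"No": 90, "Yes": 10}, "Female": {"No": 40, "Yes": 10}}

# Percentage within categories of the IV: column percentages,
# so each gender's column sums to 100%
col_pct = {}
for gender, counts in table.items():
    total = sum(counts.values())
    col_pct[gender] = {k: 100 * v / total for k, v in counts.items()}

print(col_pct["Male"]["Yes"])    # 10.0 -> 10% of males carry firearms
print(col_pct["Female"]["Yes"])  # 20.0 -> 20% of females carry firearms
```

Comparing percentages *across* the IV categories (10% vs. 20%) is what reveals the bivariate relationship.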

Q21. A good summary of bivariate relations in a contingency table analysis involves the
interpretation of these two components:

Good summaries interpret these 2 components of contingency tables:


1. The marginal distributions for the dependent variable
2. __________________________________________________


Q22. Summarize the results of the following contingency tables:

Table of Column Percentages

Support Death Penalty (DV)? Geographical Location (IV)

Rural Urban Total Row


Oppose 20% 50% 40%
Support 80% 50% 60%
Total Column 100% 100% 100%
Interpretation:
1. _______________________________________
2. _______________________________________

Q22. Summarize the results of the following contingency tables:

Table of Row Percentages

Gender (IV) Support for Death Penalty (DV)?

Oppose Support Total Row


Male 30% 70% 100%
Female 40% 60% 100%
Total Column 35% 65% 100%
Interpretation:
1. _______________________________________
2. ________________________________________


LO3: Visually assess the strength of associations in bivariate contingency tables

Q23. Identify the magnitude of percentage differences between categories that
represent “no/weak”, “moderate”, “strong”, and “perfect” bivariate relationships in
a contingency table.

Strength of Bivariate Association Magnitude of Percentage Differences

No/Weak Relationship 0 to 10% differences in categories

Moderate Relationship ______________________________

Strong Relationship ______________________________

Perfect Relationship ______________________________


Q24. Identify the strength of the bivariate relations in the following contingency tables:

Prior Record Strength of Relationship


Pretrial Release No Yes (weak, moderate, strong, perfect)


No 20% 70%
_______________
Yes 80% 30%

100% 100%

Prior Record Strength of Relationship


Pretrial Release No Yes (weak, moderate, strong, perfect)

No 20% 25%
_______________
Yes 80% 75%

100% 100%

Q25. When is caution required when applying these “rules of thumb” for interpreting
the strength of a bivariate relationship in a contingency table?

_______________________________________________


LO4: Calculate and interpret observed and expected chi-square values for
testing the hypothesis of statistical independence

Q26. Define "Chi-square test” and the null hypothesis of “statistical independence”
underlying it


Concept Definition:

Chi-Square (χ²) Definition: _________________________________

Test _________________________________

Statistical Definition: _________________________________
Independence _________________________________

Q27. If two qualitative variables are statistically independent, what is the nature of the
relationship between these variables? _______________________________

Q28. Define the “Chi-square (χ²) distribution”


Definition: ____________________________________

____________________________________

Q29. Identify the steps in testing the null hypothesis of statistical independence in
contingency tables that are similar to other hypothesis tests:

General steps in hypothesis testing in contingency table analysis



1. Develop a null (Ho) hypothesis of statistical independence (i.e., the two
variables are unrelated).
2. Use the chi-square sampling distribution to define rare and common
outcomes when the null hypothesis is true.
3. ____________________________________________________________
4. ____________________________________________________________
5. ____________________________________________________________


Q30. Compute the degrees of freedom (df) for the following contingency tables using
the formula: df = [# rows – 1] x [# columns – 1]

Dimensions of Contingency Table        Degrees of
(rows and columns)                     Freedom (df)

2 rows, 2 columns = 1 df
3 rows, 2 columns = _____
3 rows, 3 columns = _____
4 rows, 2 columns = _____
4 rows, 3 columns = _____
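The df formula above is simple enough to script as a check on your answers:

```python
def chi_square_df(rows, cols):
    """Degrees of freedom for an r x c contingency table: (r - 1)(c - 1)."""
    return (rows - 1) * (cols - 1)

print(chi_square_df(2, 2))  # 1
print(chi_square_df(3, 2))  # 2
print(chi_square_df(3, 3))  # 4
print(chi_square_df(4, 3))  # 6
```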

Q31. Use the Chi-Square Distribution Table to identify the critical value of the chi-
square test (χ²cv) under the following conditions:

Chi-Square Distribution Table Critical Values of χ² for specific conditions:


2 rows, 2 col., α = .05 → χ²cv = _________

2 rows, 3 col., α = .05 → χ²cv = _________

3 rows, 2 col., α = .01 → χ²cv = _________

3 rows, 3 col., α = .01 → χ²cv = _________

2 rows, 4 col., α = .001 → χ²cv = _________

4 rows, 2 col., α = .001 → χ²cv = _________

Q32. List the general steps in calculating/interpreting Chi-square test results:

General steps in hypothesis testing with a Chi-square test



1. Develop a Contingency Table (Observed Cell Frequencies Table).
2. ____________________________________________________________
3. ____________________________________________________________
4. ____________________________________________________________
5. ____________________________________________________________
6. ____________________________________________________________


Q33. Test the null hypothesis of statistical independence for the following contingency
table:

1. Develop a Contingency Table (Observed Cell Frequencies Table).



                          Offender’s Gender
                          Male      Female    Total
Case          No          200       150       350
Conviction?   Yes         200       50        250
              Total       400       200       600

2. Calculate the Expected Frequencies for Each Cell (_____)



Offender’s Gender
Male Female Total
Case No 233.3 _____ 350

Conviction? Yes _____ _____ 250
Total 400 200 600

Expected cell frequencies = [row marginal x column marginal]/ total N]


Example: No Conviction, Male = [350 x 400]/600 = 233.3

3. Compute the Observed Chi-Square Value (χ²obs) using the following formula:

χ²obs = ∑ [ (fobserved − fexpected)² / fexpected ]

= [(200-233.3)²/233.3] + [___] + [___] + [___] = ___

4. Identify the Critical Value for the Chi-Square Value (χ²exp):
Expected χ² value (χ²exp) for α = .05 and df = (# rows – 1) x (# columns – 1) = 1: _________

5. Compare the Observed and Expected Chi-Square Values:


If observed χ² value (χ²obs) > expected χ² value (χ²exp), reject null hypothesis.
Decision for this analysis, reject or not reject Ho? ________________


6. Convert Observed Cell Frequencies Table to Column Percentage


Table and interpret the results

Observed Cell Frequencies

Offender’s Gender
Male Female Total

Case          No          200       150       350
Conviction?   Yes         200       50        250
              Total       400       200       600

Column Percentage Table

Offender’s Gender
Male Female Total

Case          No          50%       75%       58.3%
Conviction?   Yes         50%       25%       41.7%
              Total       100%      100%      100%

Interpretation:

1. ___________________________________________________

2. ___________________________________________________
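Steps 2 through 5 of this chi-square test can be checked with a short script. A sketch only; the critical value 3.841 (α = .05, df = 1) is copied from a chi-square table rather than computed.

```python
# Observed frequencies from the worked example (conviction by gender)
observed = [[200, 150],   # No conviction: male, female
            [200, 50]]    # Conviction:    male, female

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
n = sum(row_totals)

chi2 = 0.0
for i, row in enumerate(observed):
    for j, f_obs in enumerate(row):
        f_exp = row_totals[i] * col_totals[j] / n   # expected cell frequency
        chi2 += (f_obs - f_exp) ** 2 / f_exp

print(round(chi2, 2))    # observed chi-square
print(chi2 > 3.841)      # True -> reject Ho of statistical independence
```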

Q34. What are the general guidelines for balancing the ideas of “statistical significance”
and “substantive significance”?

General Guidelines for Balancing Issues of Statistical and Substantive Significance



1. Don’t place too much emphasis on small percentage differences that are
statistically significant when using large samples.
2. ____________________________________________________________
3. ____________________________________________________________
4. ____________________________________________________________


Appendix C: Notetaking Guides (Chapter 11)

Analyzing Criminological Data


NOTE-TAKING GUIDE (Fill in form while reading the chapter)

Chapter 11: Analyzing Variation Within and Between Group Means

Concepts:
- ANOVA
- Partitioning variance
- Total variation
- Between-group variation
- Within-group variation
- Correlation ratio (η2)
- F-distribution and hypothesis testing
- Mean sum of squares between groups
- Mean sum of squares within groups
- F-ratio

Learning Objectives (LO):
LO1: Describe the major sources of variability in a quantitative dependent variable
LO2: Apply visual methods to identify and assess the magnitude of group differences
LO3: Calculate and interpret the correlation ratio (η2)
LO4: Conduct and evaluate formal tests of hypotheses about differences in group means

LO1: Describe the major sources of variability in a quantitative dependent variable


Q1. What is the purpose of “analysis of variance” (ANOVA) and what level of
measurement is required for using this statistical method:

Concept Purpose and Level of Measurement:

Purpose: _____________________________________________
Analysis of _____________________________________________
Variance Level of Measurement of Variables Required:
(ANOVA)
Independent Variable: _________________________________
Dependent Variable: ________________________________

Q2. The technique called “ANOVA” examines the differences in the _______ values of
the dependent variable for the different categories of the independent variable.


Q3. Define the basic concept of “partitioning variance” that underlies the ANOVA
method.

Concept Definition

Partitioning Definition: _________________________________________


Variance _________________________________________

Q4. Define “total variation” (SSTotal), “between-group variation” (SSBetween), and “within-
group variation” (SSWithin)

Concept Definition

Total Definition: ________________________________________


Variation
(SSTotal) ________________________________________

Between- Definition: ________________________________________


Group
Variation ________________________________________
(SSBetween)

Within- Definition: ________________________________________


Group
Variation ________________________________________
(SSWithin)

Q5. Identify the formulas for the 3 major sources of variation in ANOVA: total,
between-group, and within-group variation”:

Formulas                           Source of Variation (Total, Between-, Within-Group):

∑ (xi − x̄GM)²                      _______________________

∑ n(x̄Group − x̄GM)²                 _______________________

∑ (xi − x̄Group)²                   _______________________


Q6. The three sources of variation are linked in the following formula:


SSTotal = _________ + _________

Q7. Conduct an analysis of variance (ANOVA) on the following data to assess whether
the type of drug user influences the number of drug relapses:

Number of Drug Relapses   Type of Drug User   Group Mean

0                         Marijuana
0                         Marijuana           x̄ mj = ______
0                         Marijuana

0                         Meth
6                         Meth                x̄ meth = ______
6                         Meth

8                         Multi
10                        Multi               x̄ mult = ______
12                        Multi

                                              x̄ Grand Mean = ______

1. Look at the data and note any patterns or general observations:


___________________________________________________________________
2. Compute the grand mean (x̄GM) and group means (x̄mj, x̄meth, x̄mult). Put these mean values in the
shaded locations above ( ______ ).

3. Compute the total (SSTotal), between-group (SSBetween), and within-group (SSWithin) variation:

Total Variation = SSTotal = ∑ (xi − x̄GM)² = _____________

Between-Group Variation = SSBetween = ∑ n(x̄group − x̄GM)² = _____________

Within-Group Variation = SSWithin = ∑ (xi − x̄Group)² = _____________

4. Interpret the relative sizes of between-/within-group variation and mean differences:

_______________________________________________________________
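A sketch for checking the sums of squares in this problem, using the group data from the table above:

```python
# Drug-relapse data from Q7, grouped by type of drug user
groups = {"Marijuana": [0, 0, 0], "Meth": [0, 6, 6], "Multi": [8, 10, 12]}

all_x = [x for g in groups.values() for x in g]
grand_mean = sum(all_x) / len(all_x)                    # x-bar GM

ss_total = sum((x - grand_mean) ** 2 for x in all_x)
ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                 for g in groups.values())
ss_within = sum((x - sum(g) / len(g)) ** 2
                for g in groups.values() for x in g)

print(round(ss_total, 1))                 # SS Total
print(round(ss_between, 1))               # SS Between
print(round(ss_within, 1))                # SS Within
print(round(ss_between + ss_within, 1))   # equals SS Total
```

The between-group component dominates here, which is what a large difference in group means looks like.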


LO2: Apply visual methods to identify and assess the magnitude of group
differences


Q8. Use bar charts and line graphs for visually inspecting the mean differences on the
dependent variable across the categories of the independent variable.

Interpret the pattern of differences in group means in the following line graph:

[Line graph: “Investigative Hours by Crime Type” — y-axis: Investigative Hours (0 to 60);
x-axis categories: Murder, Rape, Robbery, Auto Theft]

Interpretation of mean differences:

The average hours of investigation are highest for murder
(x̄ = 50) and lowest for ___________ (x̄ = ________)

Q9. To determine how well information about the type of crime explains variation in
the hours of police investigation, criminologists must look at what two sources of
variation in investigative hours:

1. _________________________________________________

2. _________________________________________________

Q10. Use general comparisons of the relative sizes of between- and within-group
variation to identify the strength of bivariate associations in ANOVA:

1. Weak associations = Small between- and large within-group variation.

2. Moderate association = ____________________________________________

3. Strong association = ______________________________________________


Q11. Making inferences about strength of relationship by comparing numerical values


of group means and standard deviations

Causal Relationship      Group Means (x̄) and               Strength of     Why is relationship weak,
                         Standard Deviations (s)           Relationship    moderate, or strong?

Gender differences in    x̄ Male = $95,000, s = $750        Strong          Large between- ($45,000) and
detective salaries       x̄ Female = $50,000, s = $500                      small within- ($750 and $500)
                                                                           group differences

Regional differences     x̄ South = 42 months, s = 30       Moderate        _______________________
in prison sentences      x̄ North = 36 months, s = 24                       _______________________

Age differences in       x̄ Juvenile = 3 yrs, s = 4.5 yrs   ________        _______________________
length of probation      x̄ Adult = 2 yrs, s = 2.5 yrs                      _______________________

Court differences in     x̄ Civil = 9.7 days, s = 1.2       ________        _______________________
trial length             x̄ Criminal = 2.2 days, s = .5                     _______________________

Q12. Give two reasons why boxplots are useful as a visual approach for analyzing
patterns of variance:

Value of Boxplots as a visual method for analyzing variation:

1. __________________________________________________
2. __________________________________________________

Q13. What pattern of “boxes” and “whiskers” in a boxplot would indicate:


1. a strong bivariate association (i.e., large between- and small within-
group variation)? __________________________________________

2. a weak bivariate association (i.e., small between- and large within-


group variation)? __________________________________________


LO3: Calculate and interpret the correlation ratio


Q14. Define the “correlation ratio” (η2), its multiple interpretations, and computing
formula:

Concept Definition, Interpretations, and Computing Formula:

Definition: _____________________________________________
_____________________________________________
2
Correlation Interpretations of η :
Ratio (η2) 1. __________________________________________________
2. __________________________________________________
Computing Formula for η2:
η2 = _____________________________________________

Q15. List the general properties of the correlation ratio (η2 ):

Properties of the correlation ratio (η2):

1. η2 ranges in numerical value from .00 to 1.00.

2. _____________________________________________

3. _____________________________________________

Q16. General “rules of thumb” for establishing guidelines for identifying weak,
moderate, or strong correlation ratios (η2):
Strength of Bivariate Association Size of Correlation Ratio (η2 ):

Weak bivariate associations η2 = < .10

Weak to moderate bivariate associations η2 = __________

Moderate to moderately strong associations η2 = __________

Strong bivariate associations η2 = __________
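Once the sums of squares are in hand, η² is a one-line computation. The SS values below are taken from the drug-relapse example in Q7 (an assumption for illustration; substitute your own values):

```python
# Correlation ratio: eta^2 = SS_between / SS_total
# SS values assumed from the Q7 drug-relapse example
ss_between, ss_total = 152.0, 184.0

eta_sq = ss_between / ss_total
print(round(eta_sq, 3))   # ~0.826 -> a strong bivariate association
```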


LO4: Conduct and evaluate formal tests of hypotheses about differences in group means

Q17. Similarities between hypothesis testing in ANOVA and other analyses

Ho Testing in ANOVA is similar to other methods in these ways:

1. Establish a null hypothesis (Ho)

2. _________________________________________________

3. _________________________________________________

4. _________________________________________________

5. _________________________________________________

Q18. Differences between hypothesis testing in ANOVA and other analyses

Ho Testing in ANOVA differs from other methods in these ways:

1. The null hypothesis (Ho) is stated as the equality of means


across two or more groups (Ho: μ1 = μ2 = μ3 = … = μGrand Mean)

2. _________________________________________________

3. _________________________________________________

Q19. Define the “F-distribution”

Concept Definition

F-Distribution Definition: ________________________


________________________


Q20. Detailed Application of Testing Hypotheses with ANOVA and the F-Ratio

(Does the officer's gender influence the number of excessive force complaints?)

1. Develop the null hypothesis (Ho) about the population means.

Ho: μ Male = μ Female = ________________


2. Draw a random sample of cases and compute the sample’s overall mean
and group means (n = 20 [nMale = 10; nFemale = 10])

Number of excessive force complaints

Male:   0, 3, 5, 2, 3, 5, 4, 6, 6, 6
Female: 2, 3, 12, 1, 0, 1, 1, 0, 0, 0

x̄GM = ∑ xi / n = ______________

x̄ Males = ∑ xi / nMales = ______________

x̄ Females = ∑ xi / nFemales = ______________

3. Compute the total variation (SSTotal), between-group variation (SSBetween), and


within-group variation (SSWithin) from the sample data.


SSTotal = ∑ (xi − x̄GM)² = ____________________________________

SSBetween = ∑ n(x̄group − x̄GM)² = ____________________________________

SSWithin = ∑ (xi − x̄Group)² = ____________________________________


4. Standardize components of the total variation (SSBetween and SSWithin) by


dividing them by their degrees of freedom (df).
a. Compute degrees of freedom for between-group variation:
df1 = k – 1, where k = # of groups
b. Compute degrees of freedom for within-group variation:
df2 = n – k, where n = sample size and k = # groups
c. Divide SSBetween by df1 to compute average variation between the groups:
MSBetween = SSBetween / (k − 1)
d. Divide SSWithin by df2 to compute average variation within the groups:
MSWithin = SSWithin / (n-k)
In this example, df1 = _________ MSBet = _________
df2 = _________ MSWithin = _________

5. Calculate the F-ratio of the mean sum of squared differences between the
group means (MSBetween) and the mean sum of squared differences within the
group means (MSWithin).
a. Define “F-Ratio”=

b. Compute the observed F-ratio (Fobs):
Fobs = MSBetween / MSWithin = _______________________________

6. Establish the critical value of the expected F-ratio that defines rare and
common outcomes in an F-distribution under the null hypothesis of no
differences in the means
Steps in identifying the critical value of the expected F-ratio to reject Ho:
1. Look at the F-Distribution Table for a particular alpha level (e.g., α = .05)
2. __________________________________________________________
3. __________________________________________________________

In this example, expected F-value (α = .05, df= 1,18) is ____________


Critical F-Values for the F-Distribution (select values, α = .05)

df (k-1) 1 2 3 4 5 6 7 8 9

df (n-k)
1 161 199 215 224 230 234 237 239 241
2 18.5 19.0 19.2 19.3 19.3 19.3 19.4 19.4 19.4
3 10.1 9.55 9.28 9.12 9.01 8.94 8.89 8.85 8.81
4 7.71 6.94 6.59 6.39 6.26 6.16 6.09 6.04 6.00
5 6.61 5.79 5.41 5.19 5.05 4.93 4.88 4.82 4.77
6 5.99 5.14 4.76 4.53 4.39 4.28 4.21 4.15 4.10
7 5.59 4.74 4.35 4.12 3.97 3.87 3.79 3.73 3.68
8 5.32 4.46 4.07 3.84 3.69 3.58 3.50 3.44 3.39
9 5.12 4.26 3.86 3.63 3.48 3.37 3.29 3.23 3.18
10 4.96 4.10 3.71 3.48 3.33 3.22 3.13 3.07 3.02
15 4.54 3.68 3.29 3.06 2.90 2.79 2.71 2.64 2.59
20 4.35 3.49 3.10 2.87 2.71 2.60 2.51 2.45 2.39
25 4.24 3.38 2.99 2.76 2.60 2.49 2.40 2.34 2.28
50 4.03 3.18 2.79 2.56 2.40 2.29 2.20 2.13 2.07
75 3.97 3.12 2.73 2.49 2.34 2.22 2.13 2.06 2.01
100 3.94 3.09 2.70 2.46 2.30 2.19 2.10 2.03 1.97
500 3.86 3.01 2.62 2.39 2.23 2.12 2.03 1.96 1.90
1,000 3.85 3.00 2.61 2.38 2.22 2.11 2.02 1.95 1.89

Identify critical values of expected F-ratio under the following conditions:

a. α = .05, k = 3, n = 10  →  FExpected = ____________________

b. α = .05, k = 2, n = 22  →  FExpected = ____________________

c. α = .05, k = 2, n = 52  →  FExpected = ____________________
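For readers working in Python, critical F-values like those in the printed table can also be looked up with software. This is a convenience sketch, not part of the notetaking guide, and it assumes SciPy is installed; the `f_critical` helper name is illustrative.

```python
from scipy.stats import f  # assumes SciPy is available


def f_critical(alpha, k, n):
    """Critical F-value for a one-way ANOVA with k groups and n total cases."""
    df1, df2 = k - 1, n - k  # between- and within-group degrees of freedom
    return f.ppf(1 - alpha, df1, df2)


# The df = 1, 18 case used in the chapter's example:
print(round(f_critical(0.05, 2, 20), 2))  # → 4.41
```

The same call reproduces the printed table entries, e.g. `f_critical(0.05, 3, 10)` returns the df = 2, 7 value of about 4.74.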

7. Compare the observed F-ratio and expected F-ratio to reach a decision about
   whether to reject or not reject the null hypothesis (Ho).


If Fobs > Fexp , reject Ho that the means are equal across groups.
If Fobs < Fexp , do not reject Ho that the means are equal across groups.

In this example, observed F-ratio = ______ and expected F-ratio = ______ .

Thus, we would _________ the null hypothesis based on this data.


8. Construct an ANOVA summary table that presents the major statistical
   information used to test the null hypothesis.

Fill in this ANOVA summary table for this example of gender differences in
excessive force complaints:

ANOVA Summary Table

Source of Variation   Sum of Squares    df    Mean Sum of Squares    F-Value    Probability (α)

Between-Group             ______         1          ______            ______        ______

Within-Group              ______       ____         ______

Total                     ______        19


Interpretation of results:

______________________________________________________________________

______________________________________________________________________


Q21. Summaries of ANOVA results include the following information:


1. Comparison of the group means:

____ had a slightly higher average number of complaints than ____

2. The correlation ratio

___% of the variation in excessive force complaints was explained by gender.

3. The results of the F-test of the null hypothesis

Reject or not reject Ho? _______________________________________
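The hand computations in Q20 can be double-checked with a short sketch in plain Python. The data are the male and female complaint counts listed in step 2; the variable names are illustrative, and this sketch is not part of the guide itself.

```python
# One-way ANOVA by hand for the gender / excessive-force-complaints example.
male = [0, 3, 5, 2, 3, 5, 4, 6, 6, 6]
female = [2, 3, 12, 1, 0, 1, 1, 0, 0, 0]
groups = [male, female]
all_scores = male + female

n = len(all_scores)        # total sample size (20)
k = len(groups)            # number of groups (2)
grand_mean = sum(all_scores) / n

# Sums of squares (step 3 of the guide)
ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
ss_total = sum((x - grand_mean) ** 2 for x in all_scores)

# Mean squares and the F-ratio (steps 4-5)
df1, df2 = k - 1, n - k
ms_between = ss_between / df1
ms_within = ss_within / df2
f_obs = ms_between / ms_within

print(grand_mean, ss_between, ss_within, round(f_obs, 2))  # → 3.0 20.0 156.0 2.31
```

Comparing the observed F-ratio against the α = .05 critical value for df = 1, 18 (step 6) then completes the test in step 7.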


Appendix C: Notetaking Guides (Chapter 12)

Analyzing Criminological Data


NOTE-TAKING GUIDE (Fill in form while reading chapter)

Chapter 12: Assessing Relationships Between Quantitative Variables

Concepts:
- Scatterplots
- Correlation coefficient (r)
- Linear regression analysis
- Linear regression equation (Y = a + bX)
- Y-intercept (alpha)
- Unstandardized regression coefficient (slope)
- Ordinary Least Squares (OLS)
- Coefficient of determination (R2)
- Hypothesis testing in regression analysis

Learning Objectives (LO):
LO1: Assess the strength and direction of bivariate relationships using scatterplots
LO2: Calculate and interpret bivariate correlation and linear regression analyses
LO3: Assess the explanatory power of linear regression models
LO4: Demonstrate the logic in testing hypotheses about the coefficient of determination and regression coefficients

LO1: Assess the strength and direction of bivariate relationships using
scatterplots

Q1. What is a “scatterplot” and how are the X and Y variables arranged in this graph?

Concept       Definition and X-Y Graphic Representation

Scatterplot   Definition: ___________________________________________________
              ___________________________________________________
              The Structure of the Graph:
              Dependent Variable (Y), vertical or horizontal axis: _______________
              Independent Variable (X), vertical or horizontal axis: ______________


Q2. What are 4 crucial types of information about the relationship between two
quantitative variables (Variables Y and X) that can be determined by looking at
their scatterplot?

Scatterplots provide information on:

1. Relationship Direction (i.e., whether X and Y are positively related,
   negatively related, or have no bivariate relationship).
2. ________________________________________________
________________________________________________
3. ________________________________________________
________________________________________________
4. ________________________________________________
________________________________________________

Q2. Fill in the scatterplots with dots that show the following relationships between two
quantitative variables (Y, X):

A. Strong Positive Linear Relationship between Y and X:

B. Strong Negative Linear Relationship between Y and X:


C. Weak Positive Linear Relationship between Y and X:

D. Weak Negative Linear Relationship between Y and X:

E. Moderately Strong Positive Linear Relationship between Y and X with an Outlier:

F. No Linear Relationship between Y and X:


G. A Strong Non-Linear Relationship between Y and X:

Q3. Identify the direction (positive, negative) and strength (weak, moderate, strong) of the
bivariate relationships between Y and X depicted in the following scatterplots:


Scatterplot    Relational Direction      Strength of Association
               (Positive, Negative)      (Weak, Moderate, Strong)

_______________ _______________

_______________ _______________

_______________ _______________

Q4. Under what conditions are scatterplots not very useful for showing the nature and
magnitude of bivariate relationships between two quantitative variables?



LO2: Calculate and interpret bivariate correlation and linear regression analyses


Q5. Identify the purposes of correlation analysis and regression analysis. How do they
differ?

Concept                  Purposes and Differences

Correlation Analysis     Purpose: ____________________________
                         ____________________________

Regression Analysis      Purpose: ____________________________
                         ____________________________

Differences between correlation and regression:


______________________________________________________
______________________________________________________

Q6. Identify the properties of the correlation coefficient (r)


Properties of the Correlation Coefficient (r)

1. It is a measure of the linear association between two
   quantitative variables.

2. It is the most common measure of association used in
   criminological research.
3. ____________________________________________

4. ____________________________________________

Q7. Interpret the values of the following correlation coefficients (r):

Correlation Interpret the nature of the bivariate relationship

rXY = +1.00 ____________________________________________

rXY = .00 ____________________________________________

rXY = -1.00 ____________________________________________



Q8. Fill in the steps and formulas for computing the correlation coefficient (r):

Steps to Compute a Correlation Coefficient

Steps                                            Formulas

1. Compute the means for each variable.          x̄ = ∑x / n        ȳ = __________

2. Take deviations of individual scores          (x − x̄)    (y − ȳ)
   from their means.

3. ________________________________              (x − x̄)²   (y − ȳ)²

4. Multiply together the deviation scores        (x − x̄)(y − ȳ)
   of X and Y.

5. Compute the covariance of X and Y.            covxy = ∑(x − x̄)(y − ȳ) / (n − 1)

6. Compute the variances of X and Y.             sx² = ∑(x − x̄)² / (n − 1)    sy² = __________

7. Compute the correlation coefficient (rxy).    rxy = covxy / √(sx² · sy²)

Q9. Fill in the blanks in the following table to compute the correlation coefficient (rxy):

Sample Data and Basic Calculations for Correlation Coefficient

  x     y    x − x̄   y − ȳ   (x − x̄)²   (y − ȳ)²   (x − x̄)(y − ȳ)

 11     4    ____      1       ____         1           ____
 10     3    ____      0       ____         0           ____
 13     2    ____     −1       ____         1           ____
 15     5    ____      2       ____         4           ____
 12     1    ____     −2       ____         4           ____

∑ = ___   ∑ = 15   ∑ = 0   ∑ = 0   ∑ = ___   ∑ = 10   ∑ = ______

Compute these values:

x̄ = _____   ȳ = _____   sX² = ________   sY² = ________   covXY = ______   rXY = ______

Interpretation of rXY : ________________________________________________
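The Q8 steps can be sketched in plain Python to verify the Q9 hand calculations. The sample data are those in the Q9 table; this sketch and its variable names are illustrative and not part of the guide.

```python
# Correlation coefficient by hand, following the Q8 steps.
x = [11, 10, 13, 15, 12]
y = [4, 3, 2, 5, 1]
n = len(x)

x_bar = sum(x) / n                     # step 1: means
y_bar = sum(y) / n

dx = [xi - x_bar for xi in x]          # step 2: deviations from the means
dy = [yi - y_bar for yi in y]

cov_xy = sum(a * b for a, b in zip(dx, dy)) / (n - 1)   # steps 4-5: covariance
s2_x = sum(d ** 2 for d in dx) / (n - 1)                # step 6: variances
s2_y = sum(d ** 2 for d in dy) / (n - 1)

r_xy = cov_xy / (s2_x * s2_y) ** 0.5   # step 7: correlation coefficient
print(round(r_xy, 2))                  # → 0.33
```

By the Q10 guidelines, an r of roughly .33 falls in the weak-to-moderate range.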


Q10. General guidelines for interpreting the strength of the correlation coefficient (r):

Correlation Coefficient Strength of Correlation (weak, moderate, strong, perfect)



r = ± (1.00) Perfect Linear Relationship

r = ± (.70 to .99) ________________________________________

r = ± (.50 to .69) ________________________________________



r = ± (.35 to .49) ________________________________________

r = ± (.20 to .34) ________________________________________

r = ± (.10 to .19) ________________________________________


r = .00 ________________________________________

Q11. Identify the strength of the bivariate correlation by visual inspection of a scatterplot:

Scatterplot Estimate Correlation

r = - .80

r = - .40

r = .00 r = __________

r = + .40

r = + .80

Q12. Identify 3 assumptions necessary for using the correlation coefficient:

1. __________________________________________________________________________
2. __________________________________________________________________________
3. ___________________________________________________________________________


Q13. List in symbolic forms the “linear regression equation” based on (1) population
parameters and (2) sample estimates of them:

Population and Sample Specifications Linear Regression Equation

Population Form of Equation: Ŷ = α + βX

Sample Form of Equation: ________________________

Q14. Define the “y-intercept”, “slope”, and “error term” in the equation Y = a +bX + e.

Concept Linear Equation Elements (Y = a + bX + e )

Y-Intercept (a) Definition: ____________________________


____________________________

Slope (b) Definition: ____________________________


____________________________

Error Term (e) Definition: ____________________________


____________________________

Q15. List the three primary sources of errors in predicting Y from X in a linear regression
analysis.

Sources of Error in Predicting Y from X:

1. _________________________________________________
2. _________________________________________________
3. _________________________________________________

Q16. What are the properties of the slope (b) and y-intercept term (a) when they are
estimated using the method of “Ordinary least squares” (OLS) regression?

Properties of OLS estimates of the y-intercept and slope:


_________________________________________________
_________________________________________________

Q17. Based on the method of OLS regression, draw the regression line on the following
scatterplots that would minimize “the sum of squared errors in predicting Y from X”.

Draw “best fitting line” over the data points in these scatterplots

Q18. Errors in predicting Y from X are larger in which of the following graphs?

Largest Errors in Predicting Y from X, Graph A or B? _____________

Graph A Graph B

Q19. Define the “unstandardized regression coefficient”(b) in the equation Y = a +bX.

Concept Definition

Unstandardized Definition: ____________________________


Regression ____________________________
Coefficient (b)


Q20. Interpret the substantive value of the unstandardized regression coefficient (b) in
the following regression equations.


Interpretation of Unstandardized Regression
Linear Regression Equation Coefficient (b)

Y$ = 10,000 + 8000 Xed b = 8000 indicates that a 1-unit increase in the


years of education leads to an $8,000 increase in
one’s income.

YCrime Rate = 2000 – 100 X# of police ______________________________________

______________________________________

Yyears prison = 1.2 + 2.1 X# of arrests _____________________________________

______________________________________


Q21. Interpret the substantive value of the y-intercept term (a) in the following
regression equations.


Linear Regression Equation Interpretation of Y-intercept Term (a)

Y$ = 10,000 + 8000 Xed a = 10,000 indicates that one’s income would be

$10,000 if they did not have any formal years of
education ( Xed = 0)

YCrime Rate = 2000 – 100 X# of police ______________________________________

______________________________________

Yyears prison = 1.2 + 2.1 X# of arrests ______________________________________

______________________________________

Q22. List the computing formulas for the slope (b) and y-intercept (a):


b (slope) = _____________________________

Y-intercept (a) = _____________________________


Q23. Fill in the blanks in the following data to conduct an OLS regression analysis and
estimate the values of the slope (b) and y-intercept (a)

Computations for Regression Analysis


(Number of Prior Arrests (X) → Length of Prison Sentence in Years (Y))

  y    x    x − x̄   y − ȳ   (x − x̄)(y − ȳ)   (y − ȳ)²   (x − x̄)²     Ŷ    Yi − Ŷ   (Yi − Ŷ)²

  2    0    ____    ____        ____           ____        ____      ____    ____      ____

  3    1    ____    ____        ____           ____        ____      ____    ____      ____

  3    1    ____    ____        ____           ____        ____      ____    ____      ____

  2    3    ____    ____        ____           ____        ____      ____    ____      ____

  5    5    ____    ____        ____           ____        ____      ____    ____      ____

 15   10      0       0         ____           ____        ____      ____    ____      ____


Fill in the blanks to complete the steps to compute the slope and y-intercept from the data above:

1. Compute the means for each variable: ȳ = ______  x̄ = ______

2. Compute the deviations of scores from each mean [(y − ȳ) and (x − x̄)]

3. Multiply together the deviation scores from each mean [(y − ȳ)(x − x̄)] and sum them up: _____

4. Square the deviations from the mean of Y [(y − ȳ)²] and the mean of X [(x − x̄)²]

5. Compute the covariance of X and Y: covXY = Σ(y − ȳ)(x − x̄) / (n − 1) = ______

6. Compute the variance of X: sX² = Σ(x − x̄)² / (n − 1) = __________

7. Compute the slope (b): b = covXY / sX² = _______

8. Compute the y-intercept (a): a = ȳ − b x̄ = _______

9. Write out the linear regression equation: Ŷ = a + bX = _______
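The Q23 computations can be checked with a short Python sketch using the sample data above (prior arrests X, sentence length Y). The variable names are illustrative; this is not part of the guide.

```python
# OLS slope and y-intercept by hand, following the Q23 steps.
x = [0, 1, 1, 3, 5]
y = [2, 3, 3, 2, 5]
n = len(x)

x_bar, y_bar = sum(x) / n, sum(y) / n                    # step 1: means

# Steps 2-6: covariance of X and Y, and variance of X
cov_xy = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y)) / (n - 1)
s2_x = sum((xi - x_bar) ** 2 for xi in x) / (n - 1)

b = cov_xy / s2_x       # step 7: slope
a = y_bar - b * x_bar   # step 8: y-intercept

print(b, a)             # → 0.4375 2.125
```

The fitted equation (step 9) is then Ŷ = a + bX with the printed values.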

Q24. Derive predicted values of Y from values of X for the following regression equations

Regression Equation            Predicted Values of Y

Y = 100 + 1000 X               X = 5  → Y = 100 + 1000(5) = 5,100

Y = 100 + 1000 X               X = 10 → Y = __________________
Y = 8 − .50 X                  X = 10 → Y = __________________
Y = 8 − .50 X                  X = 5  → Y = __________________
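The Q24 predictions follow directly from substituting X into each equation; a minimal Python sketch (the `predict` helper is hypothetical, not from the text):

```python
def predict(a, b, x):
    """Predicted Y from the linear regression equation Y = a + bX."""
    return a + b * x


print(predict(100, 1000, 10))  # → 10100
print(predict(8, -0.50, 10))   # → 3.0
print(predict(8, -0.50, 5))    # → 5.5
```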


LO3: Assess the explanatory power of linear regression models


Q25. List 2 general ways to assess the overall “goodness of fit” in a regression analysis:

General methods to assess “goodness of fit” of regression models:

1. _________________________________________________
2. _________________________________________________


Q26. Describe the steps involved in the use of visual methods for assessing the “goodness
of fit” of a regression model.

Steps in using visual methods to assess “goodness of fit”:



1. Construct a scatterplot with the Y variable on the vertical axis and
the X variable on the horizontal axis.
2. _________________________________________________
_________________________________________________
3. _________________________________________________
_________________________________________________
4. _________________________________________________
_________________________________________________


Q27. Fill in the answers to these questions about using scatterplots to determine how well
an independent variable helps explain variation in a dependent variable:

1. If the scatterplot shows no clustering of points along this line, there are large

errors in predicting Y from X, and _______________________________.

2. If there is loose clustering around the line (i.e., data points form a large
football-like shape), _________________________________________


3. If these data points form a narrow band around the line, there are fewer
prediction errors, and ________________________________________




Q28. Which of the following regression lines has the smaller errors in prediction and
provides a better explanation for variation in the dependent variable?

Smaller errors in predicting Y from X, Graph A or B? ____________


Graph A Graph B


Q29. Define the “coefficient of determination” (R2) and how it is interpreted:

Concept Definition and Interpretation:

Coefficient of Definition: ____________________________


Determination __________________________________

Interpretation: ________________________
__________________________________

Q30. General guidelines for interpreting the strength of the coefficient of determination (R2):

Coefficient of
Determination Strength of Relationship (weak, moderate, strong, perfect)

R2 > .50          Strong Linear Relationship

R2 = .31 to .50   ________________________________________

R2 = .10 to .30   ________________________________________

R2 = .00 to .10   ________________________________________


LO4: Demonstrate the logic of testing hypotheses about the coefficient of
determination and regression coefficients


Q31. What is the null hypothesis for testing the overall fit of a regression model by using
the coefficient of determination?

Ho: Ryx2 = _____________________


Q32. The computing formula for the coefficient of determination (Ryx2 ) is the following:

Ryx2 = _____________________


Q33. The coefficient of determination (Ryx2 ) represents how much of the variation in a
__________ variable is explained by the __________variable.

Q34. The hypothesis of no relationship between Y and X (Ho: Ryx2 = 0) is tested with the
____________________ . Under this sampling distribution, the null hypothesis (Ho) will be
rejected in favor of the alternative hypothesis (Ha) when the observed F-value is _______
than the expected F-value.
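One common computing formula treats the coefficient of determination as explained variation over total variation. As a check on the hand calculations (not part of the guide), a Python sketch on the Q23 sample data would look like this:

```python
# R-squared as explained variation / total variation, using the Q23 data.
x = [0, 1, 1, 3, 5]
y = [2, 3, 3, 2, 5]
n = len(x)

x_bar, y_bar = sum(x) / n, sum(y) / n

# OLS slope and intercept (as computed in Q23)
b = (sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y))
     / sum((xi - x_bar) ** 2 for xi in x))
a = y_bar - b * x_bar

y_hat = [a + b * xi for xi in x]                       # predicted values
ss_explained = sum((yh - y_bar) ** 2 for yh in y_hat)  # variation explained by X
ss_total = sum((yi - y_bar) ** 2 for yi in y)          # total variation in Y

r_squared = ss_explained / ss_total
print(round(r_squared, 2))  # → 0.51
```

By the Q30 guidelines, an R² of roughly .51 indicates a strong linear relationship in this small sample.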

Q35. The test of the regression coefficient (Ho: b = 0) is evaluated by a t-distribution. The
test of this regression coefficient is the same as testing the hypothesis that
______________

Q36. Identify the steps in testing hypotheses about the slope (Ho: b=0) and y-intercept
(Ho: a = 0):

Steps in Hypothesis Testing about Slopes and Coefficient of Determination

1. Establish the null hypothesis (e.g., Ho: b=0; Ho: a =0).


2. ______________________________________________________________
3. ______________________________________________________________
4. ______________________________________________________________
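If SciPy is available, the t-test of the slope can be sketched with `scipy.stats.linregress`, which reports the OLS slope, its standard error, and a two-tailed p-value. Shown here on the Q23 sample data; this is a software convenience, not a step in the guide.

```python
from scipy.stats import linregress  # assumes SciPy is installed

# Testing Ho: b = 0 with the t-distribution.
x = [0, 1, 1, 3, 5]
y = [2, 3, 3, 2, 5]

res = linregress(x, y)
t_obs = res.slope / res.stderr  # observed t-value for the slope

print(round(res.slope, 4), round(t_obs, 2), round(res.pvalue, 3))
```

With only n = 5 cases, the slope of .4375 does not reach conventional significance, which illustrates why sample size matters in these tests.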
