
Texas School Survey

of Substance Use
1998
Methodology Report and Validity Analysis

For the Texas Commission on Alcohol and Drug Abuse

Prepared by
James A. Dyer, Ph.D.
Lisa O'Dell, Ph.D.
Bodhini Jayasuriya, Ph.D.
and
Melissa Tackett-Gibson

Public Policy Research Institute


Texas A&M University
College Station, TX 77843-4476
Introduction
The Public Policy Research Institute (PPRI), in conjunction with the Texas Commission on
Alcohol and Drug Abuse (TCADA), conducted the sixth statewide survey of drug and alcohol use
among Texas elementary and secondary students in the Spring of 1998. Originally implemented
in 1988 as a component of a larger survey assessing substance use among the state’s general
population, the school survey has since become an ongoing, independent project. District surveys
are offered every year and a statewide survey is conducted every two years. The 1998 assessment
provides follow-up data reflecting changes over the past eight years in grades four through
twelve.
The Texas School Survey project has two primary objectives. First, it serves to inform state and
local policy-makers about the extent and nature of the substance use problem in Texas schools.
Second, the statewide survey provides a standard of comparison for districts conducting local
assessments of drug and alcohol use.
The purpose of this document is to describe the methodology used to administer the 1998 Texas
School Survey of Substance Use. Following a brief introduction to the survey instrument itself,
attention is then focused on sample selection and survey administration procedures. Next,
methods for data processing and quality control are described. The report concludes with a review
of standard error estimates.
Survey Instrument

Two versions of the 1998 Texas School Survey of Substance Use were developed and
administered. The first was a six-page questionnaire designed for students in grades seven
through twelve. The second was a simplified, three-page instrument created for students in
grades four through six. The elementary survey differs from the secondary survey in that it uses
simplified language and omits some complex questions. Elementary students were asked
about only four types of substances: tobacco (cigarettes, snuff, and chewing tobacco),
alcohol (beer, wine, wine coolers, and liquor), inhalants, and marijuana. Secondary students were
asked about the same substances, as well as a broader range of illicit drugs including powdered
cocaine, crack, hallucinogens, uppers, downers, steroids, ecstasy, Rohypnol, and heroin. Other
sets of questions, in both the elementary and secondary instruments, were designed to assess
behavioral correlates of substance use and students’ perceptions of support available to help them
cope with substance-related problems.
The questionnaire was in a format that could be scanned optically, similar to that used for
standardized testing. It was designed for anonymous self-administration by students with the aid
of a school district staff member to pass out the survey, read a common set of instructions,
monitor the class during survey administration, and collect the instruments after they were
completed. The survey instruments are included in Appendix A.
Survey Modifications

While the 1998 Texas School Survey of Substance Use content remained essentially the same as
that used in previous surveys, items on the elementary and secondary questionnaires were
revised. Modifications, particularly with regard to the secondary questionnaire, were
implemented in order to increase accuracy of response and to reduce the length and repetition of
the questionnaires. Revisions were made to ensure compatibility with previous survey data.
Survey modifications are outlined in Appendix B.
Modifications to the Elementary Questionnaire
One modification and one addition to the elementary survey instrument were made. The
modification was to remove gin from the list of alcoholic beverages and to include tequila.
Additionally, a question was included in the 1998 survey that solicited information on parental
involvement in school-sponsored open houses and meetings.
Modifications to the Secondary Questionnaire
More substantial modifications were made to the secondary survey. In general, they included
clarification of the instructions; modifications to drug use questions and lists; changes in the time
periods for which drug use was assessed; and the addition of questions soliciting information on
the use of marijuana and the gambling habits of students.
Slight modifications were made to clarify survey instructions. For example, the 1998 survey
instructs students to "darken one bubble only," in contrast to the previous instructions to "only
choose one" answer. Other modifications included the capitalization of some drug categories,
such as inhalants, in order to emphasize the question matrices. Additionally, the phrase "to get
high" was added to follow each substance name (e.g., "use glue to get high") in the inhalant
question matrix (Q16). The phrase was originally added to the question in 1994 to distinguish
between the legitimate use of pharmaceuticals and recreational substance use. The current
modifications were intended to further emphasize recreational use.
Notable modifications were made to substance use questions and drug lists in the 1998
questionnaire. Primarily these were the addition of heroin and the abbreviation of the drug lists in
some questions. The 1998 Texas School Survey of Drug and Alcohol Use included eight separate
questions about drug use, each of which was asked for 17 different drugs. With the addition of
heroin, modifications were deemed necessary to reduce the length of the survey and forestall
respondent roll-off. Therefore, long drug lists were broken up into several abbreviated lists,
including licit substances (tobacco and alcohol products), and illicit substances (marijuana,
cocaine, etc.). The long drug list, including all 18 substances, was used in only three questions:
the age of first use; frequency of use; and availability of substances. All other substance use
questions utilized the shortened drug lists.
In the previous survey, separate questions asked "how many times" respondents used a substance
in their lifetime, in the past year, and in the past thirty days. In 1998, these time periods were used
as response categories to eliminate the need for separate "use-over-time" questions. A single
question format asked respondents to evaluate “how recently” they had used substances, with
response options ranging from no use ("Never heard of/Never used") to use in the past month,
use in the past school year, and use in their lifetime. The combination of "Never heard of" and
"Never used" in 1998 also reduced the
collection of essentially repetitive data.
Finally, five new questions were included in the 1998 Texas School Survey of Drug and Alcohol
Use; one question was deleted, and one was modified. The new questions included one to
measure how students perceive their parents' feelings about teenage cigarette smoking. A second
was designed to solicit information regarding the ways in which students smoke marijuana (e.g.,
joints, pipes). Additionally, three questions were included to solicit information about
gambling, including the purchase of lottery tickets and the amount of money spent on gambling in
the past year. Students were no longer asked whether they used substances during school or on
weekends.
The comparability of responses to the revised secondary instrument with those elicited from the
1996 questionnaire was evaluated twice. An initial pilot test in one school district showed greater
disparity between the two instruments than expected by chance. Because students appeared to be
under-reporting lifetime use, Items #17 and #18 in the new survey were further revised to ask
“How recently, if ever,” each of the substances had been used “even one time.”
A second pilot test assessed responses to this latter version of the 1998 secondary school survey.
A random sample of 456 students in one district was administered the new instrument, and 270
individuals in this same district responded to the 1996 version. The table below gives the
percentages of participants who said that they had ever (i.e., in their lifetime) used each
substance. As can be seen, the values are in most cases highly similar across both surveys. The
differences that occurred were relatively small and inconsistent in direction—i.e., they were not
consistently higher for one survey than for the other. The version tested in this second pilot study
therefore constituted the final 1998 secondary school survey.
TABLE 1. Pilot Test Results: Lifetime Use as Reported in the 1996
And Final 1998 Secondary Surveys
SUBSTANCE 1996 SURVEY 1998 SURVEY
Any Tobacco Product 57.4 60.6
Cigarettes 54.7 56.8
Smokeless Tobacco 18.0 18.1
Any Alcohol Product 80.0 80.7
Beer 68.9 65.7
Wine Coolers 70.5 69.5
Wine 61.5 60.2
Liquor 63.2 64.9
Marijuana 41.5 42.6
Cocaine 11.8 13.3
Crack 5.7 5.5
Hallucinogens 16.3 18.7
Uppers 18.5 17.2
Downers 12.0 14.0
Rohypnol 6.3 8.3
Steroids 1.7 4.4
Ecstasy 8.9 11.0
All Inhalants 24.9 28.9
Liquid or Spray Paint 9.4 14.7
Whiteout, Correction Fluid 9.3 14.9
Gasoline 4.6 7.0
Freon 1.3 2.9
Poppers, Locker Room, etc. 3.7 3.0
Glue 6.0 8.4
Paint or Lacquer Thinner 7.2 9.3
Octane Booster 0.7 1.1
Other Sprays 5.0 5.5
Other Inhalants 12.0 11.4
Any Illicit Drug 44.7 46.4

Survey Sample

The sample of students for the 1998 survey was designed to be a random sample of all public
school students between the fourth and twelfth grades in the state. In order to make
administration practical, students were selected using a multi-stage cluster sampling procedure.
This involved sampling districts, schools within districts, and classrooms within schools. All
students in a sampled classroom were asked to participate in the survey.
Selection of Districts

The primary analytic cluster was the school district, since the approval needed to administer the
survey had to be obtained at that level. Districts were sampled with the probability of selection
proportionate to size, and were stratified by the urbanization of the counties in which they were
located. The most urban stratum comprised counties in metropolitan areas of 1,000,000 or more;
the next, metropolitan areas of between 250,000 and 1,000,000; and the third, metropolitan areas
of less than 250,000. The remainder of the state constituted the final major stratum. Due to their
large size relative to other districts, a total of nine districts were sampled with a probability of
one, meaning that they were always selected as part of the sample.
The strata were further subdivided by relative size of the districts, so that each stratum had large
and small districts. In addition, two of the strata also had substrata of probability-one districts.
The strata are listed in Table 2.
TABLE 2. Distribution of Selected Districts by Urban Class Size

Stratum   Group
1-A       Large Urban Counties - larger districts
1-B       Large Urban Counties - smaller districts
1-P1      Large Urban Counties - probability-one districts
2-A       Medium Urban Counties - larger districts
2-B       Medium Urban Counties - smaller districts
2-P1      Medium Urban Counties - probability-one districts
3-A       Small Urban Counties - larger districts
3-B       Small Urban Counties - smaller districts
4-A       Non-Urban Counties - larger districts
4-B       Non-Urban Counties - smaller districts

Districts were selected for the state sample in the following manner:
1. The selected districts were listed separately for each of the four urbanization classes (1, 2,
3, and 4).
2. Within each urbanization class, districts were subdivided into probability-one districts
(P1), other large districts (A), and small districts (B). Ignoring the districts in P1, the
division between large and small districts was determined by ranking the districts in
order of total enrollment. The list was divided so that half the total enrollment in the
stratum was in the districts above the dividing point and half below it.
3. Within each stratum, except the probability-one strata, the districts were reordered based
on a random number weighted by the size of the district. As many districts as were
required were taken from the top of the list in each stratum. The number of districts
sampled from each stratum is listed in Table 2.
4. If a district refused to participate in the survey, and all conversion strategies failed, it was
replaced with the next available district in the same urban-class and size stratum.

Obtaining cooperation from those districts that were randomly selected for the state sample was
sometimes a problem. Two potential barriers to participation were addressed as school districts
were recruited. Both concerns centered on the costs of the survey. Some schools were simply
unable to afford the costs of the survey and survey administration. Others wanted to participate
and pay for the survey with federal Drug Free School funds. However, federal law mandated that
all assessments paid for through Drug Free School grants obtain active parental consent prior to
survey administration.1 Due to the large number of students surveyed, the additional burden upon
school district staff, and the potential effects on the survey sample, efforts to obtain active
consent were not viable. Accordingly, TCADA allocated non-federal funds so that the project
was able to waive participation and sampling fees, and pay all shipping costs for all participating
sample districts. As an additional incentive, districts were also offered discounted fees for
participation the following year and for campus-level analyses.

1 Active parental consent requires that a parent or guardian sign a waiver allowing the school district to
administer a survey to his/her child. The 1998 Texas School Survey of Drug and Alcohol Use was already
required by Texas A&M's Internal Review Board of Human Subjects and Research to obtain passive
consent from parents. Parents were informed about the survey and its administration date, and were offered
the opportunity to deny (in written form) the school district permission to survey their children.
Accordingly, parental notification was made and tacit consent was obtained for all survey participants.
Forty-nine of the original 85 selected districts participated in the study. Thirty-six districts were
not able to participate, and most declined due to the lack of time and resources involved in survey
administration. Many districts were preparing students for TAAS testing, and expressed concerns
about diverting resources away from that preparation. In place of the declining districts, 19
additional districts were included in the final sample.
A total of sixty-seven secondary and sixty-six elementary districts agreed to be surveyed as part
of the state sample, although two districts (Beeville ISD and Whitesboro ISD) provided no
elementary-level data (See Table 4). The cooperation rate of the originally sampled districts was
58 percent, with rates ranging from 20 to 80 percent among the sampled strata. The cooperation
rate was lowest for smaller districts in medium-sized urban counties (Stratum 2-B); however,
there were no consistent differences in cooperation rates between larger and smaller districts. In
general, non-urban districts tended to have lower cooperation rates (See Table 3).
Attempts were made to replace non-participating districts with randomly selected districts within
the same strata. A total of 64 percent of the students in the schools originally sampled were in the
final sample.

TABLE 3. Cooperation Rate by Strata

Strata            1A   1B   1P1  2A   2B   2P1  3A   3B   4A   4B   Total
Cooperation Rate  50%  80%  80%  80%  20%  50%  80%  80%  47%  33%  (58%)

TABLE 4. State Sample by Strata


Original State Sample Actual State Sample
Strata 1 A N=10 Strata 1 A N=7
Arlington Arlington
Aldine Garland
Cypress-Fairbanks Irving
Fort Bend Mesquite
Garland Pasadena
Katy Plano
Klein Spring Branch
Pasadena
Plano
Richardson
Strata 1 B N=10 Strata 1 B N=9
Birdville Birdville
Denton Denton
Judson Grand Prairie
Harlandale Grapevine-Colleyville
Goose Creek Harlandale
Grand Prairie Judson
Grapevine-Colleyville Keller
Keller Mansfield
Lamar Consolidated Spring
Mansfield

Strata 1P1 N=5 Strata 1P1 N=4


Dallas Dallas
Houston Fort Worth
Fort Worth Houston
San Antonio Northside
Northside

Strata 2A N=5 Strata 2A N=5


Beaumont Beaumont
La Joya La Joya
McAllen McAllen
Socorro Mission Consolidated
Weslaco Socorro

Strata 2B N=5 Strata 2B N=5


Eanes Del Valle
Leander Donna
Nederland Edcouch-Elsa
Port Neches-Groves Flour Bluff
San Marcos San Marcos

Strata 2P1 N=4 Strata 2P1 N=2


Austin Austin
El Paso Ysleta
Ysleta
Corpus Christi

Strata 3A N=6 Strata 3A N=6


Brownsville Brownsville
Clear Creek Clear Creek
Midland Harlingen
Tyler Midland
United Tyler
Wichita Falls United

Strata 3B N=10 Strata 3B N=9


Alvin Alvin
Brazosport College Station
College Station Copperas Cove
Copperas Cove Dickinson
Dickinson Friendswood
La Marque Longview
Longview Los Fresnos
Los Fresnos Pearland
Pearland Texas City
Texas City

Strata 4A N=15 Strata 4A N=11


Alice Beeville
Bay City Brownwood
Denison Corsicana
Eagle Pass Denison
Granbury Granbury
Jacksonville Jacksonville
Kingsville Levelland
Lufkin Plainview
Plainview Roma
Roma San Angelo
San Felipe-Del Rio Sherman
San Angelo
Sherman
Victoria
Waco

Strata 4B N=15 Strata 4B N=9


Bellville Alpine
Bridgeport Bridgeport
Brooks Brooks
Commerce Commerce
Cotulla Coldspring Oakhurst
Crockett Cuero
Crystal City Hearne
Cuero Robinson
Devine Whitesboro
Diboll
Lyford
Madisonville Consolidated
Muleshoe
Palacios
Robinson

Participation of Border School Districts

In order to enable further analysis of substance use among students living on the Texas-Mexico
border, school districts along the border were encouraged to participate in the 1998 Texas School
Survey. The survey was offered at no cost to border districts, and data were collected from a
broadly defined 28-county area. Subsequent analyses will focus on this larger area and a more
strictly defined 13-county border region.
Districts in this designated special study area were encouraged to survey all eligible students. A
total of 40 districts across 16 counties participated, a list of which is found in Appendix C. A
total of 102,224 students were surveyed as part of this special assessment, consisting of 42,319
elementary students and 59,905 secondary students. In addition, 14 of the border districts
surveyed were also included in the state survey sample. Since data were collected for all students
in these 14 districts, a sample of students was selected for inclusion in the state analyses.
Allocation of Surveys among Districts

The state survey sample was designed to collect data from a minimum sample of about 5,555
students per grade; however, many districts chose to survey more than the minimum number of
students specified in the state sampling plan. Some extremely small districts received somewhat
more than a strict proportional allocation because, while the data were only needed from one or
two students per grade, the survey was administered to the entire classroom. Similarly, in a few
extremely large (urban) districts, fewer students were needed for accuracy than would result from
a true proportional allocation. All surveys submitted from a cooperating district were included in
the sample. Accordingly, in the final analyses, the data were weighted to provide an accurate
proportional allocation.
Thus, although it had been estimated that the state sample would include at minimum 50,000
students, it actually included 91,168 elementary students and 158,616 secondary students (See
Table 5). This significantly improves the accuracy of estimates.

TABLE 5. Number of Surveys Included in State Sample

             Total Non-blank   Number    Number     Percent
             Surveys           Useable   Rejected*  Rejected
Secondary    165,731           158,616   7,115      4.3%
Elementary   92,858            91,168    1,690      1.8%
Total        258,589           249,784   8,805      3.4%

*Surveys were rejected because the responses indicated exaggeration or the survey could not be
matched to a sampled school and grade.

Allocation of Surveys among Classrooms and Campuses

Once the number of surveys to be administered in each district was established, the next step was
to determine the number of classrooms to be surveyed per grade. This was achieved by dividing
the number of questionnaires per grade (ascertained for each district using proportional
population calculations) by the average number of students per class (20 for grades four through
six, 22 for grades seven through twelve). The result of this computation indicated the total number
of classes to be surveyed. These classes were selected so that as many different campuses as
possible were in the final sample. Ideally, the classrooms surveyed were evenly distributed
across all campuses in the district. If there were more campuses containing a given grade than
classrooms needed, then a simple random selection procedure was used to determine which
campuses would be sampled. In general, once a campus was selected, all relevant grades at that
campus were surveyed. Therefore, campus selection was not independent between grades.
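The classroom computation described above reduces to a simple division. As a sketch (rounding up to a whole classroom is an assumption; the report does not state how fractional results were handled):

```python
import math

def classrooms_needed(surveys_per_grade, grade):
    """Classrooms to survey in one grade: the questionnaires allotted
    to that grade divided by the average class size (20 students in
    grades 4-6, 22 in grades 7-12), rounded up to a whole classroom."""
    avg_class_size = 20 if 4 <= grade <= 6 else 22
    return math.ceil(surveys_per_grade / avg_class_size)

print(classrooms_needed(110, 5))  # elementary: 110 / 20 -> 6
print(classrooms_needed(110, 9))  # secondary:  110 / 22 -> 5
```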
TABLE 6. Survey Distribution by Grade

Grade               Number of Usable Surveys   Percentage
Elementary   4th     28,554                     31.3%
             5th     32,274                     35.4%
             6th     30,340                     33.3%
             Total   91,168                     100%
Secondary    7th     31,188                     19.7%
             8th     29,892                     18.8%
             9th     31,304                     19.7%
             10th    24,729                     15.6%
             11th    22,445                     14.2%
             12th    18,758                     11.8%
             Total   158,616                    100%

Selection of Classrooms within Campuses

After the total number of classrooms to be surveyed in each grade at each campus was
determined, it was necessary to identify specific classrooms. This selection procedure was
performed by district personnel based on a set of guidelines provided by PPRI (illustrated in
Appendix D). Members of district staff, under the direction of the Drug Free Schools
representative, were asked to make a list by grade (according to teacher’s last name or some other
convenient method) of all classes held during a selected class period.
Some school districts sampled all students in all or some of the grades. In these districts, the
methodology outlined above did not apply to the grades sampled at 100 percent. Houston ISD
and Austin ISD drew a random sample of students from a list of all students in the district;
therefore, no campuses or classrooms were sampled there.
Survey Administration Procedures

Districts selected for inclusion in the state sample were notified about the project via letter and
were sent a descriptive brochure, illustrated in Appendix E. State sample districts that planned to
administer a local drug and alcohol survey had virtually no procedural changes resulting from
their involvement in the statewide project. In those districts that surveyed grades four through
twelve, sufficient data were collected from all relevant campuses to meet the data collection needs
of the statewide survey. These districts benefited from their inclusion in the state survey project
because they were not charged for the surveys that became part of the state database. The surveys
from these districts were weighted down so that each district's contribution to the final sample
was in correct proportion.
In those instances where state sample districts were collecting local data for an incomplete
combination of grades, or where they were not conducting local surveys at all, the campus and
classroom selection procedures described above were applied. Arrangements for giving the
survey were established on an individual basis with these districts. Since those not doing local
surveys did not stand to gain directly from having the survey administered in their district, an
effort was made to be as accommodating as possible. PPRI was able to arrange survey
administration in the selected schools and classes by school personnel.
Houston ISD and Austin ISD used a computer-drawn random sample of all students. On each
campus where sampled students were located, the students were asked to go to a specified room
where the survey was conducted. Once in the room, the survey was administered as it would be
in a classroom in the other districts.
Relevant personnel in the selected districts and campuses were provided with complete
instructions and materials necessary to administer the survey (see Appendix F). Classrooms were
selected randomly by PPRI based on information from a computer printout from the district or
Campus Information Form. Teachers in selected classrooms were given a script to read so that all
students would receive a standardized set of instructions. Teachers were also asked to complete a
Classroom Identification Form that provided data on the number of students that should have
taken the survey but were absent, and the number that were present but failed to complete the
survey. This information was useful for computing error estimates. After the surveys were
administered in each classroom, they were sealed in an envelope along with the Classroom
Identification Form. The envelopes from all participating classrooms were collected and returned
to PPRI.
Data Entry and Analyses

As noted earlier, the format of the survey instruments enabled them to be scanned optically.
Upon receipt at PPRI, the instruments were logged in, coded, and scanned by staff or trained
personnel.
Exaggerated Responses

Because the Texas School Survey data are based entirely upon respondents’ description of their
own behavior, it is inevitable that some students will under- or over-report their use of drugs or
alcohol, and to the extent possible PPRI attempted to identify and eliminate data from those
respondents. Two checks were incorporated into the data analysis program to identify
exaggerators. First, both elementary and secondary students were asked about their use of a false
drug called “cosma.” Data from students claiming to have used this substance were considered
suspect and dropped from the analyses.
Second, checks were run to identify any students claiming extremely high levels of drug and
alcohol use. Unbelievably high substance use for elementary students was defined as the use of
five or more substances on 11 or more occasions in the past year or over a lifetime. Secondary
students were defined as exaggerators based on the following criteria: (1) students reported that
they had five or more drinks of two or more beverages every day; (2) students reported that they
had consumed three or more alcoholic beverages every day; or (3) students reported that they
used four or more drugs (other than cigarettes, alcohol, or steroids) eleven or more times in the
past month. As with those cases in which students reported using “cosma,” data from students
reporting exaggerated use were also dropped from the analyses. Less than two percent (1.8%)
of the total elementary sample exaggerated. The percentage of secondary school students who
exaggerated (4.3%) was more than twice that of elementary students.
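The secondary-school screen above can be sketched as a simple filter over each record. This is a minimal sketch assuming a hypothetical record layout; the field names are not the survey's actual variable names:

```python
def is_secondary_exaggerator(resp):
    """Apply the report's three secondary-school exaggeration criteria
    to one survey record (field names are hypothetical)."""
    # (1) five or more drinks of two or more beverages every day
    heavy = sum(1 for every_day, drinks in resp["beverages"]
                if every_day and drinks >= 5)
    if heavy >= 2:
        return True
    # (2) three or more alcoholic beverages consumed every day
    if resp["alcohol_daily"] and resp["drinks_per_day"] >= 3:
        return True
    # (3) four or more drugs (other than cigarettes, alcohol, or
    #     steroids) used eleven or more times in the past month
    frequent = sum(1 for times in resp["drug_use_past_month"].values()
                   if times >= 11)
    return frequent >= 4

record = {
    "beverages": [(True, 6), (True, 5), (False, 2)],  # (every day?, drinks)
    "alcohol_daily": False,
    "drinks_per_day": 0,
    "drug_use_past_month": {"marijuana": 3},
}
print(is_secondary_exaggerator(record))  # True: criterion (1) is met
```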
Unreported Grade Levels

When students failed to report their grade level, it was impossible to determine unequivocally in
which grade these students’ data should be analyzed. When a grade level was missing, an
estimate of the grade was made based on the students’ age and the data were retained. Table 7
identifies the range of students' ages and the corresponding grade levels that were assigned. If
both grade and age were missing, the data were dropped from the analyses.

TABLE 7. Age-Based Grade Assignments

Age           Elementary Grade Level    Age           Secondary Grade Level
9             4th Grade                 12            7th Grade
10            5th Grade                 13            8th Grade
11            6th Grade                 14            9th Grade
                                        15            10th Grade
                                        16            11th Grade
                                        17 or older   12th Grade
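The age-based assignments reduce to a simple rule (the table does not cover ages below 9, so treating them as unassignable is an assumption here):

```python
def grade_from_age(age):
    """Estimate a missing grade level from the student's reported age,
    per the age-based assignments above. Returns None when the record
    must be dropped (age also missing, or below the assignable range)."""
    if age is None or age < 9:
        return None
    if age >= 17:
        return 12        # 17 or older -> 12th grade
    return age - 5       # 9 -> 4th, 10 -> 5th, ..., 16 -> 11th

print(grade_from_age(10), grade_from_age(14), grade_from_age(18))  # 5 9 12
```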

Quality Control Measures

To ensure the quality of the statewide survey data, a number of internal checks were put into
place to guide survey processing. First, a quality control analyst oversaw the implementation of
all pre- and post-analysis quality control procedures. As the following paragraphs describe, many
aspects of PPRI’s plan for quality control were embedded in automated procedures. However,
there is no replacement for human oversight. The quality control analyst monitored and tracked
the processing of each district’s surveys from the initial mailing through the production of the
final state report. Responsibilities included ensuring that surveys were properly coded and
scanned and checking for anomalies in the final table of results.
In addition to the safeguards resulting from careful project oversight, there were also a number of
procedural checks against error. For example, there was a possibility, however remote, that after
the bindings of a set of survey instruments were cut, the instruments could be dropped or
otherwise placed out of order. If this occurred, it is conceivable that some pages of data could
have been read into the incorrect computer record. To resolve this problem, each instrument used
in the 1998 survey was printed with a five-digit “litho-code” number. With this coding process,
every page of a given instrument is printed with the same scannable number, but a unique number
is assigned to every instrument. By using the litho-code, when each page of an instrument is
scanned it will automatically be read into the correct computer record. In this way, even if the
pages from different instruments were shuffled together and read randomly, all data derived from
the same instrument would automatically be read to the same data record.
Litho-coding also enabled PPRI to confirm that data from every survey instrument read was
associated with the correct district. Survey instruments were mailed to participating districts in
consecutive order. By recording the beginning and ending instrument numbers going to each
district, PPRI was able to check the litho-codes scanned for a given district. In this way, any
stacks of data that could potentially have been inadvertently mislabeled could be easily identified.
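Both litho-code safeguards can be sketched in a few lines, assuming simple tuple-shaped page data (the codes, page contents, and district ranges below are invented for illustration):

```python
from collections import defaultdict

# Hypothetical scanned pages: (litho_code, page_number, raw_data).
# Pages arrive shuffled, as if instruments were dropped after cutting.
pages = [(10021, 2, "b2"), (10007, 1, "a1"),
         (10021, 1, "b1"), (10007, 2, "a2")]

def assemble_records(scanned_pages):
    """Group scanned pages into one record per litho-code, so every
    page is routed back to the instrument it was printed on."""
    records = defaultdict(dict)
    for code, page_no, data in scanned_pages:
        records[code][page_no] = data
    return dict(records)

def district_for_code(code, ranges):
    """Match a litho-code to its district using the recorded beginning
    and ending instrument numbers shipped to each district."""
    for district, (first, last) in ranges.items():
        if first <= code <= last:
            return district
    return None

records = assemble_records(pages)
print(records[10007])                                    # {1: 'a1', 2: 'a2'}
print(district_for_code(10021, {"District X": (10000, 10499)}))
```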
Programming checks were also incorporated through cross-analysis: the same data were run in
several different ways using existing programs, and the outputs were compared for consistency.
Confidence is high that these quality control
features ensured valid and reliable survey findings.
Weights, Standard Errors, and Confidence Intervals

Weights were applied to each case based on the stratum (i.e., Urban Class I through IV), district,
and campus. The weights were applied so that the aggregation of students in each campus,
district, and stratum reflected their proportions in the actual campus, district, and stratum
populations. The formulae used to determine these weights are presented in Appendix G.
Standard errors and confidence intervals were estimated for each grade and for the aggregate
sample. The formulae used are presented in Appendix H. Tables of standard errors and
confidence intervals for 30-day and lifetime use of substances by grade are presented in
Appendix I.
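Appendix G's formulae are not reproduced here, but the underlying idea, weighting each cluster so that its share of the weighted sample matches its share of the population, can be sketched as follows (the enrollment and survey counts are invented for illustration):

```python
def proportional_weight(pop_count, pop_total, samp_count, samp_total):
    """Weight that restores a cluster's proportion of the population:
    its population share divided by its share of the sample. A sketch
    of the idea behind the Appendix G formulae, not the formulae
    themselves."""
    pop_share = pop_count / pop_total
    samp_share = samp_count / samp_total
    return pop_share / samp_share

# A district holding 2% of state enrollment but contributing 5% of
# completed surveys is weighted down:
w = proportional_weight(pop_count=20_000, pop_total=1_000_000,
                        samp_count=5_000, samp_total=100_000)
print(round(w, 2))  # 0.4
```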
Item Response Analysis

As with any survey, there were potential threats to the validity of the conclusions drawn from the
data. Therefore, it was important to examine the ways in which students were responding to the
questionnaire. Following the collection and TCADA approval of the data, all of the items on the
survey were analyzed to assess the integrity of the data. We were specifically interested in
exploring potential misinterpretation of questions, dishonest responses, and inattention to the
survey questions and instructions.
Separate analyses were conducted for the total sample of elementary and secondary school survey
responses. Additional analyses, exploring potential ethnic and grade-level differences were also
conducted for the statewide secondary instrument.
Overall, the vast majority of students in both elementary and secondary schools appeared to have
provided valid responses to the 1998 Texas School Survey of Substance Use. Few students were
classified as giving exaggerated responses. Likewise, the inconsistency that did occur was
most likely due to inattention to survey instructions and questions, misinterpretation of
the questions, or fatigue. Specific findings of the item analyses are highlighted below. A detailed
discussion of the item analyses for both instruments is provided in Appendix J.
Elementary Survey

 Some students used the "Never heard of" and "Never used" response options
interchangeably.
 When items concerning use of all substances were examined, generally less than
1.00% of the responses were inconsistent with initial reports of the most recent use.
 Students who responded inconsistently about substance use were more likely to have
initially reported no use and then acknowledged use on a later question, than to have
cited use and recanted the use later in the survey.
 Questions at the end of the survey were somewhat more likely to be left unanswered
than those at the beginning.
 Students began answering most items that contained questions about multiple drugs;
however, they routinely neglected to finish the item and answer questions about the
final few drugs on the list.
Secondary Survey

 The largest percentages of inconsistent responses were most likely due to the
survey's use of different terms for the same categories of substances across questions
(e.g., cigarettes versus tobacco products, or spray paint versus inhalants).
 Other inconsistencies may be attributable to different interpretations of "use". Some
students appear to interpret use in an answer as "regular use", whereas others seem
to cite "use" when they may mean that they have "tried" a substance.
 Very few students who reported substance use in the past 30 days early in the survey
subsequently denied use of the substance in later questions about the past 30 days.
 In contrast to the elementary students, secondary students (across all grades) were
generally more likely to report use of a substance and later deny it than vice versa.
 Asian and Caucasian students were more likely to respond consistently than students
from other ethnic backgrounds.
 Students were more likely to leave questions at the end of the survey unanswered
than those at the beginning.
 Some groups of questions were skipped entirely by fairly large percentages of
respondents.
Conclusion

The Texas School Survey has become a valuable policy tool for both state and local educators
and policy-makers. The survey, performed every two years, provides timely and relevant
information about current drug and alcohol use patterns among young people enrolled in the
Texas public schools. Furthermore, longitudinal analysis can provide insight into changes in
drug and alcohol prevalence over time. As was noted in the introduction, every state survey
culminates in a TCADA publication providing an overview of findings to date. Data is also
available for independent analysis by policy-makers and academicians.
APPENDIX A

Survey Instruments

The Survey Instruments have been omitted.


APPENDIX B

Survey Modifications

The Survey Modifications have been omitted.


APPENDIX C

Participating Border Districts

Independent School Districts Along Texas/Mexico Border


Participating in the 1998 Texas School Survey of Substance Use

 Alpine ISD (022-901) El/Sec
 Brooks ISD (024-901) El/Sec
 Benavides ISD (066-901) El/Sec
 Brackett ISD (136-901) El/Sec
 Brownsville ISD (031-901) El/Sec
 Comstock ISD (233-903) El/Sec
 Dell City ISD (115-903) SEC ONLY
 Donna ISD (108-902) El/Sec
 Edcouch-Elsa ISD (108-903) NOT INCLUDED (El/Sec)
 Edinburg ISD (108-904) El/Sec
 Fabens ISD (071-903) EL ONLY
 Freer ISD (066-903) El/Sec
 Harlingen ISD (031-903) El/Sec
 Hidalgo ISD (108-905) El/Sec
 Iraan-Sheffield ISD (186-903) El/Sec
 Jim Hogg ISD (124-901) El/Sec
 La Feria ISD (031-905) El/Sec
 La Joya ISD (108-912) El/Sec
 Laredo ISD (240-901) El/Sec
 Los Fresnos CISD (031-906) El/Sec
 Marfa ISD (189-901) El/Sec
 McAllen ISD (108-916) El/Sec
 Mercedes ISD (108-907) El/Sec
 Mission CISD (108-908) El/Sec
 Pharr-San Juan-Alamo ISD (108-909) El/Sec
 Point Isabel ISD (031-909) SEC ONLY
 Presidio ISD (189-902) El/Sec
 Progreso ISD (108-910) El/Sec
 Raymondville ISD (245-903) El/Sec
 Rio Grande City ISD (214-901) NOT INCLUDED (El/Sec)
 Rio Hondo ISD (031-911) El/Sec
 Roma ISD (214-903) El/Sec
 San Diego ISD (066-902) El/Sec
 San Perlita ISD (245-904) SEC ONLY
 Santa Rosa ISD (031-914) El/Sec
 Socorro ISD (071-909) El/Sec
 Tornillo ISD (071-908) El/Sec
 United ISD (240-903) El/Sec
 Valley View ISD (108-916) El/Sec
 Ysleta ISD (071-905) El/Sec
APPENDIX D

Classroom Selection Guidelines

Texas School Survey of Substance Use

Teacher Lists for Random Sample Selection

For elementary

 List the names of all teachers who teach grades 4, 5, and 6;
 List the grade level that each teaches; and
 List the number of students per class.

For secondary

 Choose either a class period in which all students are present or a subject in which all students
must be enrolled;
 List all teachers for that class period or that subject; and
 List the number of students per grade level in each of those classes.

Note:
If a subject has been chosen rather than a particular class period, and where a single
teacher teaches multiple classes, a classroom identifier of some sort (e.g., the period)
should be included in the column under the heading of “period”.

PPRI will randomly pick entire classrooms until we reach the target numbers per grade
level for the sample. Then, we'll send you the survey materials along with a Master List
of those classes randomly selected to be surveyed.

Helpful Hint:
Electronic submissions (i.e., as an attachment to an email) are encouraged for speedy
turnaround, since the random sample is selected by a computer program.

Remember:
Sixth graders are considered elementary students for the purposes of this survey.
APPENDIX E

Descriptive Brochure
The Descriptive Brochure has been omitted.
APPENDIX F

Survey Administration Guide

The Survey Administration Guide has been omitted.


APPENDIX G

Computation of Sample Weights


The Computation of Sample Weights has been omitted.
APPENDIX H

Computation of Standard Errors

Standard Error Computations

The standard errors of the percentages were estimated with the SUDAAN software package. The
variance estimation techniques in SUDAAN are specifically designed to address the complexity
of sample designs in finite population sampling. The variance estimators for these complex
designs generally do not have closed-form solutions and therefore require iterative
computational approaches that yield solutions as close to the true value as possible.
SUDAAN was perhaps the first software package available for this purpose and is gaining wide
popularity in survey sampling applications.

In applying SUDAAN to the current survey, which consisted of a two-stage stratified
sampling design, the input SAS data set requires the following re-coding of the
stratification variable for the certainty strata, 41 and 42 (i.e., for districts sampled with
probability one):

if strata=41 or strata=42 then do;
   strata=distid;
   distid=class;
end;
(Here: strata= stratum identifier, distid= district identifier, and class= classroom
identifier). Next, the data set needs to be sorted as follows:
proc sort data=___;
by strata distid class;
run;

The SUDAAN code needs to incorporate the following:

proc crosstab data=" " filetype=sas design=WR;
nest strata distid;
The specified design WR uses Taylor series variance estimation methods² in computing
standard errors of percentages in the 2-stage stratified sampling design of this survey.

² See SUDAAN User’s Manual (Volume 1), page 3-4.
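SUDAAN's internal computations are not reproduced here. The following Python sketch only illustrates, on invented data, the general shape of the with-replacement (WR) Taylor linearization estimator that design=WR applies: linearized totals are formed per PSU (district) within each stratum, and the between-PSU variability gives the variance of the weighted percentage. The data layout and row values are hypothetical.

```python
# Rough sketch of a with-replacement (WR) Taylor linearization variance
# estimate for a weighted percentage. Each row is (stratum, psu, weight, y),
# with y = 1 if the student reports use and 0 otherwise; all values invented.
from collections import defaultdict

def weighted_pct_se(rows):
    W = sum(w for _, _, w, _ in rows)
    p = sum(w * y for _, _, w, y in rows) / W          # weighted proportion
    # Linearized totals z_hi = sum_j w_hij * (y_hij - p) per PSU within stratum.
    z = defaultdict(float)
    for h, i, w, y in rows:
        z[(h, i)] += w * (y - p)
    strata = defaultdict(list)
    for (h, i), total in z.items():
        strata[h].append(total)
    var = 0.0
    for zs in strata.values():
        n_h = len(zs)
        zbar = sum(zs) / n_h
        # Between-PSU variance within stratum, WR first-stage assumption.
        var += n_h / (n_h - 1) * sum((zi - zbar) ** 2 for zi in zs)
    return p, (var ** 0.5) / W

rows = [  # stratum, psu (district), weight, use indicator
    (1, 'd1', 2.0, 1), (1, 'd1', 2.0, 0),
    (1, 'd2', 2.0, 0), (1, 'd2', 2.0, 0),
    (2, 'd3', 1.0, 1), (2, 'd3', 1.0, 1),
    (2, 'd4', 1.0, 0), (2, 'd4', 1.0, 1),
]
p, se = weighted_pct_se(rows)
```

The nesting of PSUs within strata mirrors the `nest strata distid;` statement in the SUDAAN code above.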
APPENDIX I

Standard Error and Confidence Interval Tables

The Standard Error and Confidence Interval Tables are in additional files.
APPENDIX J

Item Response Analyses

ITEM RESPONSE ANALYSES:

1998 TEXAS SCHOOL SURVEY OF SUBSTANCE USE

OVERVIEW

The 1998 Texas School Survey of Substance Use was conducted by the Public
Policy Research Institute (PPRI) in conjunction with the Texas Commission on Alcohol
and Drug Abuse (TCADA). Following collection and TCADA approval of the data,
responses were analyzed to assess the integrity of the data. Specifically, the possibility of
students’ lack of truthfulness in responding, misinterpretation of items and/or response
options, and inattention to the survey task were acknowledged as potential threats to the
validity of conclusions drawn from the data.
Separate analyses were conducted for the total sample of elementary and
secondary school survey responses. In addition, potential grade-level and ethnic
differences in responding were examined for the statewide secondary instrument. The
analyses, and the results they yielded, are described below for each survey.

ELEMENTARY SCHOOL SURVEY INSTRUMENT

The elementary version of the 1998 Texas School Survey of Substance Use was
administered to more than 90,000 students within Texas school districts. Respondents
were approximately equally distributed across grade 4 (31.3%), grade 5 (35.4%), and
grade 6 (33.3%). Blank surveys, surveys that could not be matched to a sampled district
or grade, and those from individuals identified as “exaggerators” were omitted from the
database, yielding a total sample of 91,168. After describing what constituted
“exaggerated responses,” the discussion below focuses upon “Never Heard Of/Never
Used” responses, response inconsistency, and missing data patterns.

Exaggerated Responses. “Exaggerators” were defined as individuals who: (i)
reported any use of the nonexistent drug COSMA; and/or (ii) said that they had used
each of 5 or more substances 11+ times in either the past year or their lifetime. Less than
two percent (1.8%) of the total elementary sample exaggerated. Although these
responses could have resulted from inattention to the survey task, it was believed
that they most likely reflected untruthfulness. Therefore, as mentioned above,
exaggerators’ surveys were omitted from the sample.
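The screen above amounts to a simple filter. In the sketch below, the field names ('cosma_use', 'times_11_plus') are hypothetical; only the decision rule itself, any COSMA use or 11+ uses of five or more substances, follows the text.

```python
# Hedged sketch of the elementary exaggerator screen. Field names are
# hypothetical; the rule (any COSMA use, or 11+ uses of 5+ substances)
# follows the definition in the text.
def is_exaggerator(resp):
    if resp.get('cosma_use', 0) > 0:           # nonexistent drug reported
        return True
    heavy = sum(1 for n in resp.get('times_11_plus', []) if n)
    return heavy >= 5                          # 11+ uses of 5 or more substances

sample = [
    {'cosma_use': 1, 'times_11_plus': []},
    {'cosma_use': 0, 'times_11_plus': [True] * 5},
    {'cosma_use': 0, 'times_11_plus': [True, False, True]},
]
flags = [is_exaggerator(r) for r in sample]   # [True, True, False]
```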

“Never Heard Of” Responses. It was assumed that students who were providing
valid responses to the survey items would admit to having heard of social substances—
e.g., those that are legal, used within public domains, advertised in the media, etc.
Among the substances about which respondents were questioned in this survey, social
substances would likely include tobacco products, beer, wine coolers, wine, and liquor.
Table E1 depicts the total sample percentages of “Never Heard Of” responses to Items
#12-#16 and #18 regarding these substances, as well as inhalants and marijuana.

Across the items presented in the table, respondents were most likely to have
heard of beer, wine, and cigarettes and least likely to have heard of wine coolers, liquor,
and marijuana. This consistency in the ordering of “Never Heard Of” substances
suggests that the respondents were answering truthfully. It is somewhat surprising that
cigarettes were more likely to elicit a “Never Heard Of” response than beer and wine, and
also that wine coolers were the least known substance. However, these responses may
reflect societal trends rather than untruthful answers. Smoking bans have made cigarettes
less prominent in public, and wine coolers are not as widely advertised as they were
several years ago. The fairly large percentage of “Never Heard Of” responses regarding
liquor in the lifetime and past year questions may be due to a lack of understanding of
what is meant by the term.

“Never Heard Of” responses were also compared to “Never Used” answers, given
that analyses of 1996 survey data indicated that some students tended to interchange
these two response options across questions. At issue was whether or not to count
discrepancies between them as inconsistent response patterns. The “NEVER HEARD”
columns in Table E2 contain the percentages of individuals who gave this response
regarding lifetime use (Item #12) and subsequently responded that they had “Never
Used” the substance. The “NEVER USED” columns present the percentages of students
who answered this to Item #12 and, in the later item, claimed that they had “Never Heard
Of” it.
As illustrated in Table E2, a greater percentage of discrepant responses in Items
#13 and #14 occurred among those who had first said that they had “Never Heard Of” a
substance, compared to an initial response of “Never Used.” The former were not
considered to be inconsistent responses for two reasons. First, it could be argued that,
after respondents read Item #12, they HAD now “heard” of the substance. Second, if
someone had truly “Never Heard Of” a substance, then it must also be the case that they
have “Never Used” it.

Responses of having “Never Used” the substance in their lifetime and subsequent
claims to have “Never Heard Of” it also were not defined as inconsistent response patterns.
For one, the occurrences of this pattern of responding were relatively small (less
than 2.0% for each of the items and substances depicted in Table E2). In addition, Items
#15, #16, and #18 did not include “Never Used” as a response option; perhaps
respondents perceived the “Never Heard Of” option as the one most consistent with their
previous response. Overall, there was again evidence that a small percentage of the
sample tended to use the two response options interchangeably, and it was decided to
focus on true inconsistencies—e.g., admission, then denial, of substance use.

Response Inconsistency. Inconsistent response patterns were identified by
cross-tabulating responses regarding lifetime use of each substance (Item #12) with
responses about: (i) the number of times the substance was used since school began in the
fall (#13); (ii) age of first use (#14); (iii) how many close friends use (#15); (iv) if they
had ever been offered the substance (#16); and (v) how dangerous it is for kids to use the
substance (#18). These items were asked of all substances. In addition, responses about
lifetime usage (#12) of beer, wine coolers, wine, and liquor were compared to the
respective answers about the number of times they had consumed two or more drinks at
one time during the past year (#21). Responses to lifetime use of inhalants (#12h) were
also compared to answers regarding the different forms of inhalants in Item #11.
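The cross-tabulation logic amounts to counting the two directions of disagreement between the lifetime-use item and a follow-up item. A minimal sketch follows, with invented response labels standing in for the survey's actual response options:

```python
# Sketch of the cross-tabulation step: compare each student's lifetime-use
# answer (Item #12) with a follow-up item and count the two directions of
# true inconsistency. The 'never'/'use' labels are hypothetical codes.
def inconsistency_counts(pairs):
    """pairs: (lifetime, followup) tuples."""
    never_then_use = sum(1 for lt, fu in pairs if lt == 'never' and fu == 'use')
    use_then_never = sum(1 for lt, fu in pairs if lt == 'use' and fu == 'never')
    return never_then_use, use_then_never

pairs = [('never', 'never'), ('never', 'use'), ('use', 'use'),
         ('use', 'never'), ('never', 'use')]
counts = inconsistency_counts(pairs)   # (2, 1)
```

Dividing each count by the total number of respondents gives percentages of the kind reported in Tables E4 and E5.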

Table E3 depicts the manner in which true inconsistency was defined for each
comparison. As can be seen, inconsistent answers included (i) initial responses of “No
Use/Never Heard” followed by indications of use or awareness of the substance; and (ii)
initial responses of use and subsequent answers of “No Use” or having “Never Heard Of”
the substance. It is also clear from Table E3 that the number of inconsistent response
patterns varied across the items that were compared to Question #12 responses.
Consequently, it is not possible to determine which item was the greatest source of
inconsistency.

Questions Asked About All Substances. As mentioned above, Questions #12-#16,
as well as #18, were asked about all substances. The percentages of responses within
this set of questions that were inconsistent with the answer given to #12 appear in Table
E4. In general, there was minimal inconsistency in response patterns. Regardless of
substance, less than one percent of the total sample responded inconsistently for four of
the five questions. A greater percentage of people provided answers to Item #14 that
were inconsistent with those given to #12. However, the number of possible inconsistent
response patterns is greatest for this item. Moreover, the percentages were still small,
ranging from .47% (marijuana) to 2.77% (beer), with an approximate median of 1.63%.

In order to determine the direction of the inconsistency, the percentages in Table
E4 were broken down into two parts. The columns labelled “NEVER” in Table E5 give
the percentages of respondents who reported to the lifetime use question that they had
“Never Heard Of” (or, where relevant, “Never Used”) a substance and
subsequently indicated usage. The Table E5 columns labelled “SOME” contain the
percentages of students who reported some lifetime use, but then responded that they had
percentages of students who reported some lifetime use, but then responded that they had
“Never Heard Of” (or “Never Used” when this response option existed) the substance.
It could be expected that respondents who were answering untruthfully would
most likely be those who initially admitted usage, then denied it when questioned about a
substance in more detail. However, with the exception of Item #14, the percentages of
inconsistencies following this direction did not exceed .17% and were typically well
below this number. In fact, those who responded inconsistently were more likely to have
initially reported no usage. If answers to the first item about a substance can be assumed
to be most valid, then the sources of inconsistency are most likely inattention to the
survey task or misinterpretation of the question.

Additional Alcohol Items (#20 a-d). Compared to the questions asked of all
substances, the items asking about the number of times an individual had consumed two
or more alcoholic beverages in one sitting showed greater inconsistency, but the
percentages of discrepant responses were only approximately 2.0-2.5% (see Table E6).
The greatest source of inconsistency was individuals who had responded that they had
“Never Used” the beverage in the lifetime question and subsequently responded that they
had consumed two or more beverages during one sitting once during the past year. This
could reflect inattention due to fatigue, given that Item #20 occurs very near the end of
the survey. Another possible explanation for this inconsistency is that rather than
interpreting the “Never Used” response option in Item #12 as meaning “Never Tried,”
respondents are more loosely perceiving the term as meaning “Never Regularly Used.”
Thus, they would have initially reported that they were not a regular “user,” but had
consumed two or more alcoholic beverages during one sitting.

Additional Inhalant Items (#11 a-f). Inconsistency in response patterns occurred
most frequently for inhalants. Table E7 illustrates the percentage of inconsistent
responses resulting from comparing each form of inhalant in Item #11 to responses about
lifetime use of inhalants in #12h. The percentage of inconsistent respondents was
between 5.0% and 6.0% (See Column 1) for most substances, with inconsistency in
“Paint Thinner” responses exceeding this range (6.06%) and “Whiteout” responses below
this range (4.45%). In contrast to the questions asked about all substances, those who
responded inconsistently were more likely to have reported substance use in Item #11
and subsequently responded that they had “Never Heard Of” or “Never Used” inhalants
(see Column 3 relative to Column 2 in Table E7).

While these inconsistencies could possibly be attributed to untruthful responses to
one of these items, there are again other possible explanations. One is that, despite the
fact that the substances in Question #11 were explicitly defined as inhalants, some
elementary students still did not understand the meaning of the global term. Another
possible explanation is that respondents were not reading the items with close attention.
Missing Data. The average percentage of respondents who failed to give any
response across all items was 7.48%, with a standard deviation equal to 3.41. The extent
of missing data ranged from .22% (Item #1) to 14.71% (Item #19d).

The percentages of responses missing across items were further examined in order
to determine the extent to which the pattern of missing data was systematic rather than
random. As might be expected due to fatigue and/or boredom, the percentage of missing
responses tended to increase as the item position approached the end of the questionnaire.
The relationship between the percent of missing data on an item and the item’s ranked
position within the survey is graphically illustrated in the Figure 1 scatterplot. The
Spearman rho correlation coefficient between these two variables equalled .499 and was
statistically significant (α = .01; two-tailed test).
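The rank correlation reported above can be computed directly from ranks, without a statistics package. The sketch below uses invented missing-data percentages rather than the survey's actual values, so the resulting coefficient is illustrative only (and it omits tie handling, which the survey's ranked data may have required):

```python
# Sketch of the missing-data check: Spearman's rho between an item's position
# in the questionnaire and its percentage of missing responses. Values are
# invented, not the survey's actual missing-data percentages.
def ranks(xs):
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    for rank, i in enumerate(order, start=1):
        r[i] = float(rank)
    return r  # note: no tie handling in this sketch

def spearman_rho(x, y):
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

positions = [1, 2, 3, 4, 5, 6]
pct_missing = [0.2, 1.1, 0.9, 3.4, 2.8, 5.0]   # invented values
rho = spearman_rho(positions, pct_missing)
```

A positive rho, as here, indicates that items later in the questionnaire tend to have more missing responses, the pattern described in the text.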

However, the percentage of missing data on an item was not entirely explained by
its position within the questionnaire. Figure 2 is a line graph of the
ranked position of an item plotted against the respective percentage of missing data.
Arrows inserted within this chart indicate the points at which an item that presents a
sequence of sub-items begins—e.g., those that ask students to respond about each
substance listed. An asterisk indicates the sub-item that concludes the list. As Figure 2
illustrates, respondents appear to be willing to begin each of these questions, but become
bored or fatigued as they move through these multi-part items. Thus, while there is a
relationship between the percentage of missing data on an item and its position in the
survey, there is also a persistent increase of missing data within multi-part items.

This pattern could be particularly problematic for the substance use items, given
that these substances are listed in the same order for each of these types of questions.
Respondents may be under-reporting their use of harsher substances due to fatigue.
Alternatively, they may have responded earlier in the questionnaire that they do not use a
particular substance and then omitted all subsequent questions about it. Unfortunately,
there is no way to determine which—if either—of these explanations validly accounts for
the observed pattern of missing data.

SECONDARY SCHOOL SURVEY INSTRUMENT

The secondary school version of the 1998 Texas School Survey of Substance Use
was administered to over 150,000 students in grades 7-12. Almost twenty percent of the
total sample were in each of grades 7 (19.7%), 8 (18.8%), and 9 (19.7%). The
percentages in grades 10 and 11 were approximately equal (15.6% and 14.2%,
respectively). Grade 12 was represented by 11.8% of the total sample.

As with the elementary school sample, blank surveys, surveys that could not be
matched by district or grade, and those from individuals identified as “exaggerators”
were omitted from this database. The total number of usable surveys was therefore
158,616. The definition of a secondary school “exaggerated” response is first described
below. Response inconsistency and missing data patterns are then discussed.

Exaggerated Responses. “Exaggerators” were defined as individuals who
responded in any of the following ways: (i) use of the nonexistent drug COSMA; (ii)
excessive use of multiple alcoholic beverages; or (iii) use of five or more drugs
(excluding steroids) 11+ times. The percentage of secondary school students who
exaggerated (4.3%) was more than twice that of elementary students.

Response Inconsistency. Inconsistent response patterns were again identified by
cross-tabulating responses regarding most recent use of each substance with several other
items. Within initial total sample analyses, responses to Questions #21, #22, and #23
were cross-tabulated with answers about the respective substances listed in #17 or #18.
Because the list of substances about which survey participants were queried in Items #17
and #18 did not include inhalants, all comparisons involving this type of drug were based
upon students’ responses across Items #16a-k. (The most recent use of any of the eleven
specific inhalants was considered the student’s initial overall inhalant response.) In
addition, most recent usage responses were compared to those for past 30 day use for the
substances listed in Item #19. (The former responses were pooled across tobacco and
alcohol substances, given that these more global terms appeared in #19). Finally, initial
answers regarding cigarette and alcoholic beverage use were compared to additional
questions asked about these particular substances.

Table S1, which includes the list of questions for which comparisons were made,
depicts the manner in which response inconsistency was defined for each comparison.
As with the elementary version of the survey, responses counted as inconsistent included
both: (i) initial responses to #17 and #18 of having “Never Heard Of/Never Used” the
substance followed by indications of use; and (ii) admission of having used a substance
even one time with subsequent answers of “Never Heard Of/Never Used” that particular
alcoholic beverage or drug. Data were first examined for the total secondary school
sample, followed by grade and ethnic subsamples.

Total Secondary School Sample. The total percentages of individuals who
provided subsequent responses that were inconsistent with their first response about
recent usage of a substance are given in Table S2. Several of the largest entries in this
table are likely due to the fact that inconsistency for some substances was calculated by
pooling across response options. As mentioned above, this was necessary because of
differences across questions in using specific versus global substance labels. For
example, the greatest amount of inconsistency across subsequent questions tended to
involve inhalant responses, but recall that a respondent’s initial answer was pooled across
several substances. It is possible that individuals did not recall all eleven of their
previous responses. Alternatively, respondents may have tried a small number of
substances, but do not consider themselves to be regular users. Finally, participants may
have responded to subsequent questions regarding inhalants with respect to only those
substances listed as examples of inhalants (i.e., whiteout, glue, gas).

In addition to those for inhalants, the percentages of responses regarding past 30
day use that were inconsistent with earlier reports of most recent use were relatively high
for tobacco (5.31%) and alcohol (4.78%). Again, as mentioned, comparisons for those
two categories of substances were based upon responses to two or more substances in
Item #17. For example, an individual who said that either beer, wine coolers, wine, or
liquor had been consumed in the past month—but that alcohol had not—was considered
to have responded inconsistently. The percentages given for “tobacco” and “alcohol”
may therefore be somewhat inflated, given possible differences in interpretation between
these global terms and the more specific one used in Item #17.

However, not all of the relatively large percentages in Table S2 can be attributed
to the use of different labels for substances across questions. Particularly problematic are
the entries for marijuana in Column 1 (6.20%) and the percentages for the social
substances when most recent use responses were compared to those regarding frequency
of use (Item #23). Values for the latter ranged from 4.67% (smokeless tobacco) to 8.43%
(wine). Apparent inconsistencies were thus further examined in order to determine their
direction and nature.

Table S3 displays the percentages of inconsistent responses divided according to
direction. More specifically, participants who first said that they had “Never Heard
Of/Never Used” a substance and then indicated use (see NEVER Columns) were
distinguished from those who initially reported that the substance had been used at least
once in their lifetime [or in the past 30 days from Items #19 and #24] and subsequently
answered inconsistently (see Columns labelled SOME). As illustrated in this table,
inconsistent responses regarding social substances arising from answers to Item #22
tended to be more frequently given by students who reported never using a substance, but
then gave an age of first use. It is possible that survey participants were interpreting
“use” differently across the two questions. When responding to the item about most
recent use (#17/#18), they appear to have been responding with respect to regularity of
use. In the latter question, “use” seems to have been interpreted as having ever tried a
social substance. The direction of inconsistency in responses to Question #22 for
stronger substances tended to be slightly, but not uniformly, in the opposite direction
from that described above.

The direction of the inconsistent responses for the remaining items in Table S3
tended to be initial reports of usage followed by denial. This pattern held for all of the
harsher substances (i.e., those listed below alcohol), as well as both tobacco products,
when responses to Item #23 were analyzed. Within each alcoholic beverage, percentages
broken down into the two directions of inconsistency were more comparable across Items
#23 and #25. However, relatively large percentages of participants who had initially
admitted use of a particular alcoholic beverage responded in #26 that they “Never Drink
this Beverage.” This inconsistency could have been due to unwillingness to admit
drinking 5 or more beverages at one time. However, it was apparent during data analysis
that many of the participants who responded inconsistently were those who had reported
some use during their lifetime and—rather than responding in #26 that they “Never Drink
5+” beverages at one time—said that they “Never Drink this Beverage.”

In fact, across the questions in the survey, relatively substantial numbers of
“lifetime” users subsequently reported no use of the respective substance. It could be
argued that such apparent inconsistencies could again be due to respondents’
interpretations of response options. That is, individuals who have tried a substance only
one or a few times do not consider themselves regular “users” and therefore say that they
“Never Use.” The entries in Table S4 give the percentages of inconsistent responses for
selected items when such lifetime/subsequent no use responses were omitted.
Comparison of this table with Table S2 reveals that, across items and substances, the
majority of the entries substantially decrease in value.

It was also apparent during data analysis that there was some tendency among
those who initially reported no use of a substance to later report usage at the lowest level
listed as a response option—e.g., “Less than Once a Year” in Item #23. This could also
be due to interpreting the term “used” within the context of regularity of use. Individuals
who do not regularly “use” respond in #17/#18 that they never use, even though they
have, on rare occasions, experimented with a substance. Table S5 further omits from the
values in S4 (for Items #23, #25, and #26) students who responded in this manner. The
percentages of inconsistent responses are relatively low—ranging from .26% (steroids) to
3.92% (wine).
Finally, it is worthwhile to specifically consider the consistency of
students who initially reported past 30 day use—i.e., those who are likely to be most
actively engaged in substance use. Returning to the items in Table S3 that specifically
address 30 day use (i.e., #19 and #24), the percentage of respondents who initially
reported use and subsequently responded inconsistently appears to be relatively high for
marijuana (5.64%). However, included in this figure are individuals who had said that
their most recent use was “Lifetime” or “Since School Began in the Fall,” and later
indicated past 30 day usage.

Table S6 focuses exclusively upon participants who initially reported 30 day use
and subsequently indicated no use. In general, the percentages of inconsistent responses
were very low. Only .56% of the participants who used marijuana in the past 30 days
denied usage when specifically questioned about the past month. The largest entry in the
table was for cigarettes, with 3.23% inconsistently responding to Item #24 relative to
Item #17.

Grade Comparisons. Inconsistencies in responses—relative to Item #18—were
examined across grades for Item #23 (frequency of use) and #19 (past 30 day use), with
the latter focusing upon marijuana, cocaine, and crack. Table S7 displays inconsistencies
regarding past 30 day use. Across grades, respondents were more likely to respond
inconsistently about marijuana use than about cocaine or crack. The columns labelled
“30” give the percentages of participants who initially reported, and subsequently denied,
use in the past 30 days. These percentages are small. The greatest number of inconsistent responses, across grades and substances, came from individuals who first
reported use in their lifetime or since school began and then also reported use in the past
30 days in Item #19 (see columns labelled “No 30”). Students in grades 9 and 10 were
somewhat more likely to respond inconsistently when specifically questioned about past
30 day use.
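A grade-level tally of this kind of inconsistency (reporting past 30 day use in Item #18 but denying it in Item #19) can be sketched as below. The record layout (grade, item18_past30, item19_past30) is a hypothetical simplification of the actual data file, not the project's own code:

```python
from collections import defaultdict

def inconsistency_by_grade(records):
    """Percent of respondents per grade who initially reported past 30 day
    use (Item #18) but denied it when asked directly (Item #19)."""
    totals = defaultdict(int)
    flagged = defaultdict(int)
    for r in records:
        totals[r["grade"]] += 1
        if r["item18_past30"] and not r["item19_past30"]:
            flagged[r["grade"]] += 1
    return {g: round(100.0 * flagged[g] / totals[g], 2) for g in totals}

sample = [
    {"grade": 9, "item18_past30": True, "item19_past30": False},    # inconsistent
    {"grade": 9, "item18_past30": True, "item19_past30": True},     # consistent
    {"grade": 10, "item18_past30": False, "item19_past30": False},  # consistent
    {"grade": 10, "item18_past30": False, "item19_past30": False},  # consistent
]
print(inconsistency_by_grade(sample))  # → {9: 50.0, 10: 0.0}
```

The same tally with an ethnicity field in place of grade would produce the “30” columns of Table S9.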
Table S8 gives the percentages resulting from comparisons between Items
#17/#18 and Item #23. In general—across grades and substances—individuals were
more likely to initially report usage and then deny it than vice versa. While the
percentages of inconsistencies remain fairly constant across grades for most of the social
substances, the values for marijuana increase with grade level.

Ethnic Comparisons. The analyses described above for the six grade levels were
also conducted across ethnic groups. As shown in Table S9, very small percentages of
respondents first reported past 30 day use and then said “Never Heard Of/Used” when
directly asked about use in the past month (see “30” columns). In general, even smaller
numbers initially reported no use and then subsequently indicated that they had used the
substance at least once in the previous 30 days. The greatest instances of inconsistency
again arose when respondents had initially reported use during their lifetime or since
school began and then also said that they had used a substance at least once in the past
month.
As illustrated in both Table S9 and Table S10, Asian Americans and Caucasians
tended—in general—to respond more consistently than did respondents of the other
ethnicities. African and Native Americans displayed relatively large percentages of
inconsistent responses with respect to beer. Across ethnic groups, inconsistencies were
more common among the social substances.

Missing Data. The average percentage of respondents who failed to give any
response across all items was 10.25%, with a standard deviation equal to 6.12. The
extent of missing data ranged from .37% (Item #1) to 30.25% (Item #29d).

As with the elementary version of the survey, the percent of students who did not
provide a response tended to increase as the position of the item approached the end of
the questionnaire. The Spearman rank-order correlation was .45, a value statistically
significant at the .01 level (two-tailed test). However, as illustrated in Figure 3, the
relationship between missingness and item position was not a strictly linear one.
Secondary students were less likely than elementary students to routinely omit substances
low on the list of multi-part items.
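A Spearman rank-order correlation of this kind can be reproduced from the per-item missing-data percentages. The sketch below is a minimal stdlib implementation (assigning average ranks to ties); the sample values are illustrative only, not the survey's actual per-item figures:

```python
def rank(values):
    """Rank values from 1..n, assigning tied values their average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2.0 + 1.0  # average of tied positions, 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman rho: the Pearson correlation of the two rank vectors."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

positions = [1, 2, 3, 4, 5, 6]
pct_missing = [0.4, 2.1, 1.5, 5.0, 9.8, 30.3]  # illustrative values only
print(round(spearman(positions, pct_missing), 2))  # → 0.94
```

Because it operates on ranks, the coefficient captures any monotone trend in missingness across item positions, which is why a nonlinear relationship such as the one in Figure 3 can still yield a sizable value.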

In addition, there were groups of questions that students simply skipped. Of the three highest peaks in Figure 3, the first (reading left to right) reflects
the set of questions regarding extracurricular activities. The second group of questions
(for which the percentages of missing data were greatest overall) pertained to methods of
ingesting marijuana. The final noticeably high point in the graph reflects the percentages
of missing data on the questions about sources of information about drugs. With the
possible exception of the items about marijuana ingestion, the questions displaying the greatest amount of missing data are fairly innocuous. Perhaps students skipped them because they considered them irrelevant.

Finally, the pattern of missing data within the current survey was compared to that
in the 1996 survey. The average percent missing in 1996 (Mean=9.06%; SD=5.05) was
comparable to that for the 1998 survey. However, percentages of missing responses in
the previous survey did not reach values as extreme as those in 1998, with a maximum
value of 20.8% (Item #29f). As can be seen by comparing Figure 4 to Figure 3, there
tended to be fewer missing values throughout the 1996 survey. [Note: The break in the 1996 line chart occurs where values were unavailable.]

For the most part, the groups of items with the largest percentages of missing data
tended to be similar for both surveys. Among the highest peaks in Figure 4 are again
those for extracurricular activities (Items ranked #12-#22) and sources from which drug
information had been obtained (Items near the end of the survey). However, there were
also relatively high percentages of missing data in 1996 for questions addressing
substances used by close friends and the danger of alcohol and drugs.

There was a slightly higher correlation between the percentage of missing data
and item position within the survey in 1996. The value of the Spearman rank-order
correlation coefficient between these two variables was .66. This value was statistically
significant at the .01 level (two-tailed test).

SUMMARY
In sum, the vast majority of students in both elementary and secondary schools
appeared to have provided valid responses to the 1998 Texas School Survey of Substance
Use. Few students were classified as giving exaggerated responses, and inconsistency in responding was most likely due to inattention to the survey task, misinterpretation of the question, or fatigue. The major specific findings of the item
analyses for each survey are listed below.
Elementary

 Only 1.8% were classified as exaggerators.
 Some students appear to use “Never Heard Of” and “Never Used” response
options interchangeably.
 When items asking about all substances were examined, generally less than
1.00% of the responses were inconsistent with initial reports of most recent
usage.
 Those who responded inconsistently were more likely to have initially
reported no use of a substance and subsequently reported use than vice versa.
 Questions were somewhat more likely to be left unanswered as their position
in the survey approached the end.
 Participants were willing to begin an item that contained multiple parts, but
routinely omitted substances that came late in the list.

Secondary

 4.3% of the respondents were classified as exaggerators.


 The largest percentages of inconsistent responses were most likely due to the
use of different labels for substances (i.e., global versus specific) across
questions.
 Other inconsistencies within the total secondary school sample may be
attributable to differential interpretations of the term “use.” While in some
questions students appear to be responding with respect to regularity of use,
they are at other times giving answers that refer to whether or not they have
ever tried the substance.
 Very few participants who initially report use of a substance in the past 30
days subsequently deny usage when specifically questioned about the past 30
days.
 In general, across grades, individuals were more likely to report usage and
then deny it than vice versa.
 Overall, Asian Americans and Caucasians tended to respond more consistently
than did respondents of other ethnic backgrounds.
 Participants were somewhat more likely to leave questions unanswered later
in the survey.
 Some groups of questions were skipped by fairly large percentages of
respondents.
