Data Collection Methods (overview)
Documents: Historical, Literature review, Meta-analysis, Diaries, Content Analysis, Secondary Data (data mining)
Observations: Interpretive, Ethnographic, Participant observer, Case study
Survey: Questionnaire, Interview, Standardized Scales/Instruments

grounded theory, which uses multi-staged data collection;
phenomenological studies, which study subjects over a period of time through developing relationships with them and reporting findings based on research "experiences"; and
case studies, which use various data to investigate the subject over time and by activity.

Each research method has its strengths and weaknesses. When designing a research study, it is important to decide what outcome (data) the study will produce, and then select the best methodology to produce that desired information.

Data Collection Techniques
There are two sources of data. Primary data collection uses surveys, experiments or direct observations. Secondary data collection may be conducted by collecting information from a diverse source of documents or electronically stored information. The U.S. census and market studies are common sources of secondary data. This is also referred to as "data mining."

Key Data Collection Techniques
Surveys
Questionnaires
Panel Questionnaire Designs
Interviews
Experimental Treatments

3. The time when the observations or measurements of the dependent variable occur; and
4. Which groups are measured and how.

Treatment Group
The portion of a sample or population that is exposed to a manipulation of the independent variable is known as the treatment group. For example, youth who enroll and participate in recreation programs are the treatment group, and the group to which no recreation services are provided constitutes the control group.

Validity Issues
There are two primary criteria for evaluating the validity of an experimental design.
Internal validity determines whether the independent variable made a difference in the study: can a cause-and-effect relationship be observed? To achieve internal validity, the researcher must design and conduct the study so that only the independent variable can be the cause of the results (Cozby, 1993).
External validity refers to the extent to which findings can be generalized or be considered representative of the population.

Confounding Errors
Confounding errors are conditions that may confuse the effect of the independent variable with that of some other variable(s):
1. Premeasurement and interaction errors
2. Maturation errors
3. History errors
4. Instrumentation errors
5. Selection bias errors
6. Mortality errors
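Random assignment of subjects to the treatment and control groups described above is the core safeguard for internal validity. A minimal sketch in Python (the participant roster is invented for illustration):

```python
import random

random.seed(11)  # fixed seed so the assignment is repeatable

# Hypothetical roster of youth enrolled in the study.
youth = [f"participant_{i}" for i in range(1, 21)]

# Randomly assign each participant to the treatment group (recreation
# program) or the control group (no recreation services).
shuffled = youth[:]
random.shuffle(shuffled)
treatment = shuffled[:10]   # exposed to the independent variable
control = shuffled[10:]     # not exposed; baseline for comparison

print(len(treatment), len(control))
```

Because chance alone decides group membership, confounding variables are, on average, spread evenly across both groups.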

Writing an Introduction
In any research proposal the researcher should avoid the word "investigation." This word is perceived in a negative sense. The key components of a good introduction include:
1. a description of the purpose of the study,
2. identification of any sponsoring agency,
3. a statement regarding confidentiality,
4. a description of how the sample or respondents were selected, and
5. an explanation of the results and their applications.

EXPERIMENTAL DESIGNS
1. True Designs
2. Quasi Designs
3. Ex Post Facto Designs
Other methods: Experimental Treatments; Other Field Methods (Nominal Group Technique, Delphi); Multimethods Approach (a combination of the methods shown).
Source: Isaac & Michael, 1985; Leedy, 1985; Dandekar, 1988; Thomas & Nelson, 1990.

Experimental Treatments
Experimental designs are the basis of statistical significance. An example of the fundamentals of an experimental design is shown below. A researcher is interested in the effect of an outdoor recreation program (the independent variable, experimental treatment, or intervention variable) on behaviors (dependent or outcome variables) of youth-at-risk. In this example, the independent variable (outdoor recreation program) is expected to effect a change in the dependent variable. Even with a well-designed study, a question remains: how can the researcher be confident that the changes in behavior, if any, were caused by the outdoor recreation program, and not some other intervening or extraneous variable? An experimental design does not eliminate intervening or extraneous variables, but it attempts to account for their effects.

True Designs - Five Basic Steps to Experimental Research Design
1. Survey the literature for current research related to your study.
2. Define the problem, formulate a hypothesis, define basic terms and variables, and operationalize variables.
3. Develop a research plan:
a. Identify confounding/mediating variables that may contaminate the experiment, and develop methods to control or minimize them.
b. Select a research design (see Chapter 3).
c. Randomly select subjects and randomly assign them to groups.
d. Validate all instruments used.
e. Develop data collection procedures, conduct a pilot study, and refine the instrument.
f. State the null and alternative hypotheses and set the statistical significance level of the study.
4. Conduct the research experiment(s).
5. Analyze all data, conduct appropriate statistical tests and report results.

Qualitative and Quantitative Research Methodologies
Quantitative research methods include:
experiments, with random treatment assignments, and quasi experiments using nonrandomized treatments; and
surveys, which are cross-sectional or longitudinal.
Qualitative research methods include:
ethnographies, which are observations of groups;

Experimental Control
Experimental control is associated with four primary factors (Huck, Cormier, & Bounds, 1974):
1. The random assignment of individual subjects to comparison groups;
2. The extent to which the independent variable can be manipulated by the researcher;

Quasi Designs

The primary difference between true designs and quasi designs is that quasi designs do not use random assignment into treatment or control groups, since this design is used in existing, naturally occurring settings. Groups are given pretests, then one group is given a treatment, and then both groups are given a post-test. This creates a continuous question of internal and external validity, since the subjects are self-selected. The steps used in a quasi design are the same as true designs.

Ex Post Facto Designs
An ex post facto design will determine which variables discriminate between subject groups.

Steps in an Ex Post Facto Design
1. Formulate the research problem, including identification of factors that may influence dependent variable(s).
2. Identify alternate hypotheses that may explain the relationships.
3. Identify and select subject groups.
4. Collect and analyze data.
Ex post facto studies cannot prove causation, but may provide insight into understanding of phenomena.

OTHER FIELD METHODS/GROUP TECHNIQUES

Nominal Group Technique (NGT)
The NGT is a group discussion structuring technique. It is useful for providing a focused effort on topics. The NGT provides a method to identify issues of concern to special interest groups or the public at large. Ewert (1990) noted that the NGT is a collective decision-making technique for use in park and recreation planning and management. The NGT is used to obtain insight into group issues, behaviors and future research needs.

Five Steps of the NGT
1. Members of the group identify their individual ideas in writing, without any group discussion;
2. Each member lists his/her own ideas and then rank-orders them, again without any group discussion;
3. A facilitator gives each participant an opportunity to state his/her ideas (one item per person at a time, in round-robin fashion) until all ideas are exhausted;
4. As a group, participants discuss and consolidate ideas into a list; and
5. Finally, members vote to select priority ideas.

The final list of ideas becomes the focus of further research and discussion. These ideas can also be used to generate a work plan for a formal strategic planning process, a basis for a survey or interview, or the development of a scale. Source: Mitra & Lankford, 1999.

Delphi Method
The Delphi method was developed to structure discussions and summarize options from a selected group in order to: avoid meetings; collect information/expertise from individuals spread out over a large geographic area; and save time through the elimination of direct contact. Although the data may prove to be valuable, the collection process is very time consuming. When time is available and respondents are willing to be queried over a period of time, the technique can be very powerful in identifying trends and predicting future events.

The technique requires a series of questionnaires and feedback reports to a group of individuals. Each series is analyzed and the instrument/statements are revised to reflect the responses of the group. A new questionnaire is prepared that includes the new material, and the process is repeated until a consensus is reached. The reading below is a research study that used the Delphi technique and content analysis to develop a national professional certification program.

advisory) may have one or more of these features, but not in the same combination as those of focus group interviews.

Behavior/Cognitive Mapping
Cognitive and spatial mapping information provides a spatial map of: current recreation use, the most significant recreation resources, and the approximate number of visitors to the recreation areas. All types of recreation activities and travel involve some level of environmental cognition, because people must identify and locate recreation destinations and attractions. Cognitive mapping allows recreation resource managers the opportunity to identify where users and visitors perceive the best recreation areas to be located. It is important to understand user perceptions in order to manage intensive use areas in terms of maintenance, supervision, budgeting, policy development and planning.

Cognitive maps grid the research site into zones. The zones identify existing geographic, climatic, landscape, marine resources, and recreation sites. The grids allow respondents to indicate primary recreation sites, and then a composite is developed to identify high impact areas. Researchers collect data at recreation areas (beach, campground, marina, trailhead, etc.) by interviewing visitors and recreationists. During the data collection process, random sites, days, times, and respondents (every nth) should be chosen to increase the reliability and generalizability of the data.

Observations
Observational research is used for studying nonverbal behaviors (gestures, activities, social groupings, etc.). Sommer & Sommer (1986) developed the list shown below to assist in observation research.
1. Specify the question(s) of interest (reason for doing the study).
2. Are the observational categories clearly described? What is being observed and why?
3. Design the measurement instruments (checklists, categories, coding systems, etc.).
4. Is the study designed so that it will be valid (i.e., does it measure what it is supposed to measure, and does it have some generalizability)?
5. Train observers in the use of the instruments and how to conduct observational research.
6. Do a pilot test to (a) test the actual observation procedure and (b) check the reliability of the categories of observation, using at least two independent observers.
7. Revise the procedure and instruments in light of the pilot test results. If substantial changes are made to the instrument, run another pilot test to make sure the changes will work under field conditions.
8. Collect, compile, and analyze the data and interpret the results.

Casual observation is normally done like unstructured interviews. During the early stages of a research project, casual

Sample Research Article: Job Competency Analyses of Entry-Level Resort and Commercial Recreation Professionals

Focus Groups
Richard Krueger (1988) describes the focus group as a special type of group in terms of purpose, size, composition, and procedures. A focus group is typically composed of seven to twelve participants who are unfamiliar with each other, and it is conducted by a trained interviewer. These participants are selected because they have certain characteristics in common that relate to the topic of the focus group. The researcher creates a permissive environment in the focus group that nurtures different perceptions and points of view, without pressuring participants to vote, plan, or reach consensus. The group discussion is conducted several times with similar types of participants to identify trends and patterns in perceptions. Careful and systematic analysis of the discussions provides clues and insights as to how a product, service, or opportunity is perceived.

A focus group can be defined as a carefully planned discussion designed to obtain perceptions on a defined area of interest in a permissive, nonthreatening environment. It is conducted with approximately seven to twelve people by a skilled interviewer. The discussion is relaxed, comfortable, and often enjoyable for participants as they share their ideas and perceptions. Group members influence each other by responding to ideas and comments in the discussion.

CHARACTERISTICS OF FOCUS GROUPS
Focus group interviews typically follow these steps:
1. Identify the target market (people who possess certain characteristics);
2. Provide a short introduction and background on the issue to be discussed;
3. Have focus group members write their responses to the issue(s);
4. Facilitate group discussion;
5. Provide a summary of the focus group issues at the end of the meeting.

Other types of group processes used in human services (delphic, nominal, planning, therapeutic, sensitivity, or

observation allows the researcher(s) to observe subjects prior to designing questionnaires and/or interview formats.

Types of Observation Studies
Participant observer
Windshield surveys
Case study

Documents (also called Secondary Data or Data Mining)
Data mining is commonly used in both qualitative and quantitative research. Secondary data provides a framework for the research project, development of research question(s), and validation of study findings. Frequently used sources of secondary data are:

U.S. Census - Extensive demographic data including age, sex, distribution, education, ethnicity, migration patterns, service industry, etc.
Bureau of Labor Statistics - Extensive information on such things as employment, unemployment, types of employment, income, etc.
National Center for Health & Information / State Department of Health - Vital rates such as births, deaths, marriage and divorce rates, health, etc.
State Employment Departments - Number employed by industry, projected levels of employment growth, available job skills and skill shortages.
Federal Land Management - National parks, historic sites, scenic areas, forests by acres, budget and visitation rates.
State Highway Departments - Miles and condition of highways, bike lanes, and streets; capital and maintenance costs of highways.
Law Enforcement Agency - Number and types of motor vehicles, types of crimes and violations, number of police officers by county and city, law enforcement.
Outdoor Recreation Agency/Dept. - Number and type of parks, number and type of campgrounds, location, and rates for parks, lakes, rivers, etc.
Welfare/Human Services Department - Number of families on various types of assistance such as Aid to Families with Dependent Children, Social Security, and SSI; number of alcohol and drug abuse counselors; number of family counselors; number and cases of child abuse, spouse abuse, desertions, and child adoption rates.
Newspapers - Scanning local newspapers is an excellent means to become better acquainted with a community and its principal actors, as well as the issues that have been of greatest local concern.

Content Analysis
Content analysis systematically describes the form or content of written and/or spoken material. It is used to quantitatively study mass media. The technique uses secondary data and is considered unobtrusive research. The first step is to select the media to be studied and the research topic. Then develop a classification system to record the information. The technique can use trained judges, or a computer program can be used to sort the data, to increase the reliability of the process.
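As an illustration of the computer-assisted sorting step just described (the classification system and articles below are invented, not from the text), a program can count how often each category of the classification system appears in a set of articles:

```python
from collections import Counter
import re

# Hypothetical classification system: each category is defined by the
# keywords a coder (or program) looks for in the text.
CATEGORIES = {
    "economy": {"budget", "funding", "cost", "revenue"},
    "recreation": {"park", "trail", "playground", "campground"},
}

def classify(article: str) -> Counter:
    """Count how many words in the article fall into each category."""
    words = re.findall(r"[a-z]+", article.lower())
    counts = Counter()
    for word in words:
        for category, keywords in CATEGORIES.items():
            if word in keywords:
                counts[category] += 1
    return counts

articles = [
    "The city budget cut park funding this year.",
    "A new trail and campground opened near the lake.",
]
totals = Counter()
for article in articles:
    totals += classify(article)
print(totals)  # category frequencies across the sample of articles
```

Automating the counting in this way removes coder fatigue from the tally itself, though humans must still design and validate the category keywords.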

Content analysis is a tedious process due to the requirement that each data source be analyzed along a number of dimensions. It may be inductive (identifying themes and patterns) or deductive (quantifying frequencies of data). The results are descriptive, but will also indicate trends or issues of interest. The reading below is a research study that used the Delphi technique and content analysis to develop a national professional certification program.

Sample Research Article: Job Competency Analyses of Entry-Level Resort and Commercial Recreation Professionals

Meta-Analysis
Meta-analysis combines the results of the studies being reviewed. It utilizes statistical techniques to estimate the strength of a given set of findings across many different studies. This allows the creation of a context from which future research can emerge, and it determines the reliability of a finding by examining results from many different studies. Researchers analyze the methods used in previous studies, and collectively quantify the findings of the studies. Meta-analysis findings form a basis for establishing new theories, models and concepts.

Thomas and Nelson (1990) detail the steps of meta-analysis:
1. Identification of the research problem.
2. Conduct of a literature review of identified studies to determine inclusion or exclusion.
3. A careful reading and evaluation to identify and code important study characteristics.
4. Calculation of the effect size. The effect size is the mean of the experimental group minus the mean of the control group, divided by the standard deviation of the control group. The notion is to calculate the effect size across a number of studies to determine the relevance of the test, treatment, or method.
5. Reporting of the findings and conclusions.

Historical Research
Historical research in leisure studies may focus on: biographies of park and recreation professionals (Joseph Lee, Jane Addams, etc.), public, non-profit and private institutions (public parks and recreation, federal land management agencies), professional movements (playgrounds, leisure education), and related concepts (professionalism, certification and licensure, play). Historical research is also referred to as analytical research.
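Step 4 of the meta-analysis procedure above defines the effect size as the experimental-group mean minus the control-group mean, divided by the control group's standard deviation. A minimal sketch of that calculation, pooled across studies (all group scores below are invented for illustration):

```python
import statistics

def effect_size(experimental: list[float], control: list[float]) -> float:
    """Effect size as defined in step 4: (mean_E - mean_C) / sd_C."""
    return (statistics.mean(experimental) - statistics.mean(control)) / statistics.stdev(control)

# Hypothetical treatment/control scores from three studies.
studies = [
    ([14.0, 16.0, 15.0, 17.0], [12.0, 13.0, 11.0, 12.0]),
    ([22.0, 25.0, 24.0],       [20.0, 21.0, 19.0]),
    ([8.0, 9.0, 10.0, 9.0],    [7.0, 8.0, 6.0, 7.0]),
]
effects = [effect_size(e, c) for e, c in studies]
mean_effect = statistics.mean(effects)  # pooled across the three studies
print(round(mean_effect, 2))
```

Averaging the per-study effect sizes is what lets the meta-analyst compare treatments measured on different scales.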
Common methodological characteristics include a research topic that addresses past events, review of primary and secondary data, techniques of criticism for historical searches and evaluation of the information, and synthesis and explanation of findings. Historical studies attempt to provide information and understanding of past historical, legal, and policy events. Five basic procedures common to the conduct of historical research were identified by McMillan & Schumacher (1984).

They provide a systematic approach to the process of historical research.

Step 1: Define the problem, asking pertinent questions such as: Is the historical method appropriate? Are pertinent data available? Will the findings be significant in the leisure services field?

Step 2: Develop the research hypothesis (if necessary) and research objectives to provide a framework for the conduct of the research. Research questions focus on events (who, what, when, where), how an event occurred (descriptive), and why the event happened (interpretive). This contrasts with quantitative studies, in which the researcher is testing hypotheses and trying to determine the significance of differences between scores for experimental and control groups, or the relationships between variable x and variable y.

Step 3: Collect the data, which consists of taking copious notes and organizing the data. The researcher should code topics and subtopics in order to arrange and file the data. The kinds of data analysis employed in historical research include (based on McMillan & Schumacher, 1984):
a. Analysis of concepts. Concepts are clarified by describing the essential and core concepts, beginning from the early developmental stages. Clarification allows other researchers to explore the topic in other fashions.
b. Editing or compilation of documents, to preserve documents in chronological order to explain events. For example, an edition of Butler's park standards, the National Recreation and Park Association's first minutes, or letters from early pioneers in the field preserves the documents for future researchers.
c. Descriptive narration, which tells the story from beginning to end in chronological order, utilizing limited generalizations and synthesized facts.
d. Interpretive analysis, which relates one event to another event. The event is studied and described within a broader context to add meaning and credibility to the data. For example, an examination of the development of a local jurisdiction's ability to dedicate land for parks may be related to the urbanization and loss of open space in our communities.
e. Comparative analysis, which examines similarities and differences in events during different time periods. For example, the budget-cutting priorities and procedures of the Proposition 13 era of the early 1980s in parks and recreation can be compared to the budget-cutting priorities and procedures of today.
f. Theoretical and philosophical analysis, which utilizes historical parallels, past trends, and sequences of events to suggest the past, present, and future of the topic being researched. Findings would be used to develop a theory or philosophy of leisure. For example, an analysis of public recreation agency goals and objectives of previous eras can be used to describe the future in the context of social, political, economic, technological, and cultural changes in society.

Step 4: Utilizing external and internal criticism, the researcher should evaluate the data. Sources of data include documents (letters, diaries, bills, receipts, newspapers, journals/magazines, films, pictures, recordings, personal and institutional records, and budgets), oral testimonies of participants in the events, and relics (textbooks, buildings, maps, equipment, furniture, and other objects).

Step 5: Report the findings, which includes a statement of the problem, a review of the source material, assumptions, research questions and methods used to obtain findings, the interpretations and conclusions, and a thorough bibliographic referencing system.

Multimethod Approach
The multimethod approach encourages collecting, analyzing and integrating data from several sources, and the use of a variety of different types of research methods.

Resources. When the population is large, a sample survey has a big resource advantage over a census. A well-designed sample survey can provide very precise estimates of population parameters, and can do so more quickly, more cheaply, and with less manpower than a census.

Generalizability. Generalizability refers to the appropriateness of applying findings from a study to a larger population. Generalizability requires random selection. If participants in a study are randomly selected from a larger population, it is appropriate to generalize study results to the larger population; if not, it is not appropriate to generalize. Observational studies do not feature random selection, so it is not appropriate to generalize from the results of an observational study to a larger population.

Causal inference. Cause-and-effect relationships can be teased out when subjects are randomly assigned to groups. Therefore, experiments, which allow the researcher to control the assignment of subjects to treatment groups, are the best method for investigating causal relationships.

from population parameters. Only probability sampling methods permit that kind of analysis.

Non-Probability Sampling Methods
Two of the main types of non-probability sampling methods are voluntary samples and convenience samples.

• Voluntary sample. A voluntary sample is made up of people who self-select into the survey. Often, these folks have a strong interest in the main topic of the survey. Suppose, for example, that a news show asks viewers to participate in an on-line poll. This would be a voluntary sample: the sample is chosen by the viewers, not by the survey administrator.

• Convenience sample. A convenience sample is made up of people who are easy to reach.

AP Statistics Tutorial: Data Collection Methods
To derive conclusions from data, we need to know how the data were collected; that is, we need to know the method(s) of data collection.

Methods of Data Collection
There are four main methods of data collection.

• Census. A census is a study that obtains data from every member of a population. In most studies, a census is not practical, because of the cost and/or time required.

• Sample survey. A sample survey is a study that obtains data from a subset of a population, in order to estimate population attributes.

• Experiment. An experiment is a controlled study in which the researcher attempts to understand cause-and-effect relationships. The study is "controlled" in the sense that the researcher controls (1) how subjects are assigned to groups and (2) which treatments each group receives. In the analysis phase, the researcher compares group scores on some dependent variable. Based on the analysis, the researcher draws a conclusion about whether the treatment (independent variable) had a causal effect on the dependent variable.

• Observational study. Like experiments, observational studies attempt to understand cause-and-effect relationships. However, unlike experiments, the researcher is not able to control (1) how subjects are assigned to groups and/or (2) which treatments each group receives.

Data Collection Methods: Pros and Cons
Each method of data collection has advantages and disadvantages.

AP Statistics Tutorial: Survey Sampling Methods
Sampling method refers to the way that observations are selected from a population to be in the sample for a sample survey.

Population Parameter vs. Sample Statistic
The reason for conducting a sample survey is to estimate the value of some attribute of a population.

• Population parameter. A population parameter is the true value of a population attribute.

• Sample statistic. A sample statistic is an estimate, based on sample data, of a population parameter.

Consider this example. A public opinion pollster wants to know the percentage of voters that favor a flat-rate income tax. The actual percentage of all the voters is a population parameter. The estimate of that percentage, based on sample data, is a sample statistic. The quality of a sample statistic (i.e., its accuracy, precision, and representativeness) is strongly affected by the way that sample observations are chosen; that is, by the sampling method.

Probability vs. Non-Probability Samples
As a group, sampling methods fall into one of two categories.

• Probability samples. With probability sampling methods, each population element has a known (nonzero) chance of being chosen for the sample.

• Non-probability samples. With non-probability sampling methods, we do not know the probability that each population element will be chosen, and/or we cannot be sure that each population element has a non-zero chance of being chosen.

Non-probability sampling methods offer two potential advantages - convenience and cost. The main disadvantage is that non-probability sampling methods do not allow you to estimate the extent to which sample statistics are likely to differ
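The parameter-versus-statistic distinction can be made concrete with a small simulation (the voter counts below are invented): fix a known population proportion, draw a random sample, and compare the sample statistic to the parameter.

```python
import random

random.seed(42)  # reproducible draw

# Hypothetical population of 100,000 voters; exactly 37% favor the tax.
population = [1] * 37_000 + [0] * 63_000   # 1 = favors flat-rate income tax
parameter = sum(population) / len(population)  # true population proportion

# Draw a simple random sample of 1,000 voters without replacement.
sample = random.sample(population, 1_000)
statistic = sum(sample) / len(sample)      # sample proportion (an estimate)

print(f"parameter = {parameter:.3f}, statistic = {statistic:.3f}")
```

The statistic will vary from draw to draw, but for a probability sample of this size it rarely strays far from the parameter, which is exactly the behavior the next sections on sampling methods and bias are about.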

Consider the following example. A pollster interviews shoppers at a local mall. If the mall was chosen because it was a convenient site from which to solicit survey participants and/or because it was close to the pollster's home or business, this would be a convenience sample.

Probability Sampling Methods
The main types of probability sampling methods are simple random sampling, stratified sampling, cluster sampling, multistage sampling, and systematic random sampling. The key benefit of probability sampling methods is that the chance of drawing an unrepresentative sample is known and can be quantified, which is what makes valid statistical conclusions possible.

• Simple random sampling. Simple random sampling refers to any sampling method that has the following properties.
• The population consists of N objects.
• The sample consists of n objects.
• If all possible samples of n objects are equally likely to occur, the sampling method is called simple random sampling.

There are many ways to obtain a simple random sample. One way would be the lottery method. Each of the N population members is assigned a unique number. The numbers are placed in a bowl and thoroughly mixed. Then, a blind-folded researcher selects n numbers. Population members having the selected numbers are included in the sample.

• Stratified sampling. With stratified sampling, the population is divided into groups, based on some characteristic. Then, within each group, a probability sample (often a simple random sample) is selected. In stratified sampling, the groups are called strata. As an example, suppose we conduct a national survey. We might divide the population into groups or strata, based on geography - north, east, south, and west.

Then, within each stratum, we might randomly select survey respondents.

• Cluster sampling. With cluster sampling, every member of the population is assigned to one, and only one, group. Each group is called a cluster. A sample of clusters is chosen, using a probability method (often simple random sampling). Only individuals within sampled clusters are surveyed. Note the difference between cluster sampling and stratified sampling: with stratified sampling, the sample includes elements from each stratum; with cluster sampling, in contrast, the sample includes elements only from sampled clusters.

• Multistage sampling. With multistage sampling, we select a sample by using combinations of different sampling methods. For example, in Stage 1, we might use cluster sampling to choose clusters from a population. Then, in Stage 2, we might use simple random sampling to select a subset of elements from each chosen cluster for the final sample.

• Systematic random sampling. With systematic random sampling, we create a list of every member of the population. From the list, we randomly select the first sample element from the first k elements on the population list. Thereafter, we select every kth element on the list.
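As an illustrative sketch (the population and strata below are invented), the lottery method, stratified sampling, and systematic random sampling described above can each be expressed in a few lines:

```python
import random

random.seed(7)  # fixed seed so the draws are repeatable

# Lottery method (simple random sampling): number the N members,
# "mix the bowl", and draw n numbers without replacement.
N, n = 50, 5
population = list(range(1, N + 1))
srs = random.sample(population, n)      # every size-n subset equally likely

# Stratified sampling: a simple random sample within each stratum.
strata = {"north": population[:25], "south": population[25:]}
stratified = {name: random.sample(members, 3) for name, members in strata.items()}

# Systematic random sampling: random start among the first k positions,
# then every kth member thereafter.
k = 10
start = random.randrange(k)
systematic = population[start::k]

print(srs, stratified, systematic)
```

Note how the three methods differ: the simple random sample can land anywhere in the list, the stratified sample is forced to draw from every stratum, and the systematic sample is evenly spaced through the list.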

101 Honda buyers, 100 Toyota buyers, and 100 GM buyers. Thus, all possible samples of size 400 did not have an equal chance of being selected, so this cannot be a simple random sample.

The fact that each buyer in the sample was randomly sampled is a necessary condition for a simple random sample, but it is not sufficient. Similarly, the fact that each buyer in the sample had an equal chance of being selected is characteristic of a simple random sample, but it is not sufficient. The sampling method in this problem used random sampling and gave each buyer an equal chance of being selected; but the sampling method was actually stratified random sampling. The fact that car buyers of every brand were equally represented in the sample is irrelevant to whether the sampling method was simple random sampling. Similarly, the fact that the population consisted of buyers of different car brands is irrelevant.

AP Statistics Tutorial: Bias in Survey Sampling
In survey sampling, bias refers to the tendency of a sample statistic to systematically over- or under-estimate a population parameter.

Bias Due to Unrepresentative Samples
A good sample is representative. This means that each sample point represents the attributes of a known number of population elements. Bias often occurs when the survey sample does not accurately represent the population. The bias that results from an unrepresentative sample is called selection bias. Some common examples of selection bias are described below.

overestimated voter support for Alfred Landon. The Literary Digest experience illustrates a common problem with mail surveys. Response rate is often low, making mail surveys vulnerable to nonresponse bias.

Voluntary response bias. Voluntary response bias occurs when sample members are self-selected volunteers, as in voluntary samples. An example would be call-in radio shows that solicit audience participation in surveys on controversial topics (abortion, affirmative action, gun control, etc.). The resulting sample tends to overrepresent individuals who have strong opinions.

Random sampling is a procedure for sampling from a population in which (a) the selection of a sample unit is based on chance and (b) every element of the population has a known, non-zero probability of being selected. Random sampling helps produce representative samples by eliminating voluntary response bias and guarding against undercoverage bias. All probability sampling methods rely on random sampling. Bias Due to Measurement Error A poor measurement process can also lead to bias. In survey research, the measurement process includes the environment in which the survey is conducted, the way that questions are asked, and the state of the survey respondent. Response bias refers to the bias that results from problems in the measurement process. Some examples of response bias are given below.
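The random sampling procedure defined above can be sketched with Python's standard library (hypothetical data, not from the tutorial): `random.sample` gives every member of the list the same known, non-zero chance of selection, and every possible sample of the requested size is equally likely.

```python
import random

random.seed(42)                                    # reproducible illustration
population = [f"member_{i}" for i in range(1000)]  # hypothetical sampling frame
srs = random.sample(population, 50)                # simple random sample, n = 50

# Each member had the same 50/1000 chance of selection, with no
# self-selection and no part of the frame left uncovered.
print(len(srs))   # -> 50
```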

This method is different from simple random sampling since every possible sample of n elements is not equally likely.

Test Your Understanding of This Lesson

Problem. An auto analyst is conducting a satisfaction survey, sampling from a list of 10,000 new car buyers. The list includes 2,500 Ford buyers, 2,500 GM buyers, 2,500 Honda buyers, and 2,500 Toyota buyers. The analyst selects a sample of 400 car buyers, by randomly sampling 100 buyers of each brand. Is this an example of a simple random sample?

(A) Yes, because each buyer in the sample was randomly sampled.
(B) Yes, because each buyer in the sample had an equal chance of being sampled.
(C) Yes, because car buyers of every brand were equally represented in the sample.
(D) No, because every possible 400-buyer sample did not have an equal chance of being chosen.
(E) No, because the population consisted of purchasers of four different brands of car.

Solution. The correct answer is (D). A simple random sample requires that every sample of size n (in this problem, n is equal to 400) have an equal chance of being selected. In this problem, there was a 100 percent chance that the sample would include 100 purchasers of each brand of car. There was zero percent chance that the sample would include, for example, 99 Ford buyers,

Undercoverage. Undercoverage occurs when some members of the population are inadequately represented in the sample. A classic example of undercoverage is the Literary Digest voter survey, which predicted that Alfred Landon would beat Franklin Roosevelt in the 1936 presidential election. The survey sample suffered from undercoverage of low-income voters, who tended to be Democrats. How did this happen? The survey relied on a convenience sample, drawn from telephone directories and car registration lists. In 1936, people who owned cars and telephones tended to be more affluent. Undercoverage is often a problem with convenience samples.

Leading questions. The wording of the question may be loaded in some way to unduly favor one response over another. For example, a satisfaction survey may ask the respondent to indicate whether she is satisfied, dissatisfied, or very dissatisfied. By giving the respondent one response option to express satisfaction and two response options to express dissatisfaction, this survey question is biased toward getting a dissatisfied response.

Social desirability. Most people like to present themselves in a favorable light, so they will be reluctant to admit to unsavory attitudes or illegal activities in a survey, particularly if survey results are not confidential. Instead, their responses may be biased toward what they believe is socially desirable.

Nonresponse bias. Sometimes, individuals chosen for the sample are unwilling or unable to participate in the survey. Nonresponse bias is the bias that results when respondents differ in meaningful ways from nonrespondents. The Literary Digest survey illustrates this problem. Respondents tended to be Landon supporters; and nonrespondents, Roosevelt supporters. Since only 25% of the sampled voters actually completed the mail-in survey, survey results

Sampling Error and Survey Bias A survey produces a sample statistic, which is used to estimate a population parameter. If you repeated a survey many times, using different samples each time, you would get a different sample statistic with each replication. And each of the different

sample statistics would be an estimate for the same population parameter. If the statistic is unbiased, the average of all the statistics from all possible samples will equal the true population parameter, even though any individual statistic may differ from the population parameter. The variability among statistics from different samples is called sampling error. Increasing the sample size tends to reduce the sampling error; that is, it makes the sample statistic less variable. However, increasing sample size does not affect survey bias. A large sample size cannot correct for the methodological problems (undercoverage, nonresponse bias, etc.) that produce survey bias. The Literary Digest example discussed above illustrates this point. The sample size was very large - over 2 million surveys were completed; but the large sample size could not overcome problems with the sample - undercoverage and nonresponse bias.

AP Statistics Tutorial: Experiments

In an experiment, a researcher manipulates one or more variables, while holding all other variables constant. By noting how the manipulated variables affect a response variable, the researcher can test whether a causal relationship exists between the manipulated variables and the response variable.

Parts of an Experiment

All experiments have independent variables, dependent variables, and experimental units.

 Independent variable. An independent variable (also called a factor) is an explanatory variable manipulated by the experimenter. Each factor has two or more levels, i.e., different values of the factor. Combinations of factor levels are called treatments. The table below shows independent variables, factors, levels, and treatments for a hypothetical experiment.

Dependent variable. In the hypothetical experiment above, the researcher is looking at the effect of vitamins on health. The dependent variable in this experiment would be some measure of health (annual doctor bills, number of colds caught in a year, number of days hospitalized, etc.). Experimental units. The recipients of experimental treatments are called experimental units. The experimental units in an experiment could be anything - people, plants, animals, or even inanimate objects.

pill in drug research. The drug is effective only if participants who receive the drug have better outcomes than participants who receive the sugar pill. Blinding. Of course, if participants in the control group know that they are receiving a placebo, the placebo effect will be reduced or eliminated; and the placebo will not serve its intended control purpose. Blinding is the practice of not telling participants whether they are receiving a placebo. In this way, participants in the control and treatment groups experience the placebo effect equally. Often, knowledge of which groups receive placebos is also kept from people who administer or evaluate the experiment. This practice is called double blinding. It prevents the experimenter from "spilling the beans" to participants through subtle cues; and it assures that the analyst's evaluation is not tainted by awareness of actual treatment conditions.

In the hypothetical experiment above, the experimental units would probably be people (or lab animals). But in an experiment to measure the tensile strength of string, the experimental units might be pieces of string. When the experimental units are people, they are often called participants; when the experimental units are animals, they are often called subjects. Characteristics of a Well-Designed Experiment A well-designed experiment includes design features that allow researchers to eliminate extraneous variables as an explanation for the observed relationship between the independent variable(s) and the dependent variable. Some of these features are listed below.  Control. Control refers to steps taken to reduce the effects of extraneous variables (i.e., variables other than the independent variable and the dependent variable). These extraneous variables are called lurking variables. Control involves making the experiment as similar as possible for experimental units in each treatment condition. Three control strategies are control groups, placebos, and blinding.

                       Vitamin C
Vitamin E     0 mg          250 mg        500 mg
0 mg          Treatment 1   Treatment 2   Treatment 3
400 mg        Treatment 4   Treatment 5   Treatment 6

 In this hypothetical experiment, the researcher is studying the possible effects of Vitamin C and Vitamin E on health. There are two factors - dosage of Vitamin C and dosage of Vitamin E. The Vitamin C factor has three levels - 0 mg per day, 250 mg per day, and 500 mg per day. The Vitamin E factor has two levels - 0 mg per day and 400 mg per day. The experiment has six treatments. Treatment 1 is 0 mg of E and 0 mg of C, Treatment 2 is 0 mg of E and 250 mg of C, and so on.
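Because treatments are simply combinations of factor levels, the six treatments above can be enumerated mechanically. A short Python sketch, using the doses from the hypothetical vitamin experiment:

```python
from itertools import product

vitamin_e = ["0 mg", "400 mg"]            # two levels of the Vitamin E factor
vitamin_c = ["0 mg", "250 mg", "500 mg"]  # three levels of the Vitamin C factor

# Each treatment is one combination of factor levels: 2 x 3 = 6 treatments.
treatments = list(product(vitamin_e, vitamin_c))
print(len(treatments))   # -> 6
```

The first combination, ("0 mg", "0 mg"), corresponds to Treatment 1 (0 mg of E and 0 mg of C), the second to Treatment 2, and so on.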

Control group. A control group is a baseline group that receives no treatment or a neutral treatment. To assess treatment effects, the experimenter compares results in the treatment group to results in the control group. Placebo. Often, participants in an experiment respond differently after they receive a treatment, even if the treatment is neutral. A neutral treatment that has no "real" effect on the dependent variable is called a placebo, and a participant's positive response to a placebo is called the placebo effect. To control for the placebo effect, researchers often administer a neutral treatment (i.e., a placebo) to the control group. The classic example is using a sugar

Randomization. Randomization refers to the practice of using chance methods (random number tables, flipping a coin, etc.) to assign experimental units to treatments. In this way, the potential effects of lurking variables are distributed at chance levels (hopefully roughly evenly) across treatment conditions.  Replication. Replication refers to the practice of assigning each treatment to many experimental units. In general, the more experimental units in each treatment condition, the lower the variability of the dependent measures. Confounding Confounding occurs when the experimental controls do not allow the experimenter to reasonably eliminate plausible alternative explanations for an observed relationship between independent and dependent variables. Consider this example. A drug manufacturer tests a new cold medicine with 200 participants - 100 men and 100 women. The men receive the drug, and the women do not. At the end of the test period, the men report fewer colds. This experiment implements no controls at all! As a result, many variables are confounded, and it is impossible to say whether the drug was effective. For example, gender is confounded with drug use. Perhaps, men are less vulnerable to the particular cold virus circulating during the experiment, and the new medicine had no effect at all. Or perhaps the men experienced a placebo effect. This experiment could be strengthened with a few controls. Women and men could be randomly assigned to treatments. One treatment could receive a placebo, with blinding. Then, if the treatment group (i.e., the group getting the medicine) had sufficiently fewer colds than the control group, it would be

reasonable to conclude that the medicine was effective in preventing colds.
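The strengthened design just described (random assignment plus a placebo control) can be sketched in Python. The participant labels are hypothetical, invented only for illustration:

```python
import random

random.seed(7)  # reproducible illustration
# Hypothetical pool: 100 men and 100 women, as in the cold-medicine example.
participants = ([f"man_{i}" for i in range(100)] +
                [f"woman_{i}" for i in range(100)])

# Randomization: chance, not gender, decides who receives the medicine,
# so lurking variables such as gender are spread across both groups.
random.shuffle(participants)
treatment_group = participants[:100]   # receive the cold medicine
control_group = participants[100:]     # receive a placebo, with blinding

print(len(treatment_group), len(control_group))   # -> 100 100
```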

Quantitative and qualitative data are two types of data.

Qualitative data

Qualitative data is a categorical measurement expressed not in terms of numbers, but rather by means of a natural language description. In statistics, it is often used interchangeably with "categorical" data. For example: favorite color = "blue"; height = "tall". Although we may have categories, the categories may have a structure to them. When there is not a natural ordering of the categories, we call these nominal categories. Examples might be gender, race, religion, or sport. When the categories may be ordered, these are called ordinal variables. Categorical variables that judge size (small, medium, large, etc.) are ordinal variables. Attitudes (strongly disagree, disagree, neutral, agree, strongly agree) are also ordinal variables, although there is no objectively best or worst value among them. Note that the distance between these categories is not something we can measure.

Quantitative data

Quantitative data is a numerical measurement expressed not by means of a natural language description, but rather in terms of numbers. However, not all numbers are continuous and measurable. For example, a social security number is a number, but not something that one can add or subtract. For example: favorite color = "450 nm"; height = "1.8 m". Quantitative data are always associated with a scale measure. Probably the most common scale type is the ratio scale. Observations of this type are on a scale that has a meaningful zero value but also have an equidistant measure (i.e., the difference between 10 and 20 is the same as the difference between 100 and 110). For example, a 10-year-old girl is twice as old as a 5-year-old girl. Since you can measure zero years, time is a ratio-scale variable. Money is another common ratio-scale quantitative measure. Observations that you count are usually ratio-scale (e.g., number of widgets).
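The nominal/ordinal distinction can be illustrated with a short Python sketch using hypothetical data: ordinal values can be ranked meaningfully, nominal values cannot.

```python
# Ordinal data: the size categories have a natural order we can encode.
sizes = ["small", "medium", "large"]
rank = {level: i for i, level in enumerate(sizes)}

responses = ["large", "small", "medium", "small"]
print(sorted(responses, key=rank.get))
# -> ['small', 'small', 'medium', 'large']

# Nominal data: colors have no natural order, so any sorting of them
# (e.g., alphabetical) is arbitrary rather than meaningful.
colors = ["blue", "red", "green"]
```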
A more general quantitative measure is the interval scale. Interval scales also have an equidistant measure. However, the doubling principle breaks down in this scale. A temperature of 50 degrees Celsius is not "half as hot" as a temperature of 100, but a difference of 10 degrees indicates the same difference in temperature anywhere along the scale. The Kelvin temperature scale, however, constitutes a ratio scale because on the Kelvin scale zero indicates absolute zero in temperature, the complete absence of heat. So one can say, for example, that 200 degrees

Kelvin is twice as hot as 100 degrees Kelvin.

Types of Questionnaires
1) Structured, non-disguised questionnaire
2) Structured, disguised questionnaire
3) Non-structured, non-disguised questionnaire
4) Non-structured, disguised questionnaire

1) Structured, non-disguised questionnaire
• Questions are listed in a pre-arranged order.
• Respondents are told the purpose of collecting the information.

2) Structured, disguised questionnaire
• Questions are listed in a pre-arranged order.
• Respondents are not told the purpose of conducting the survey.

3) Non-structured, non-disguised questionnaire
• Questions are not structured; the researcher is free to ask questions in any sequence he/she wants.
• Respondents are told the purpose of collecting the information.

4) Non-structured, disguised questionnaire
• Questions are not structured; the researcher is free to ask questions in any sequence he/she wants.
• Respondents are not told the purpose of conducting the survey.

Types of Questions
1) Closed-ended questions:
• In the closed-ended type of question, the respondent is asked to select from a fixed list of replies.
• The respondent has to choose one of the options given, or multiple options.
• This facilitates coding and helps in quantifying the answers.

Different Types of Questions in Questionnaire Design

The following is a list of the different types of questions in questionnaire design:

1. Open Format Questions
Open format questions are those questions that give your audience an opportunity to express their opinions. In these types of questions, there is no predetermined set of responses, and the person is free to answer however he/she chooses. By including open format questions in your questionnaire, you can get true, insightful and even unexpected suggestions. Qualitative questions fall under the category of open format questions. An ideal questionnaire would include an open format question at the end that asks the respondent for suggestions about changes or improvements. Example of an Open Format Question

2. Closed Format Questions
Closed format questions are questions that include multiple choice answers. Multiple choice questions fall under the category of closed format questions. The number of answer choices may be even or odd. By including closed format questions in your questionnaire design, you can easily calculate statistical data and percentages. Preliminary analysis can also be performed with ease. Closed format questions can be asked to different groups at different intervals, which enables you to efficiently track opinion over time. Example of a Closed Format Question
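Because closed format answers come from a fixed list, tallying responses and computing percentages is mechanical. A minimal Python sketch with hypothetical responses:

```python
from collections import Counter

# Hypothetical responses to a closed format question with fixed choices.
choices = ["Yes", "No", "Undecided"]
responses = ["Yes", "No", "Yes", "Yes", "Undecided", "No", "Yes"]

counts = Counter(responses)
percentages = {c: 100 * counts[c] / len(responses) for c in choices}
print(counts["Yes"], round(percentages["Yes"], 1))   # -> 4 57.1
```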


3. Leading Questions
Leading questions are questions that push your audience toward a particular type of answer. In a leading question, the answer choices are not balanced, so some responses become more likely than others. An example would be a question with choices such as fair, good, great, poor, superb, and excellent: because most of these options are favorable, the question is biased toward a positive opinion. Example of a Leading Question

4. Importance Questions
In importance questions, the respondents are usually asked to rate the importance of a particular issue on a rating scale of 1 to 5. These questions can help you grasp which issues matter most to your respondents. Importance questions can also help you make business-critical decisions. Example of an Importance Question

8. Rating Scale Questions
In rating scale questions, the respondent is asked to rate a particular issue on a scale that ranges from poor to good. Rating scale questions usually have an even number of choices, so that respondents are not given the option of a neutral middle answer. Example of a Rating Scale Question

3. Hypothetical Questions
Hypothetical questions are questions that are based on speculation and fantasy. An example of a hypothetical question would be "If you were the CEO of ABC organization, what changes would you bring?" Questions such as these force the respondent to give his or her ideas on a particular subject. However, these kinds of questions will not give you consistent or clear data. Hypothetical questions are mostly avoided in questionnaires.

5. Likert Questions
Likert questions can help you ascertain how strongly your respondent agrees with a particular statement. Likert questions can also help you assess how your customers feel towards a certain issue, product or service. Example of a Likert Question
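A common way to work with Likert items is to code the five responses numerically and summarize them. A minimal sketch with hypothetical answers (the 1-5 coding is a widely used convention, not something specified in the text above):

```python
# Conventional 1-5 coding of a five-point Likert scale.
scale = {"strongly disagree": 1, "disagree": 2, "neutral": 3,
         "agree": 4, "strongly agree": 5}

answers = ["agree", "strongly agree", "neutral", "agree"]  # hypothetical data
scores = [scale[a] for a in answers]
mean_score = sum(scores) / len(scores)
print(mean_score)   # -> 4.0 (average agreement with the statement)
```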

9. Buying Propensity Questions
Buying propensity questions are questions that try to assess the future intentions of customers. These questions ask respondents whether they want to buy a particular product, what requirements they want addressed, and whether they would buy such a product in the future. Example of a Buying Propensity Question

6. Dichotomous Questions
Dichotomous questions are simple questions that ask respondents to answer just yes or no. One major drawback of a dichotomous question is that it cannot capture any shade of opinion between yes and no. Example of a Dichotomous Question

7. Bipolar Questions
Bipolar questions are questions that have two extreme answers. The respondent is asked to mark his/her response between the two opposite ends of the scale. Example of a Bipolar Question

Questions to be avoided in a questionnaire
The following is a list of question types to avoid when preparing a questionnaire.

1. Embarrassing Questions
Embarrassing questions are questions that ask respondents details about personal and private matters. Embarrassing questions are mostly avoided because you would lose the trust of your respondents. Your respondents might also feel uncomfortable answering such questions and might refuse to complete your questionnaire.

2. Positive/Negative Connotation Questions
Since most verbs, adjectives and nouns in the English language have either a positive or a negative connotation, questions are bound to take on a positive or negative tone. When wording a question, strong negative or positive overtones should be avoided: depending on the connotation of your question, you will get different data. Ideal questions have neutral or subtle overtones.
