
# Research Approaches and Methods of Data Collection

RAMLEE ISMAIL, PHD, FACULTY OF MANAGEMENT AND ECONOMICS

**Ways of Categorizing Research Approaches**

• Experimental versus descriptive

– experimental identifies cause and effect
– descriptive describes some phenomenon

• Quantitative versus qualitative

– quantitative collects numerical data
– qualitative collects non-numerical data such as pictures, clothing worn, and interview statements

**Variables in Quantitative Research**

• Categorical versus quantitative

– categorical varies by type or kind (e.g., gender)
– quantitative varies by degree or amount (e.g., reaction time)

**Variables in Quantitative Research (cont'd)**

• Independent versus dependent

– independent variable (IV) is presumed to cause changes in another variable
– dependent variable (DV) is changed because of another variable; it measures the effect of the independent variable
– example: effect of anxiety (IV) on memory (DV)

**Variables in Quantitative Research (cont'd)**

• Independent versus dependent

– extraneous variables are those that compete with the independent variable in explaining the DV.

**Variables in Quantitative Research (cont'd)**

• Mediating and moderating variables

– a mediating variable occurs between two other variables in a causal chain (e.g., anxiety causes distraction (mediating variable), which affects memory)
– moderating variables qualify a causal relationship as dependent on another variable (e.g., the impact of anxiety on memory depends on level of fatigue (moderating variable))

**Experimental Research**

• Cause and effect
– definition differs from common usage
– refers to a probabilistic relationship between an IV and a DV
– attempts to identify what would have happened had the IV not been administered

**Experimental Research (cont'd)**

• Criteria for identifying a causal relation
– the cause (IV) must be related to the effect (DV) (relationship condition)
– changes in the IV must precede changes in the DV (temporal order condition)
– no other explanation may exist for the effect

**The Psychological Experiment**

• Definition of the psychological experiment
– objective observation
– of phenomena that are made to occur
– in a strictly controlled situation in which one or more factors are varied and others are kept constant

**Advantages of the Experimental Approach**

• Causal inference
– the experimental approach is the best method for inferring causation
– causal description refers to identifying the consequences of manipulating an IV
– causal explanation refers to explaining the mechanisms through which the relationship exists

**Advantages of the Experimental Approach (cont'd)**

• Ability to manipulate variables
• Control
– extraneous variables are controlled by holding them constant or by using random assignment and matching

**Disadvantages of the Experimental Approach**

• Does not test the effects of nonmanipulated variables
– many potential independent variables cannot be directly manipulated
• Artificiality
– refers to potential problems in generalizing findings from laboratory settings to the "real world"
• Inadequate method of scientific inquiry

**Experimental Research Settings**

• Field experiments
– advantage: may be easier to generalize findings
– disadvantage: less control of extraneous variables
• Laboratory experiments
– more control than field experiments, but possibly more artificiality

**Experimental Research Settings (cont'd)**

• Internet experiments
– advantages
• access to a diverse population
• brings the experiment to the participant
• large samples and thus greater power
• direct assessment of motivational confounding
• cost savings

**Experimental Research Settings (cont'd)**

• Internet experiments
– disadvantages
• multiple submissions
• lack of control
• self-selection
• dropout

**Nonexperimental Quantitative Research**

• Primary goal is to provide an accurate description of a situation or phenomenon, or to describe the size and direction of relationships among variables
• Types
– correlational study
– natural manipulation research
– cross-sectional and longitudinal studies

**Correlational Study**

• Measures the degree of relationship between two variables
• Used for prediction

**Correlational Study (cont'd)**

• Primary limitation: inability to determine causality
– third variable problem: the relationship between two variables is due to a separate, unmeasured variable

**Correlational Study (cont'd)**

• Primary limitation: inability to determine causality
– path analysis: a method of testing relationships among variables by seeing how well they fit some theoretical model
• direct effects: when a variable directly impacts another
• indirect effects: the effect occurs through a mediating variable

**Natural Manipulation Research**

• Variables of interest are not directly manipulated
– e.g., a comparison of the psychological functioning (DV) of people living near the twin towers versus farther away (IV)
• Because variables are not directly controlled, extraneous variables could be a problem

**Cross-sectional and Longitudinal Studies**

• Cross-sectional studies assess groups of participants at one point in time
– e.g., comparing IQ scores of several different age groups
– potential problem: age-cohort effects

**Cross-sectional and Longitudinal Studies (cont'd)**

• Longitudinal studies assess the same participants over a period of time
– e.g., measuring changes in IQ for the same participants over several years
– disadvantages: attrition and cost
• Cohort-sequential studies represent a combination of cross-sectional and longitudinal designs
– different age groups are tested longitudinally

**Qualitative Research**

• Definition
– interpretative
– multimethod
– conducted in natural settings
• Strengths
– description of individuals with a common identity
– develops theoretical understanding of phenomena

**Qualitative Research (cont'd)**

• Weaknesses
– difficult to generalize findings
– possible lack of agreement among researchers
– objective hypothesis testing not used

**Major Methods of Data Collection**

• Tests
• Questionnaires
• Interviews
• Focus groups
• Observation
– naturalistic versus laboratory
– types vary depending on involvement of the participant

**Major Methods of Data Collection (cont'd)**

• Existing or secondary data

# Problem Identification and Hypothesis Formation

RAMLEE ISMAIL, FPE, UPSI

**Sources of Research Ideas**

• Everyday life
• Practical issues (e.g., eyewitness identification)
• Past research
• Theory
– goal function: summarize and integrate
– tool function: guide research

**Issues Affecting Research Ideas**

• Bias
• Must be capable of being scientifically investigated

**Review of the Literature**

• Purpose of the literature review
– identify if the topic has been researched
– provide design ideas
– identify methodological problems
– identify special needs
– provide information for the research report

**Review of the Literature (cont'd)**

• Doing the search
– Books
– Journals
– Electronic databases
• PsycINFO
– Internet resources

**Feasibility of the Study**

• Must consider time, expense, ethical, and other issues to determine if conducting the study is possible and practical

**Formulating the Research Problem**

• Definition: an interrogative sentence that states the relationship between two variables
– Criteria for good research problems
• variables should express a relation
• stated in question form
• capable of empirical testing

**Formulating the Research Problem (cont'd)**

• Specificity of the research question
– too vague: What effect does the environment have on learning ability?
– better: What effect does the amount of exposure to words have on the speed with which they are learned?

**Formulating Hypotheses**

• Definition: the best prediction or a tentative solution to a problem
• Criterion: must be capable of being refuted or confirmed (testability)
• Types
– research
– null

# Ethics

**Research Ethics: What Are They?**

• Definition: a set of guidelines to assist the researcher in conducting ethical research
• Relationship between science and society
– government funding of scientific research
– congressional influence on which studies are funded
– corporate funding of scientific research

**Research Ethics: What Are They? (cont'd)**

• Professional issues
– scientific misconduct
• faking data
• other less serious issues, such as failing to present data or changing the design to meet pressure from a funding source
• developing an institutional culture of ethical behavior is the best way of combating this
• Treatment of research participants

**Ethical Dilemmas**

• Definition: deciding if the benefit of the research is greater than the cost to the participants
– primary consideration: welfare of the participant


**Ethical Dilemmas (cont'd)**

• Role of the IRB
– review research protocols to assess ethical acceptability of a study
– use of the decision-plane model for making decisions

Figure 4.1 A decision-plane model representing the costs and benefits of research studies. (From "Hedgehogs, foxes and the evolving social contract in science: Ethical challenges and methodological opportunities" by R. L. Rosnow, 1997, Psychological Methods, 2, pp. 345–356. Copyright by the American Psychological Association. Reprinted by permission of the author.)

**Ethical Guidelines**

• Respect for persons and their autonomy
– adhered to by obtaining a person's informed consent

**Ethical Guidelines (cont'd)**

• Beneficence and nonmaleficence
– goal of research studies: minimum harm and maximum benefit
– IRB proposal review categories in making this decision
• exempt studies
• expedited review
• full board review

**Ethical Guidelines (cont'd)**

• Justice
– asks the question: "Who should receive the benefits of the research and who should bear its burdens?"

**Ethical Guidelines (cont'd)**

• Trust
– involves maintaining trust between researcher and participant
– use of deception or failing to maintain confidentiality may violate trust
• Fidelity and scientific integrity

**APA Ethical Standards for Research**

• APA code of ethics: 10 guiding principles to direct the behavior of researchers

**APA Ethical Standards for Research (cont'd)**

• Issues to consider when conducting research
– institutional approval must be obtained
– informed consent
• dispensing with informed consent
• minors: need to obtain their assent
• passive versus active consent

**APA Ethical Standards for Research (cont'd)**

• Issues to consider when conducting research
– deception: refers to deceit
• active: deception by commission
• passive: deception by omission
• may cause participants to distrust psychologists
• alternatives such as role playing are inadequate
• studies that involve invasion of privacy and/or may harm the participants

**APA Ethical Standards for Research (cont'd)**

• Issues to consider when conducting research
– debriefing: postexperimental interview
• dehoaxing: debriefing the participants about any deception that was used
• desensitizing: eliminating any undesirable influence that the experiment might have had on the participant

**APA Ethical Standards for Research (cont'd)**

• Issues to consider when conducting research
– coercion and freedom to decline participation
– confidentiality, anonymity, and the concept of privacy
• privacy: controlling others' access to information about yourself, including when and under what circumstances others get your information

**APA Ethical Standards for Research (cont'd)**

• Issues to consider when conducting research
– confidentiality, anonymity, and the concept of privacy
• participants must be able to decline giving their information to others
• anonymity: keeping participants' identities unknown
• confidentiality: not revealing or connecting the participant to the information obtained

**Ethical Issues in Electronic Research**

• Informed consent
– no clear distinction between what is public and what is private over the internet
– how to obtain informed consent
• Privacy
– hackers can obtain the data, but data can be encrypted

**Ethical Issues in Electronic Research (cont'd)**

• Debriefing
– extra attention must be given due to the possibility of participants not completing the study
– ways to maximize the probability of debriefing
• have participants provide an email address
• provide a 'leave the study' radio button
• incorporate a debriefing page into the program so it is delivered directly to the participant

**Ethical Issues in Preparing the Research Report**

• Principles to follow
– justice: who will be the author(s)
– scientific integrity: accurate and honest reporting
• Writing the research report
– presentation should be honest and written with integrity
– avoid plagiarism: not giving another person credit for their work

**Ethics of Animal Research**

• Distinction between animal welfare and animal rights
– animal welfare involves improving lab conditions and reducing the number of animals used
– animal rights is the belief that nonhuman animals have rights similar to humans'
– APA guidelines deal with animal welfare

**Ethics of Animal Research (cont'd)**

• APA guidelines for the care and use of animals in research

# Measuring Variables and Sampling

**Variable and Measurement**

• Variable: a condition or characteristic that can take on different values or categories
• Measurement: the assignment of symbols or numbers to something according to a set of rules

**Scales of Measurement**

• Nominal scale
– use of symbols to classify or categorize
– e.g., using numbers to categorize gender
• Ordinal scale
– rank-order scale of measurement
– e.g., finishing order in a race
– equal distances on the scale are not necessarily equal on the dimension being measured

**Scales of Measurement (cont'd)**

• Interval scale
– same properties as ordinal plus equal distances between adjacent numbers
– e.g., temperature on the Fahrenheit scale
• Ratio scale
– highest scale of measurement
– same properties as the other scales plus an absolute zero point
– e.g., weight, height

**Psychometric Properties of Good Measurement**

• Reliability
– refers to the consistency or stability of the scores of your measurement instrument
• Validity
– refers to the extent to which your measurement procedure is measuring what you think it is measuring and whether you have interpreted your scores correctly

**Psychometric Properties of Good Measurement (cont'd)**

• A measure must be reliable in order to be valid, but a reliable measure is not necessarily valid

**Types of Reliability**

• Test-retest reliability
– consistency of individual scores over time
– same test administered to individuals at two times
– correlate scores to determine reliability
– how long to wait between tests?

**Types of Reliability (cont'd)**

• Equivalent-forms reliability
– consistency of scores on two versions of a test
– each version of the test given to different groups of individuals

**Types of Reliability (cont'd)**

• Internal consistency reliability
– consistency with which items on a test measure a single construct
– involves comparing individual items within a single test
– coefficient alpha is a common index
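Coefficient alpha can be computed from raw item scores as k/(k − 1) × (1 − Σ item variances / variance of total scores). A minimal sketch, assuming hypothetical responses to a 3-item scale (all names and values invented for illustration):

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Coefficient (Cronbach's) alpha for a set of test items.

    items: list of k lists, each holding one item's scores
    across the same respondents.
    """
    k = len(items)
    item_vars = sum(pvariance(col) for col in items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent total score
    return (k / (k - 1)) * (1 - item_vars / pvariance(totals))

# Hypothetical 3-item scale answered by five respondents
item1 = [4, 5, 3, 5, 4]
item2 = [4, 4, 3, 5, 3]
item3 = [5, 5, 2, 5, 4]
print(round(cronbach_alpha([item1, item2, item3]), 2))  # high internal consistency here
```

Alpha near 1 indicates the items vary together, i.e., they appear to tap a single construct.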

**Types of Reliability (cont'd)**

• Interrater reliability
– degree of agreement between two or more observers
– interobserver agreement is the percentage of times different raters agree
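Interobserver agreement is simple to compute. A sketch with hypothetical observation codes from two raters:

```python
def percent_agreement(rater_a, rater_b):
    """Percentage of observations on which two raters gave the same code."""
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return 100 * matches / len(rater_a)

# Hypothetical on-task/off-task codes from two observers over 10 intervals
a = ["on", "on", "off", "on", "off", "on", "on", "off", "on", "on"]
b = ["on", "off", "off", "on", "off", "on", "on", "on", "on", "on"]
print(percent_agreement(a, b))  # → 80.0
```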

**Validity**

• Validity refers to the accuracy of the inferences, interpretations, or actions made on the basis of test scores
• Involves the measurement of constructs (e.g., intelligence or happiness)
• Do operational definitions accurately represent the construct we are interested in?

**Methods Used to Collect Evidence of Validity**

• Content-related evidence (content validity)
– validity assessed by experts
• do items appear to measure the construct of interest?
• were any important content areas omitted?
• were any unnecessary items included?

**Methods Used to Collect Evidence of Validity (cont'd)**

• Evidence based on internal structure
– how well individual items relate to the overall test score or other items on the test
– factor analysis: a statistical procedure used to determine the number of dimensions present in a set of items

**Methods Used to Collect Evidence of Validity (cont'd)**

• Evidence based on relations to other variables
– criterion-related validity
• predictive validity: using scores obtained at one time to predict scores on a criterion at a later time
• concurrent validity: degree to which scores obtained at one time correctly relate to scores on a known criterion obtained at the same time

**Methods Used to Collect Evidence of Validity (cont'd)**

• Evidence based on relations to other variables
– convergent validity: extent to which test scores relate to other measures of the same construct
– discriminant validity: extent to which your test scores do not relate to other test scores measuring different constructs

**Methods Used to Collect Evidence of Validity (cont'd)**

• Evidence based on relations to other variables
– known-groups validity evidence: extent to which groups that are known to differ from one another actually differ on the construct being measured

**Sampling Methods**

• Sample: a set of elements selected from a population
• Population: the full set of elements from which the sample was selected

**Sampling Methods (cont'd)**

• Sampling: the process of drawing elements from a population to form a sample
– representative sample
– equal probability of selection method (EPSEM)
• Statistic: a numerical characteristic of sample data

**Sampling Methods (cont'd)**

• Parameter: a numerical characteristic of population data
• Sampling error: the difference between the value of the sample statistic and the value of the population parameter
• Sampling frame: a list of all the elements in a population

**Random Sampling Techniques**

• Simple random sampling
– choosing a sample in a manner in which everyone has an equal chance of being selected
– sampling "without replacement" is preferred
– random number generators simplify the process
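Python's standard library illustrates this directly: `random.sample` draws without replacement, so no element can be selected twice. The sampling frame below is hypothetical:

```python
import random

# Hypothetical sampling frame of 500 numbered population members
population = list(range(1, 501))

rng = random.Random(42)                # seeded so the draw is reproducible
sample = rng.sample(population, k=25)  # sampling WITHOUT replacement

print(len(sample), len(set(sample)))   # → 25 25 (no element repeats)
```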

**Random Sampling Techniques (cont'd)**

• Stratified random sampling
– random samples drawn from different groups or strata within the population
• proportional stratified sampling involves ensuring that each subgroup in the sample is proportional to the subgroups in the population
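A sketch of proportional stratified sampling, assuming a hypothetical population with two strata (all names and sizes invented for illustration):

```python
import random

def proportional_stratified_sample(strata, total_n, seed=0):
    """Draw a sample whose strata sizes mirror their population shares.

    strata: dict mapping stratum name -> list of population members
    """
    rng = random.Random(seed)
    pop_size = sum(len(members) for members in strata.values())
    sample = {}
    for name, members in strata.items():
        n_here = round(total_n * len(members) / pop_size)  # proportional allocation
        sample[name] = rng.sample(members, n_here)         # random draw within stratum
    return sample

# Hypothetical population: 600 undergraduates and 400 graduate students
strata = {"undergrad": list(range(600)), "grad": list(range(400))}
sample = proportional_stratified_sample(strata, total_n=100)
print({k: len(v) for k, v in sample.items()})  # → {'undergrad': 60, 'grad': 40}
```

With a 60/40 population split, a proportional sample of 100 contains 60 undergraduates and 40 graduates.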

**Random Sampling Techniques (cont'd)**

• Cluster random sampling
– involves random selection of groups of individuals (clusters)
– one-stage cluster sampling involves randomly selecting clusters and using all individuals within them
– two-stage cluster sampling involves randomly choosing individuals within each chosen cluster

**Random Sampling Techniques (cont'd)**

• Systematic sampling involves three steps
• determine the sampling interval (k): population size divided by desired sample size
• randomly select a number between 1 and k, and include that person in your sample
• also include every kth element in your sample
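The three steps can be sketched as follows, using a hypothetical ordered frame of 100 people:

```python
import random

def systematic_sample(frame, n, seed=0):
    """Every-kth-element systematic sample from an ordered frame."""
    k = len(frame) // n                        # step 1: sampling interval
    start = random.Random(seed).randint(1, k)  # step 2: random start between 1 and k
    return frame[start - 1::k][:n]             # step 3: every kth element thereafter

frame = list(range(1, 101))   # hypothetical sampling frame of 100 people
chosen = systematic_sample(frame, n=10)
print(len(chosen))  # → 10, each selected person exactly k = 10 positions apart
```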

**Nonrandom Sampling Techniques**

• Convenience sampling
– using research participants who are readily available
– e.g., college students
• Quota sampling
– identifying quotas for individual groups and then using convenience sampling to select participants within each group

**Nonrandom Sampling Techniques (cont'd)**

• Purposive sampling
– involves identifying a group of individuals with specific characteristics
– e.g., college freshmen who have been diagnosed with ADHD
• Snowball sampling
– technique in which research participants identify other potential participants
– particularly useful in identifying participants from a difficult-to-find population

**Random Selection and Random Assignment**

• Random selection involves selecting participants for research
– purpose is to obtain a representative sample
• Random assignment involves how participants are assigned to conditions within the research
– purpose is to create equivalent groups to allow for investigation of causality
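Random assignment can be sketched as shuffling the participant pool and dealing it into groups; the participant IDs here are hypothetical:

```python
import random

def random_assign(participants, n_groups, seed=7):
    """Shuffle participants, then deal them into n_groups equal-sized groups."""
    pool = list(participants)
    random.Random(seed).shuffle(pool)           # every ordering equally likely
    return [pool[i::n_groups] for i in range(n_groups)]

groups = random_assign(range(30), n_groups=3)   # 30 hypothetical participant IDs
print([len(g) for g in groups])  # → [10, 10, 10]
```

Because each participant is equally likely to land in any group, extraneous participant characteristics are distributed across conditions by chance rather than by any systematic process.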

**Determining Sample Size**

• If the population is less than 100, use the entire population
• Larger sample sizes make it easier to detect an effect or relationship in the population
• Compare to other research studies in the area
• Larger sample sizes are needed if the population is heterogeneous

**Determining Sample Size (cont'd)**

• Larger sample sizes are needed
– if the population is heterogeneous
– if you have multiple groups
– if you want increased precision
– when you expect a small effect

**Determining Sample Size (cont'd)**

• Larger sample sizes are needed
– when you use less efficient methods of sampling
– for some statistical techniques
– if you expect a low response rate

**Sampling in Qualitative Research**

• Qualitative research focuses on in-depth study of one or a few cases
• Several different sampling methods are available
• It is common to mix several different methods


# Research Validity

**Research Validity**

• Research validity refers to the correctness or truthfulness of an inference that is made from the results of a research study
• Four major types of validity
– statistical conclusion validity
– construct validity
– internal validity
– external validity

**Statistical Conclusion Validity**

• Do the independent and dependent variables covary?
• Inferential statistics allow us to establish this type of validity
• Small sample size is a threat to statistical conclusion validity

**Construct Validity**

• Definition: extent to which we can infer higher-order constructs from our operations
• Constructs are used for
– research participants
– independent variable
– dependent variable
– experimental setting

**Threats to Construct Validity**

• Inadequate explanation of the construct
• Construct confounding
• Mono-operation bias
• Confounding constructs with levels of constructs
• Treatment-sensitive factorial structure
• Reactive self-report changes
• Reactivity to the experimental situation

**Threats to Construct Validity (cont'd)**

• Experimenter effects
• Novelty and disruption effects
• Compensatory equalization
• Compensatory rivalry
• Treatment diffusion

**Threats to Construct Validity (cont'd)**

• Reactivity to the experimental situation
– refers to research participants' motives and perceptions influencing their response to the DV
– motives and perceptions are influenced by the demand characteristics of the experiment
– primary motive: positive self-presentation
– conditions producing the positive self-presentation motive

**Threats to Construct Validity (cont'd)**

• Implications for research
– intertreatment interaction
– intratreatment interaction
• Experimenter effect
– experimenter's motive of supporting the study hypothesis can lead to bias

**Threats to Construct Validity (cont'd)**

• Ways the experimenter may bias the study
– experimenter attributes
– experimenter expectancies
• effect on experimenter: recording bias
• effect on research participant: mediated by expectancies
• handling in animal research

**Threats to Construct Validity (cont'd)**

• Ways the experimenter may bias the study
– experimenter expectancies
• nonverbal communication in human studies
• magnitude of expectancy effects: can exist in animal and human research and can be greater than the effect of the IV

**Internal Validity**

• Definition: accuracy of the inference that the independent variable caused the effect observed in the dependent variable
• Primary threat is confounding extraneous variables
• Eliminate the confounding influence of extraneous variables by holding their influence constant

**Threats to Internal Validity**

• History
– any event that occurs between the pretest and posttest that can produce the outcome
– differential history occurs in a multigroup design when the event has a differential impact on the groups
• Maturation
– internal changes of research participants that occur over time

**Threats to Internal Validity (cont'd)**

• Instrumentation
– changes in the measurement of the dependent variable
– e.g., if human observers change their measurement because they become bored or fatigued
• Testing
– occurs when the influence of taking the pretest affects the posttest

**Threats to Internal Validity (cont'd)**

• Regression artifact
– the tendency for extreme scores to be closer to average at posttest
– a potential problem if participants with extreme scores at pretest are selected for the study
• Attrition
– dropout of research participants
– a potential threat in two-group designs where differential attrition occurs

**Threats to Internal Validity (cont'd)**

• Selection
– a potential threat in a two-group design when different selection procedures are used
• Additive and interactive effects
– produced by the combined effect of two or more threats

**External Validity**

• Generalizing across people, settings, treatment variations, outcomes, and times
• A failure to generalize can result from several factors:
– lack of random selection
– chance variation
– failure to identify interactive effects of independent variables

**Types of External Validity**

• Population validity
– do results generalize from the sample to the target population?
• Ecological validity
– do results of the study generalize to different settings?
– a common criticism of laboratory experiments

**Types of External Validity (cont'd)**

• Temporal validity
– do results generalize across time?
• seasonal variation
• cyclical variation
• Treatment variation validity
– do results generalize across variations in treatment?

**Types of External Validity (cont'd)**

• Outcome validity
– do results generalize to other, related, dependent variables?

**Relationship between Internal and External Validity**

• The relationship between internal and external validity is often inverse
• Factors that increase our ability to establish cause and effect tend to decrease our ability to generalize
• External validity is established through replication

**Relationship between Internal and External Validity (cont'd)**

• Emphasis on internal or external validity depends on whether or not a causal relationship has been established

# Control Techniques in Experimental Research

**Goal of Experimentation**

• Identify the causal effect of the independent variable
– must have internal validity to do this
– internal validity requires control of confounding variables to eliminate differential influence

**Randomization**

• A control technique to equate groups of participants
– accomplished by ensuring that every member has an equal chance of being assigned to any group

**Randomization (cont'd)**

• Random assignment: randomly assigning participants to treatment groups
– provides maximum assurance that groups are equal
– equates groups because every person has an equal chance of being assigned to each group

**Randomization (cont'd)**

• Random assignment: randomly assigning participants to treatment groups
– accomplishes this by randomly distributing the extraneous variables over the treatment groups

**Matching**

• Use of any of a variety of techniques to equate participants in the treatment groups on specific variables
• Advantages of matching
– controls for the variables on which participants are matched
– increases the sensitivity of the experiment

**Matching Techniques**

• Holding variables constant
– disadvantages
• restricts the population size
• restricts generalization to the type of participants in the study
• Building the extraneous variable into the research design
– should be used only when you are interested in the effect of the extraneous variable

**Matching Techniques (cont'd)**

• Yoked control: matches participants on the basis of the temporal sequence of administering an event
• Matching by equating subjects
– precision control: match case by case
• disadvantages
– identifying the variables on which to match
– difficulty of matching increases as the number of variables on which to match increases
– some variables are difficult to match

**Matching Techniques (cont'd)**

• Matching by equating subjects
– frequency distribution control: match on the overall distribution of the selected variables
• disadvantage: combinations of variables may be mismatched

**Counterbalancing**

• Used to control order effects and carryover effects
• Counterbalancing procedures
– randomized counterbalancing
• sequence of conditions is randomly determined for each participant
– intrasubject counterbalancing
• participants take treatments in more than one order
• may not be feasible with long treatment sequences

**Counterbalancing (cont'd)**

• Counterbalancing procedures
– complete counterbalancing
• all possible sequences of treatment conditions are used
• participants are randomly assigned to sequences
• rarely used with more than 3 conditions because the number of possible sequences (n!) is too large
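The factorial growth in sequences is why complete counterbalancing becomes impractical. A sketch using hypothetical condition labels:

```python
from itertools import permutations
from math import factorial

conditions = ["A", "B", "C"]          # three hypothetical treatment conditions
sequences = list(permutations(conditions))

print(len(sequences), factorial(len(conditions)))  # → 6 6
print(sequences[0])                                # → ('A', 'B', 'C')
# With 7 conditions there would already be factorial(7) = 5040 sequences,
# far more than most studies have participants to assign.
```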

**Counterbalancing (cont'd)**

• Counterbalancing procedures
– incomplete counterbalancing
• most commonly used technique
• not all possible sequences are used
• criteria for the sequences enumerated:
– each treatment condition must appear an equal number of times in each ordinal position, and
– each treatment condition must precede and be followed by every other condition an equal number of times

**Counterbalancing (cont'd)**

• Counterbalancing procedures
– incomplete counterbalancing
• sequences determined by the form 1, 2, n, 3, (n−1), 4, etc.
• controls for linear sequencing effects but not for nonlinear carry-over effects
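This construction is often called a balanced Latin square. A sketch that builds the first sequence by the 1, 2, n, 3, (n−1), 4, ... pattern and derives each remaining sequence by adding 1 (mod n); the precedence balance holds when n is even:

```python
def balanced_latin_square(n):
    """Incomplete counterbalancing sequences via a balanced Latin square."""
    # First row: 1, 2, n, 3, (n-1), 4, ... alternating low and high conditions
    first, lo, hi, take_lo = [1], 2, n, True
    while len(first) < n:
        if take_lo:
            first.append(lo); lo += 1
        else:
            first.append(hi); hi -= 1
        take_lo = not take_lo
    # Each subsequent row adds 1 (mod n) to the previous row
    rows = [first]
    for _ in range(n - 1):
        rows.append([x % n + 1 for x in rows[-1]])
    return rows

for row in balanced_latin_square(4):
    print(row)
# → [1, 2, 4, 3]
#   [2, 3, 1, 4]
#   [3, 4, 2, 1]
#   [4, 1, 3, 2]
```

Note that only n sequences are needed instead of n!, each condition appears once in every ordinal position, and (for even n) every condition immediately precedes every other condition exactly once.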

**Control of Participant Effects**

• Double-blind placebo model
• Deception
• Control of participant interpretation
– retrospective verbal report
– concurrent verbal reports
• sacrifice groups
• concurrent probing
• think-aloud technique

**Control of Experimenter Effects**

• Control of recording errors

– use multiple data recorders
– have participants make responses on a computer

• Control of attribute errors

– use the same experimenter in all treatment conditions unless the treatment condition interacts with attributes

**Control of Experimenter Effects (cont'd)**

• Control of experimenter expectancies

– Blind technique: researcher is unaware of participant's treatment condition
– Partial blind technique: researcher is unaware of participant's treatment condition for a portion of the research
– Automation

# Experimental Research Design

**Introduction**

• Research design: the outline, plan, or strategy used to answer the research question
• Purpose of research design

– controls for unwanted variation
– suggests how data will be statistically analyzed

**Weak Experimental Research Designs**

• One-Group Posttest-Only Design

– rarely useful because there is no pretest or control group
– almost all threats to internal validity apply
– useful only when specific background information exists on the dependent variable

Figure 8.1 One-group posttest-only design.

Figure 8.2 One-group pretest-posttest design.

**Weak Experimental Research Designs (cont'd)**

• One-Group Pretest-Posttest Design
– most threats to internal validity exist
– to infer causality, one must identify and demonstrate that internal validity threats do not exist

**Weak Experimental Research Designs (cont'd)**

• Posttest-Only Design with Nonequivalent Groups
– no assurance of equality of groups because they were not randomly assigned
– may confound selection with the treatment effect

Figure 8.3 Posttest-only design with nonequivalent groups. The dashed line indicates nonequivalent groups.

**Strong Experimental Research Designs**

• Improved internal validity achieved by eliminating rival hypotheses
– with control techniques
– with a control group: a group that does not get the independent variable or gets some standard value
• serves as a source of comparison to the experimental group
• controls for rival hypotheses

**Between-Participants Designs**

• Posttest-Only Control-Group Design
– random assignment to groups creates equivalence
– use of a control group eliminates most threats to internal validity
– weaknesses of the design
• does not guarantee equivalence of groups, particularly with small sample sizes
• no pretest to assess equivalence

Figure 8.4 Posttest-only control-group design.

**Between-Participants Designs (cont'd)**

• Pretest-Posttest Control-Group Design
– pretest added to the posttest-only control-group design
– advantages of including a pretest
• can assess the effects of randomization
– ensure that groups are equivalent on the dependent variable prior to introduction of the experimental conditions
• can assess the effects of additional variables that may interact with the independent variable

Figure 8.6 Pretest-posttest control-group design.

Between-Participants Designs (cont'd) • Pretest-Posttest Control-Group Design – advantages of including a pretest • determine if ceiling effect has occurred • allows use of analysis of covariance to statistically control for pretest differences • allows researcher to assess the change in dependent variable from pretest to posttest • potential weakness – may not generalize to situations with no pretest .

Within-Participants Designs • Participants included in all conditions (also known as repeated measures designs) • Counterbalancing necessary to eliminate linear sequencing effects .
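The counterbalancing idea above can be sketched in code; this is an illustrative three-condition experiment, not an example from the slides:

```python
import itertools

# Illustrative sketch: complete counterbalancing for a hypothetical
# within-participants experiment with three conditions A, B, C.
conditions = ["A", "B", "C"]
orders = list(itertools.permutations(conditions))

print(len(orders))  # 3! = 6 possible condition orders
for order in orders:
    print(" -> ".join(order))

# Each condition appears in each serial position equally often,
# which is what cancels linear sequencing effects.
for position in range(len(conditions)):
    at_position = [o[position] for o in orders]
    assert all(at_position.count(c) == 2 for c in conditions)
```

With many conditions, full counterbalancing grows factorially, so researchers often use only a Latin-square subset of these orders.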

Within-Participants Designs (cont'd) • Within-participants posttest-only design – advantages • increased sensitivity because effects of individual differences are controlled • fewer research participants needed – disadvantages • difficult for participants • potential problem of differential carryover effects .

Factorial Designs • A design that includes two or more independent variables • A main effect exists when one independent variable has an effect on the dependent variable .

Figure 8.9 Factorial design with two independent variables.

Factorial Designs (cont'd) • An interaction occurs when two or more independent variables have an interactive effect on the dependent variable, i.e., when the effect of one independent variable depends on another • When displayed graphically, an interaction yields non-parallel lines

Figure 8.10 Tabular representation of data from experiment on driving performance.

Figure 8.11 Line graph of cell means.

Factorial Designs (cont'd) • Mixed model factorial designs – use a combination of within-participants and between-participants factors

Strengths and Weaknesses of Factorial Designs • Advantages of using factorial designs include – more than one independent variable allows for more precise hypotheses – control of extraneous variables by including as an independent variable – ability to determine the interactive effect of two or more independent variables .

Strengths and Weaknesses of Factorial Designs (cont'd) • Disadvantages of factorial designs – using more than two independent variables may be logistically cumbersome – higher-order interactions are difficult to interpret .

Choice/Construction of the Appropriate Experimental Design • Examination of prior research literature can guide choice of design • Many factors to consider – use of control group – number of comparison groups – pretest(s) – within-participants or between-participants – number of independent and dependent variables

Procedure for Conducting an Experiment .

Institutional Approval • Institutional Animal Care and Use Committee (IACUC) – reviews research protocols for studies using nonhuman animals – determines if proposed procedures are ethical .

**Institutional Approval (cont'd)
**

• Institutional Review Board (IRB)

– reviews research with human participants – primary concern is participant welfare

• has informed consent been obtained? • do potential benefits of study outweigh risks to participants?

Research Participants

• Animals

– albino variant of brown rat is most commonly used – the Animal Welfare Act regulates care and housing – the Guide for the Care and Use of Laboratory Animals provides a guide to using animals appropriately

**Research Participants (cont'd)
**

• Human participants

– convenience sample of college students often used in psychological research

• may not be representative of target population • some research requires special populations (e.g., school children)

**– using the Internet to recruit research participants is becoming more common
**

• advantage of providing access to a larger and more diverse sample than otherwise possible

**Research Participants (cont'd)
**

• Human participants

– using the Internet to recruit research participants is becoming more common

• sample may not be representative of target population

– important to report how participants were selected and assigned to research conditions

Sample Size

• How many research participants should be included in the research?

– practicality must be balanced with the increased power that accompanies a large sample

Power

• Definition – the probability of correctly rejecting the null hypothesis • Power of at least .80 is desired • As sample size increases, power increases • Factors that influence power: alpha level, sample size, and effect size
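The factors above can be sketched by simulation; the effect size, sample sizes, and alpha used here are made-up illustrative values, not from the slides:

```python
import random
import statistics
from math import sqrt

# Illustrative sketch: estimate power by simulating many experiments in
# which the null hypothesis is false (true effect size d), and counting
# how often a two-tailed test at alpha = .05 rejects it.
random.seed(1)

def simulated_power(d, n, reps=2000):
    critical_z = 1.96  # two-tailed critical value for alpha = .05
    rejections = 0
    for _ in range(reps):
        sample = [random.gauss(d, 1.0) for _ in range(n)]
        z = statistics.mean(sample) / (statistics.stdev(sample) / sqrt(n))
        if abs(z) > critical_z:
            rejections += 1
    return rejections / reps

low_n = simulated_power(d=0.5, n=20)
high_n = simulated_power(d=0.5, n=80)
print(low_n, high_n)  # power rises with sample size, holding d and alpha fixed
```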


Power (cont'd) • Apparatus and/or Instruments – How will independent variable be manipulated and how will dependent variable be measured? – depends on nature of research – may involve active participation by researcher or a particular type of apparatus (e.g., MRI) or computers

Power (cont'd) • Apparatus and/or Instruments – Behavioral Research Methods, Instruments, & Computers is a good source to identify research instruments

Power (cont'd) • Procedure – detailed description of how experiment will be conducted – important to allow for future replication • Scheduling of research participants – consider issues of availability and anticipate drop out rates with human participants .

Power (cont'd) • Consent to participate – informed consent necessary unless waived by IRB – consent form should include the following elements: • basic information about the study – where it will be conducted, duration • details about procedure of study including possible risks • any potential benefits that might be derived

Power (cont'd) • Consent to participate – consent form should include the following elements: • the type of compensation provided and penalty for early withdrawal • if using a questionnaire, a statement indicating that participant can refuse to answer any question without penalty • for sensitive topics (e.g., depression, abuse), extra information for individuals who need assistance

Power (cont'd) • Consent to participate – consent form should include the following elements: • participants must be informed that they can withdraw from study at any time without penalty • participants must be informed as to how the records and data obtained will be kept confidential .

Power (cont'd) • Instructions – should be precise but not overly complex – warm-up trials can assess understanding of instructions • Data collection

Debriefing or Postexperimental Interview • Functions – ethical • attempt to return participants to preexperimental state; particularly important if deception is used – educational – methodological • to establish effectiveness of independent variable or deception – sense of satisfaction

Debriefing or Postexperimental Interview (cont'd) • Functions – sense of satisfaction • feeling in participants that their participation has been beneficial to science and society .

Debriefing or Postexperimental Interview (cont'd) • How to debrief – face to face generally preferred – begin by asking if participant has questions – question participant to determine if all aspects of study were clear – if deception was used .

Debriefing or Postexperimental Interview (cont'd) • How to debrief – if deception was used • attempt to determine if participant discerned true nature of study • explain the purpose of using deception – ask participant not to reveal details of experiment to other participants .

Debriefing or Postexperimental Interview (cont'd) • Is debriefing effective? – ethical and methodological functions likely to be fulfilled if procedures are followed; educational less likely

Pilot Study • A brief run-through of the entire experiment with a few participants prior to the actual collection of data • Serves several purposes: – establishes clarity of instructions – provides evidence that independent variable is being manipulated adequately – familiarizes researcher with the procedure .

Quasi-Experimental Designs .

Introduction • Quasi-experimental design – an experimental-type design that does not eliminate all threats to internal validity • Causal inferences are made by ruling out rival hypotheses – by identification and study of the threats – by including design elements such as pretests or other control groups

Introduction (cont'd) • Causal inferences are made by ruling out rival hypotheses – by coherent pattern matching – making a complex prediction that few rival hypotheses can explain

Nonequivalent Comparison Group Design • The most common quasiexperimental design • Participants not randomly assigned to groups • Threats frequently reveal themselves in the outcome .

Figure 10.2 Nonequivalent comparison group design. (Note: The dashed line indicates the lack of random assignment.)


Outcomes with Rival Hypotheses • Outcome I: Increasing Treatment and Control Groups – greater increase in treatment condition – could be caused by a number of rival hypotheses • selection-maturation • selection-history .

Figure 10.3 Increasing treatment and control groups. (From “The design and conduct of quasi-experiments and true experiments in field settings” by T.D. Cook & D.T. Campbell, in Handbook of Industrial and Organizational Psychology, edited by M.D. Dunnette, 1976. Copyright © Rand McNally Publishing Company.)

Figure 10.4 First increasing treatment effect. (From “The design and conduct of quasi-experiments and true experiments in field settings” by T.D. Cook & D.T. Campbell, in Handbook of Industrial and Organizational Psychology, edited by M.D. Dunnette, 1976. Copyright © Rand McNally Publishing Company.)

Outcomes with Rival Hypotheses (cont'd) • Outcome II: First Increasing Treatment Effect – no change in control group; treatment condition starts higher and increases – selection-history is a plausible alternative explanation

Outcomes with Rival Hypotheses (cont'd) • Outcome III: Second Increasing Treatment Effect – no change in control group; treatment condition starts lower and increases

Figure 10.5 Second increasing treatment effect. (From “The design and conduct of quasi-experiments and true experiments in field settings” by T.D. Cook and D.T. Campbell, in Handbook of Industrial and Organizational Psychology, edited by M.D. Dunnette, 1976. Copyright © Rand McNally Publishing Company.)

Outcomes with Rival Hypotheses (cont'd) • Outcome III: Second Increasing Treatment Effect – selection-regression is possible rival explanation • Outcome IV: Crossover Effect – rival hypotheses unlikely with this type of result .

Figure 10.6 Crossover effect. (From “The design and conduct of quasi-experiments and true experiments in field settings” by T.D. Cook and D.T. Campbell, in Handbook of Industrial and Organizational Psychology, edited by M.D. Dunnette, 1976. Copyright © Rand McNally Publishing Company.)

Ruling out Threats to the Nonequivalent Comparison Group • Matching – selection-regression effects may occur when using extreme groups • Statistical control techniques .

Causal Inference from Nonequivalent Comparison Group Design • To increase internal validity – Do not let participants self-select into groups – Minimize pretest differences in groups .

Interrupted Time-Series Design • A quasi-experimental design in which a treatment effect is assessed by comparing the pattern of pre- and posttest scores for a single group of research participants

Figure 10.8 Interrupted time-series design. .

**Interrupted Time-Series Design (cont'd)
**

• Use of multiple pretest and posttest measurements demonstrates reliability of effect; improvement over one-group pretest-posttest design • Most appropriate statistical test is autoregressive integrated moving average (ARIMA)

**Interrupted Time-Series Design (cont'd)
**

• Primary weakness – no control of history effects

**Regression Discontinuity Design
**

• Used to determine if the special treatment some individuals receive has any effect • Characteristics of the design

– all individuals are pretested – individuals who score above some cutoff score receive the treatment – all individuals are posttested – discontinuity in the regression line indicates a treatment effect

Figure 10.14 Regression discontinuity experiment with no treatment effect. (From Shadish, W.R., Cook, T.D., & Campbell, D.T., 2002, Experimental and quasi-experimental designs for generalized causal inference. Copyright 2002, Houghton Mifflin Co. Used with permission.)

Figure 10.15 Regression discontinuity experiment with an effective treatment. (From Shadish, W.R., Cook, T.D., & Campbell, D.T., 2002, Experimental and quasi-experimental designs for generalized causal inference. Copyright 2002, Houghton Mifflin Co. Used with permission.)

Requirements of the Regression Discontinuity Design

• Assignment must be based on the cutoff score • Assignment cannot be a nominal variable such as gender, or drug user versus nonuser • Cutoff score should be at the mean • Experimenter should control group assignment

Requirements of the Regression Discontinuity Design (cont'd) • Relationship (linear, curvilinear, etc.) should be known • Participants must be from the same population

Single-Case Research Designs .

Introduction • Single-case designs use only one participant or one group of participants • Research in psychology began with the intensive study of single organisms – Pavlov – Ebbinghaus – Fisher's introduction of ANOVA shifted the field toward group designs – Skinner continued single-case research

Introduction (cont'd) • Research in psychology began with the intensive study of single organisms – single-case designs became more acceptable with the growth in research in behavior therapy .

Single-Case Designs • Are time-series designs – but time-series designs do not eliminate the history threat, so they must be altered • Assessment of a treatment effect is based on the assumption that the pattern of pretreatment responses would continue in the absence of the treatment • Simplest type of single-case design is the ABA design

ABA and ABAB Designs • Baseline is behavior without treatment • Demonstration of treatment effectiveness requires return to baseline • May not be desirable to end on baseline and so an ABAB design may be used • Multiple-baseline design can address failure to reverse due to carryover .

Figure 11.1 ABA design.

ABA and ABAB Designs (cont'd) • Understand the distinction between withdrawal of treatment and reversal .

Interaction Design • Tests the combined effects of two treatments • Must use both sequences to test the combined influence over the effect of just one variable .

Figure 11.4 Single-participant interaction design.

Interaction Design (cont'd) • Disadvantage—interaction effect can be demonstrated only if each variable does not cause a maximum increment in performance .

Multiple Baseline Design • Treatment condition is successively administered to several participants, outcomes, or settings • Treatment effect demonstrated by a change in behavior only when treatment is given • Requires independence of behaviors to demonstrate an effect

Figure 11.5 Multiple-baseline design.

Changing-Criterion Design • Participant's behavior is gradually shaped by changing the criterion for success • Factors to consider in using this design – length of treatment – long enough for the behavior to stabilize – size of criterion change – large enough to notice a change – number of treatment phases – at least two

Figure 11.7 Changing-criterion design. T1 through T4 refer to four different phases of the experiment.

Methodological Considerations • Baseline – must be stable – absence of trend, or a trend in the direction opposite of what is expected from the treatment – little variability • Change only one variable at a time

Methodological Considerations (cont'd) • Length of phases – possibility of extraneous variables creeping in with long phases – carryover effects may require short phases – cyclic variations may need to be incorporated into all phases

Criteria for Evaluating Change • Experimental criterion – replication – nonoverlap of treatment and baseline phases .

Criteria for Evaluating Change (cont'd) • Therapeutic criterion – clinical significance – researchers often use social validation: does the treatment produce a change in the client's daily functioning? • social comparison – compare behavior with nondeviant peers • subject evaluation – do others who interact with the client see a change?

Survey Research .

Survey Research • Nonexperimental method using interviews or questionnaires to assess attitudes, opinions, activities, or beliefs • Surveys often used to assess changes in attitudes over time, to test theoretical models, and to describe and predict behavior

Survey Research (cont'd) • To ensure high external validity, random samples should be used

Steps in Conducting Survey Research • Plan and design the survey • Construct and refine the survey instrument • Collect the survey data • Enter and “clean” the data • Analyze & interpret the data .

Cross-sectional and Longitudinal Designs • Cross-sectional studies involve collecting data in a single, brief time period • Longitudinal studies involve collecting data at more than one point in time – panel study – type of longitudinal design in which the same individuals are surveyed multiple times over time – trend study – the same survey questions are administered to a new sample at each measurement point

Methods of Data Collection • Face-to-face or personal interview – advantages include ability to clear up ambiguities and higher completion rate – disadvantage – expense • Telephone interview – less expensive than face-to-face and yields comparable data

Methods of Data Collection (cont'd) • Mail questionnaires – low cost but low return rate • Group-administered questionnaire .

Methods of Data Collection (cont'd) • Electronic survey – e-mail & Web-based – advantages of electronic surveys • cost • instant access to a wide audience • download to spreadsheet • flexible in layout, especially Web-based surveys

Methods of Data Collection (cont'd) • Electronic survey – disadvantages of electronic surveys • privacy and anonymity • sample may not be representative of population .

Constructing and Refining a Survey Instrument • Principle 1. Write Items to Match the Research Objectives – conduct literature review – write items that will yield reliable and valid data .

Constructing and Refining a Survey Instrument (cont'd) • Principle 2. Write Items That Are Appropriate for the Respondents to be Surveyed – use easy-to-understand language based on reading level, culture, etc.

Constructing and Refining a Survey Instrument (cont'd) • Principle 3. Write Short, Simple Questions • Principle 4. Avoid Loaded or Leading Questions – a loaded term is one that produces an emotional response – a leading question suggests to the respondent how they should respond

Constructing and Refining a Survey Instrument (cont'd) • Principle 5. Avoid Double-Barreled Questions – double-barreled questions ask two or more things in a single question • Principle 6. Avoid Double Negatives

Constructing and Refining a Survey Instrument (cont'd) • Principle 7. Determine Whether Closed-Ended or Open-Ended Questions are Needed – open-ended better if researcher is unsure what respondent is thinking or variable is ill-defined – closed-ended are easier to code and provide more standardized data .

Constructing and Refining a Survey Instrument (cont'd) • Principle 7. Determine Whether Closed-Ended or Open-Ended Questions are Needed – mixed-question format uses a combination of both open and closedended questions .

Constructing and Refining a Survey Instrument (cont'd) • Principle 8. Construct Mutually Exclusive and Exhaustive Categories – mutually exclusive means that the categories do not overlap – exhaustive items include all possible responses .

Constructing and Refining a Survey Instrument (cont'd) • Principle 9. Consider the Different Types of Closed-Ended Response Categories – rating scales • multichotomous (more than two choices) usually preferred – ability to measure direction and strength of attitude • distance between each descriptor should be the same .

Constructing and Refining a Survey Instrument (cont'd) • Principle 9. Consider the Different Types of Closed-Ended Response Categories – binary forced choice • participant chooses one of pair of attitudes • typically not recommended – rankings – checklists .

Constructing and Refining a Survey Instrument (cont'd) • Principle 10. Use Multiple Items to Measure Complex or Abstract Constructs – semantic differential – scaling method in which participants rate an object on a series of bipolar rating scales – Likert scaling .

Constructing and Refining a Survey Instrument (cont'd) • Principle 11. Make Sure the Questionnaire is Easy to Use From Beginning to End – ordering of questions • positive and interesting questions first • demographic questions last – limit the number of contingency questions – questionnaire length .

Constructing and Refining a Survey Instrument (cont'd) • Principle 11. Make Sure the Questionnaire is Easy to Use From Beginning to End – response bias • social desirability bias occurs when participants respond in a way to make themselves look good – avoid by ensuring anonymity

Constructing and Refining a Survey Instrument (cont'd) • Principle 11. Make Sure the Questionnaire is Easy to Use From Beginning to End – response bias • response set – tendency to respond in a specific way – use even number of response categories on rating scale – include multiple question types

Constructing and Refining a Survey Instrument (cont'd) • Principle 12. Pilot Test the Questionnaire Until It Is Perfected

Selecting Your Survey Sample From the Population • If primary goal is to explore relationships between variables rather than generalization, a convenience sample is acceptable • If generalization to a population is needed, a random sampling method should be used

Qualitative and Mixed Methods Research .


Table 13.2 (continued) Twelve Major Characteristics of Qualitative Research .

Research Validity in Qualitative Research • Validity of qualitative research is often criticized – often because of perceived researcher bias – reflexivity and negative case sampling are techniques that should be used to avoid researcher bias

Research Validity in Qualitative Research (cont'd) • Descriptive validity – the factual accuracy of the researcher's account – using multiple investigators (investigator triangulation) helps to ensure descriptive validity

Research Validity in Qualitative Research (cont'd) • Interpretive validity – extent to which the researcher has accurately portrayed the viewpoints of participants – participant feedback and low-inference descriptors .

Research Validity in Qualitative Research (cont'd) • Theoretical validity – degree to which theoretical explanation fits data – strategies for achieving • • • • extended fieldwork theory triangulation pattern matching peer review .

Research Validity in Qualitative Research (cont'd) • Internal validity – is observed relationship causal? – interested in causality in a particular context (ideographic causation) – strategies to achieve • researcher-as-detective • methods triangulation • data triangulation .

Research Validity in Qualitative Research (cont'd) • External validity – the ability to generalize the findings to other people, settings, and times • naturalistic generalization • theoretical generalization

Four Major Qualitative Research Methods • Phenomenology – description of conscious experience of a phenomenon – primary method of data collection – in-depth interviews • extract phrases and statements that pertain to the phenomenon • interpret and give meaning to phrases and statements • write narrative describing the phenomenon

Four Major Qualitative Research Methods (cont'd) • Ethnography – description and interpretation of the culture of a group of people • cultures can be micro or macro – primary data collection method – participant observation – requires entry and acceptance by group – must guard against reactive effect – collect information by observing and listening

Four Major Qualitative Research Methods (cont'd) • Ethnography – data analysis • Identify themes and patterns of behavior – write narrative report .

Four Major Qualitative Research Methods (cont'd) • Case Study Research – intensive description and analysis of a person, organization, or event – types of case studies • intrinsic case study • instrumental case study • collective case study

Four Major Qualitative Research Methods (cont'd) • Grounded Theory – methodology for generating and developing a theory that is grounded in data – key characteristics of good grounded theory • theory should fit the real-world data • theory should be clear and understandable • theory should have generality • theory should be able to be applied to produce results

Four Major Qualitative Research Methods (cont'd) • Grounded Theory – most common methods of data collection are interviews and observations – data analysis includes • open coding • axial coding • selective coding .

Mixed Research • The research approach in which both quantitative and qualitative methods are used • Questions to be answered when using a mixed design – Should you primarily use one methodology or treat them equally? – Should phases of study be conducted concurrently or sequentially? .

Figure 13.4 The mixed methods design matrix.

Descriptive Statistics .

Descriptive Statistics • The goal of descriptive statistics is to describe sample data • Can be contrasted with inferential statistics where the goal is to make inferences about populations from sample data .

Frequency Distributions • A listing of values in a data set along with their frequency .


Graphic Representations of Data • Bar graph – used with categorical variables – height of bar represents frequency of category – bars should not touch • Histogram – used with quantitative variables – no space between bars .

Figure 14.2 A bar graph of undergraduate major.

Figure 14.3 Histogram of starting salary. .

Graphic Representations of Data (cont'd) • Line graphs – also used with quantitative variables – particularly useful for interpreting interactions • Scatterplots – depicts relationship between two quantitative variables .

Figure 14.5 Line graph of results from pretest–posttest control group design studying effectiveness of social skills treatment.

Figure 14.6 A scatterplot of starting salary by college GPA.

Measures of Central Tendency • Provide a single value that is typical of the distribution of scores – mode • most frequently occurring value • least useful measure of central tendency – median • middle score when numbers are in ascending or descending order .

Measures of Central Tendency (cont'd) • Provide a single value that is typical of the distribution of scores – Mean • arithmetic average • most commonly used measure of central tendency .
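The three measures can be sketched on a small made-up data set (reaction times in ms, values invented for illustration):

```python
import statistics

# Illustrative sketch: mode, median, and mean on made-up reaction times.
scores = [450, 460, 460, 470, 480, 500, 700]

print(statistics.mode(scores))    # 460 - the most frequent value
print(statistics.median(scores))  # 470 - the middle score
print(statistics.mean(scores))    # about 502.86 - pulled upward by the 700 outlier
```

Note how the mean is sensitive to the extreme score of 700 while the median is not.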

Measures of Variability • Provides a numerical value indicating the amount of variation in a group of scores – range • highest score minus lowest score • rarely used as a measure of variability – variance • average deviation of the data values from their mean in squared units .

Measures of Variability (cont'd) • Provides a numerical value indicating the amount of variation in a group of scores – standard deviation • square root of variance • roughly the average amount that individual scores deviate from the mean .
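The same measures can be computed on made-up scores; the population (n) versions are used here to match the definitions above:

```python
import statistics

# Illustrative sketch: range, population variance, and standard
# deviation on a made-up set of scores.
scores = [2, 4, 4, 4, 5, 5, 7, 9]

data_range = max(scores) - min(scores)   # 9 - 2 = 7
variance = statistics.pvariance(scores)  # average squared deviation from the mean
sd = statistics.pstdev(scores)           # square root of the variance

print(data_range, variance, sd)  # 7 4.0 2.0
```

`statistics.variance` and `statistics.stdev` give the sample (n - 1) versions instead.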

Measures of Variability (cont'd) • Provides a numerical value indicating the amount of variation in a group of scores – standard deviation and the normal curve – z-scores • standardized values transformed from raw scores • mean of z-distribution is always zero, standard deviation always 1

Measures of Variability (cont'd) • Provides a numerical value indicating the amount of variation in a group of scores – z-scores • indicates how far above or below a raw score is from its mean in standard deviation units, e.g., a z-score of +1.00 indicates a raw score that is one standard deviation unit above the mean • in a normal distribution, the proportion of scores occurring between any two points can be determined
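The z transformation can be sketched on the same kind of made-up scores:

```python
import statistics

# Illustrative sketch: converting made-up raw scores to z-scores.
scores = [2, 4, 4, 4, 5, 5, 7, 9]
mean = statistics.mean(scores)  # 5
sd = statistics.pstdev(scores)  # 2

z_scores = [(x - mean) / sd for x in scores]
print(z_scores[-1])  # (9 - 5) / 2 = 2.0 -> two SD units above the mean

# The transformed distribution always has mean 0 and standard deviation 1.
print(statistics.mean(z_scores), statistics.pstdev(z_scores))
```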

Figure 14.8 Areas under the normal distribution.

Examining Relationships Among Variables • Unstandardized difference between means – a comparison of mean differences between levels of a categorical independent variable • Standardized difference between means – effect size • Cohen’s d is a common measure of effect size – mean difference is divided by standard deviation .
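Cohen's d can be sketched on made-up treatment/control scores; the pooled standard deviation is used here as the standardizer:

```python
import statistics
from math import sqrt

# Illustrative sketch: Cohen's d for two made-up groups.
treatment = [14, 15, 16, 17, 18]
control = [10, 11, 12, 13, 14]

m1, m2 = statistics.mean(treatment), statistics.mean(control)
v1, v2 = statistics.variance(treatment), statistics.variance(control)
n1, n2 = len(treatment), len(control)
pooled_sd = sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))

d = (m1 - m2) / pooled_sd
print(round(d, 2))  # about 2.53: the means differ by roughly 2.5 standard deviations
```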

Examining Relationships Among Variables (cont'd) • Correlation Coefficient – numerical representation of the strength and direction of relationship between two variables • value ranges from +1.0 to -1.0; absolute value indicates strength of relationship; sign indicates direction • positive correlation indicates that the two variables vary together in the same direction; negative correlation means that they move in opposite directions

Examining Relationships Among Variables (cont'd) • Correlation Coefficient – Pearson correlation (r) used with two quantitative variables; only appropriate if data are related in a linear fashion – partial correlation is a technique that involves examining correlation after controlling for one or more variables – a scatterplot can be used to judge the strength and direction of a correlation
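Pearson r can be sketched by hand on made-up data; the GPA/salary labels are purely illustrative:

```python
import statistics
from math import sqrt

# Illustrative sketch: Pearson r computed from deviation scores.
gpa = [2.0, 2.5, 3.0, 3.5, 4.0]
salary = [30, 35, 34, 42, 47]  # in $1000s

mean_x, mean_y = statistics.mean(gpa), statistics.mean(salary)
cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(gpa, salary))
ss_x = sum((x - mean_x) ** 2 for x in gpa)
ss_y = sum((y - mean_y) ** 2 for y in salary)

r = cov / sqrt(ss_x * ss_y)
print(round(r, 3))  # about 0.953: a strong positive, roughly linear relationship
```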

Figure 14.10 Correlations of different strengths and directions.

Regression Analysis • Statistical technique designed to predict dependent variable based on one or more predictor values – simple regression involves the use of one independent or predictor variable – multiple regression involves two or more independent or predictor variables – prediction is made using the regression equation .

Regression Analysis (cont'd) • Statistical technique designed to predict dependent variable based on one or more predictor values – prediction is made using the regression equation • y-intercept – point where regression line crosses y-axis • regression coefficient – predicted change in the dependent variable (Y) given a one unit change in the independent variable (X)

Regression Analysis (cont'd) • Statistical technique designed to predict dependent variable based on one or more predictor values – prediction is made using the regression equation • partial regression coefficient – predicted change in the dependent variable given a one unit change in a predictor, with the other predictors in the equation held constant
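Simple regression can be sketched by hand on the same made-up GPA/salary data; the fitted equation is Y' = a + bX:

```python
import statistics

# Illustrative sketch: least-squares slope and intercept for one predictor.
gpa = [2.0, 2.5, 3.0, 3.5, 4.0]
salary = [30, 35, 34, 42, 47]  # in $1000s

mean_x, mean_y = statistics.mean(gpa), statistics.mean(salary)
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(gpa, salary)) / sum(
    (x - mean_x) ** 2 for x in gpa
)                                    # regression coefficient b
intercept = mean_y - slope * mean_x  # y-intercept a

# A one-unit change in X changes the predicted Y by the slope.
predicted = intercept + slope * 3.2
print(round(slope, 2), round(intercept, 2), round(predicted, 2))
```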

Contingency Tables • Table used to examine relationship between two categorical variables • Cells may contain frequencies or percentages .
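Building such a table can be sketched from made-up raw records of two categorical variables:

```python
from collections import Counter

# Illustrative sketch: a contingency table from raw (gender, major) records.
records = [
    ("female", "psych"), ("female", "psych"), ("female", "bio"),
    ("male", "psych"), ("male", "bio"), ("male", "bio"), ("male", "bio"),
]

table = Counter(records)
genders = sorted({g for g, _ in records})
majors = sorted({m for _, m in records})

# Cells can hold frequencies or percentages of the total.
for g in genders:
    row = [f"{m}: {table[(g, m)]} ({100 * table[(g, m)] / len(records):.0f}%)"
           for m in majors]
    print(g, row)
```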


Inferential Statistics .

Inferential Statistics • Inferential statistics involve using sample data to make inferences about populations – a statistic is a numerical index based on sample data – a parameter is a numerical characteristic of a population .

Sampling distributions • A sampling distribution is a theoretical distribution of values of a statistic consisting of every possible sample of a given size from a population – standard error – the standard deviation of a sampling distribution – test statistic – statistic that follows a known sampling distribution and is used in significance testing .
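A sampling distribution can be approximated by simulation; the population mean, sigma, and n below are made-up illustrative values:

```python
import random
import statistics
from math import sqrt

# Illustrative sketch: approximate the sampling distribution of the mean
# by drawing many samples, then compare its spread (the standard error)
# to the theoretical value sigma / sqrt(n).
random.seed(42)
sigma, n = 10.0, 25

sample_means = [
    statistics.mean(random.gauss(100, sigma) for _ in range(n))
    for _ in range(5000)
]

empirical_se = statistics.pstdev(sample_means)
theoretical_se = sigma / sqrt(n)  # 10 / 5 = 2.0
print(round(empirical_se, 2), theoretical_se)
```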

Estimation • A branch of inferential statistics involved in estimating population parameters – point estimation – use value of sample statistic as estimate of the value of population parameter (e.g., sample mean to estimate population mean)

Estimation (cont'd) • A branch of inferential statistics involved in estimating population parameters – interval estimation • confidence interval – includes a range of numbers that will contain the population parameter with a certain degree of certainty, e.g., 95% confidence intervals include a range of values that will contain the population parameter 95% of the time
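A confidence interval for a mean can be sketched on made-up data, using the normal critical value 1.96 as a reasonable approximation when n is not small:

```python
import statistics
from math import sqrt

# Illustrative sketch: a 95% confidence interval for a population mean.
sample = [98, 102, 100, 97, 103, 101, 99, 100, 104, 96]

mean = statistics.mean(sample)                     # point estimate
se = statistics.stdev(sample) / sqrt(len(sample))  # estimated standard error
lower, upper = mean - 1.96 * se, mean + 1.96 * se

print(f"point estimate: {mean}")
print(f"95% CI: ({lower:.2f}, {upper:.2f})")
```

For small samples, the t critical value for the appropriate df would replace 1.96.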

Hypothesis Testing • Branch of inferential statistics used when testing the predicted relationship between variables – null hypothesis – a statement regarding the population parameter, typically that no relationship exists between the independent and dependent variables

Hypothesis Testing (cont'd) • Branch of inferential statistics used when testing the predicted relationship between variables – alternative hypothesis – states that there is a relationship between independent and dependent variables .

Hypothesis Testing (cont'd) • Steps of hypothesis testing – state the null and alternative hypotheses – begin by assuming that the null hypothesis is true (that the independent variable has no effect) – determine the standard for rejecting the null hypothesis (i.e., identify the level of significance)

Hypothesis Testing (cont'd) • Steps of hypothesis testing – calculate the test statistic (e.g., a t test) – make a decision – if the result of the test statistic is unlikely to occur by chance (that is, if the p value is less than the alpha level), reject the null hypothesis – calculate effect size indicators to determine practical significance
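The steps above can be sketched with a one-sample t test computed from first principles (hypothetical scores; H0: μ = 100, α = .05; the critical value 2.262 is the standard two-tailed table value for df = 9).

```python
import math
import statistics

# Hypothetical sample; H0: population mean = 100, H1: mean != 100.
sample = [104, 108, 99, 111, 103, 107, 101, 110, 105, 102]
n = len(sample)
mean = statistics.fmean(sample)
sd = statistics.stdev(sample)
se = sd / math.sqrt(n)

# Step: calculate the test statistic.
t = (mean - 100) / se

# Step: make a decision against the rejection standard (alpha = .05).
critical = 2.262   # two-tailed t critical value, df = 9 (table value)
reject_null = abs(t) > critical

# Step: effect size (Cohen's d) for practical significance.
d = (mean - 100) / sd
```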

Hypothesis Testing (cont'd) • Directional alternative hypotheses – predict the direction of an effect – increase statistical power – cannot reject the null if the effect is opposite to the prediction
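The power gain is easy to see with a z statistic (value chosen for illustration): the one-tailed p value is half the two-tailed p value, so a result can be significant under a directional hypothesis but not under a non-directional one.

```python
from statistics import NormalDist

# Illustrative z statistic in the predicted direction.
z = 1.80

# Non-directional H1 (mean != mu0): both tails count.
two_tailed_p = 2 * (1 - NormalDist().cdf(abs(z)))

# Directional H1 (mean > mu0): only the predicted tail counts.
one_tailed_p = 1 - NormalDist().cdf(z)
```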


**Hypothesis Testing (cont'd)**

• Hypothesis testing errors

– Type I error occurs when the researcher rejects a true null hypothesis – Type II error occurs when the researcher fails to reject a false null hypothesis

**Hypothesis Testing (cont'd)**

• Hypothesis testing errors

– reducing the alpha level reduces the risk of a Type I error but increases the risk of a Type II error – researchers are usually more concerned about Type I errors
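A simulation shows what "alpha is the Type I error risk" means in the long run: when the null hypothesis is actually true, about 5% of tests at α = .05 still reject it (normal population and known σ are assumed for simplicity).

```python
import math
import random
import statistics

random.seed(1)  # reproducible simulation

# Repeated z tests when H0 is true (mu = 0, sigma = 1 known).
alpha_z = 1.96       # two-tailed critical z for alpha = .05
n, trials = 30, 5000
rejections = 0
for _ in range(trials):
    sample = [random.gauss(0, 1) for _ in range(n)]
    z = statistics.fmean(sample) / (1 / math.sqrt(n))
    if abs(z) > alpha_z:
        rejections += 1   # a Type I error: H0 is true but rejected

type_i_rate = rejections / trials   # should be near alpha = .05
```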

**Hypothesis Testing in Practice**

• The basic steps of hypothesis testing are used with a number of different research designs and statistical techniques • The t Test for Correlation Coefficients

– used to determine whether an observed correlation coefficient is statistically significant – null hypothesis assumes that the correlation = 0
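The test converts r into a t statistic via the standard formula t = r·√(n − 2)/√(1 − r²) with df = n − 2; the sketch below uses hypothetical values (r = .60, n = 30; 2.048 is the standard two-tailed table value for df = 28, α = .05).

```python
import math

# Hypothetical observed correlation and sample size.
r, n = 0.60, 30

# t test for a correlation coefficient; H0: population correlation = 0.
t = r * math.sqrt(n - 2) / math.sqrt(1 - r ** 2)

critical = 2.048   # two-tailed t critical value, df = 28, alpha = .05
significant = abs(t) > critical
```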

**Hypothesis Testing in Practice (cont'd)**

• One-way Analysis of Variance (ANOVA)

– compares two or more group means – null assumes that all population means are equal; alternative is that at least two are different

**Hypothesis Testing in Practice (cont'd)**

• One-way Analysis of Variance (ANOVA)

– if null is rejected, post-hoc tests needed to determine which groups are different (if more than two groups are compared)

• post-hoc tests allow multiple comparisons without inflating the risk of a Type I error • common post-hoc tests include Tukey’s HSD, Newman-Keuls, and Bonferroni
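To make the ANOVA null hypothesis concrete, here is a from-scratch computation of the F statistic on hypothetical scores for three groups (F = MS between / MS within); post-hoc tests such as Tukey's HSD would follow only if this F were significant.

```python
import statistics

# Hypothetical scores for three independent groups.
groups = [[4, 5, 6, 5], [7, 8, 9, 8], [4, 6, 5, 5]]
k = len(groups)
n_total = sum(len(g) for g in groups)
grand_mean = statistics.fmean(x for g in groups for x in g)

# Between-groups SS: weighted squared deviations of group means.
ss_between = sum(len(g) * (statistics.fmean(g) - grand_mean) ** 2
                 for g in groups)

# Within-groups SS: squared deviations from each group's own mean.
ss_within = sum(sum((x - statistics.fmean(g)) ** 2 for x in g)
                for g in groups)

df_between, df_within = k - 1, n_total - k
F = (ss_between / df_between) / (ss_within / df_within)
```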

Hypothesis Testing in Practice (cont'd) • Analysis of Covariance (ANCOVA) – extension of ANOVA – includes a quantitative independent variable as a “covariate” – increased power over ANOVA

Hypothesis Testing in Practice (cont'd) • Two-way ANOVA – includes two categorical independent variables – tests three null hypotheses • main effects for each independent variable • interaction • a significant interaction generally takes precedence over main effects

Hypothesis Testing in Practice (cont'd) • One-Way Repeated Measures ANOVA – similar to one-way ANOVA but the independent variable is within participants • The t test for Regression Coefficients – tests the significance of regression coefficients obtained in regression analysis – semi-partial correlation squared (sr2) – the amount of variance in the dependent variable explained by a single predictor

Hypothesis Testing in Practice (cont'd) • Chi-Square Test for Contingency Tables – tests the relationship observed in a contingency table – two categorical variables – null hypothesis states that there is no relationship between the two variables
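The test compares observed cell frequencies with those expected under independence: χ² = Σ (O − E)²/E. The sketch below uses hypothetical frequencies in a 2×2 table (3.841 is the standard critical value for df = 1, α = .05).

```python
# Hypothetical 2x2 contingency table of observed frequencies.
observed = [[30, 10],
            [20, 40]]

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
grand = sum(row_totals)

# Chi-square statistic: sum of (O - E)^2 / E over all cells,
# where E is the expected frequency under independence.
chi2 = 0.0
for i, row in enumerate(observed):
    for j, o in enumerate(row):
        e = row_totals[i] * col_totals[j] / grand
        chi2 += (o - e) ** 2 / e

df = (len(observed) - 1) * (len(observed[0]) - 1)
critical = 3.841   # chi-square critical value, df = 1, alpha = .05
significant = chi2 > critical
```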

Hypothesis Testing and Research Design • The following tables list the appropriate statistical analyses to be used with the research designs discussed in the text


Preparing the Research Report for Presentation or Publication

The APA Format • On every page put the page number and a header in the upper right-hand corner of the manuscript • Page 1 – Running head – Title – Author(s) name(s) and institutional affiliations

The APA Format (cont'd) • Page 2 – Abstract, written as a single paragraph

The APA Format (cont'd) • Next series of continuous pages – Introduction – Method • Participants • Apparatus or instruments • Procedure • other relevant sections such as the study design, therapeutic technique, etc.

The APA Format (cont'd) • Next series of continuous pages – Results – Discussion – References – starts a new page – Footnotes – on a separate page – Tables – Figures

Preparation of the Research Report • Writing style – orderly presentation of ideas – smoothness and economy of expression – avoid plagiarism!

Preparation of the Research Report (cont'd) • Language – specificity – choose words that are accurate and free of bias when referring to participants – labels – respect the preferences of participants – participation – use the active voice and descriptive terms such as participant

Preparation of the Research Report (cont'd) • Language – specific issues • gender – avoid ambiguity in sex identity • sexual orientation – avoid labeling with an offensive tone • racial and ethnic identity – ask participants about their preferred designation • disabilities – avoid language that equates a person with their disability • age – avoid open-ended definitions

Preparation of the Research Report (cont'd) • Editorial style – italics – use infrequently – abbreviations – use sparingly

Preparation of the Research Report (cont'd) • Editorial style – headings • Level 1 – centered, boldface, in upper and lowercase letters • Level 2 – flush left, boldface, in upper and lowercase letters • Level 3 – indented, boldface, lowercase paragraph heading ending with a period • Level 4 – indented, boldface, italicized, lowercase paragraph heading ending with a period

Preparation of the Research Report (cont'd) • Editorial style – headings • Level 5 – indented, italicized, lowercase paragraph heading ending with a period – quotations • fewer than 40 words – insert into the text • 40 words or more – freestanding block without quotation marks

Preparation of the Research Report (cont'd) • Editorial style – numbers – use words for numbers less than 10 – physical measurements – use metric – presentation of statistical results – provide enough information to allow the reader to corroborate the results – tables – use only when they can convey and summarize the data more economically and clearly than can a discussion

Preparation of the Research Report (cont'd) • Editorial style – figures – use when they convey a concept more effectively than can a table – figure captions – a brief description of the content – figure preparation – should be computer generated with professional graphics software – reference citations • use the author-date citation method

Preparation of the Research Report (cont'd) • Editorial style – reference citations • e.g., “McConnell (2006) examined the relationship…” or “Past research has demonstrated… (McConnell, 2006)” – reference list • include the name of the author, year of publication, title, publishing data, and any other information necessary to identify the reference • see samples in the text

Preparation of the Research Report (cont'd) • Editorial style – preparation of manuscript • Times New Roman typeface, 12-point font • double-space the entire manuscript • margins should be at least 1 inch

Preparation of the Research Report (cont'd) • Editorial style – ordering of manuscript pages • title page • abstract • text of the manuscript • references • footnotes • tables • figures

Preparation of the Research Report (cont'd) • For further information • http://www.apastyle.org/

Submission of the Research Report for Publication • Send to the editor of the selected journal • Include a cover letter stating that you are submitting the manuscript • The editor will send the manuscript out for review

Submission of the Research Report for Publication (cont'd) • In several months you will have the manuscript returned with the reviewers’ comments and the editor’s decision – reject – accept – revise and resubmit

Presenting Research Results at Professional Conferences • Oral presentation – include the following • what was studied • why it was studied • how it was studied • what was found and any implications

Presenting Research Results at Professional Conferences (cont'd) • Poster presentation – prepare a visual presentation that is large enough to be read at a distance of about 10 feet

THANK YOU!