Research Design Choices and Causal Inferences

Professor Alexander Settles

Approaches and strategies of research design
- Quantitative Strategies
  - Exploratory Studies
    - In-depth interviews
    - Focus group studies
  - Descriptive Studies
    - Survey research
    - Relationship studies
  - Causal Studies
    - Experimental
    - Comparative

Approaches and strategies of research design
- Qualitative Strategies
  - Explanatory Studies
    - Case studies
    - Ethnographic studies
  - Interpretive Studies
    - Grounded theory
  - Critical Studies
    - Action research

Gathering Qualitative Data
- Observation studies
- Participation studies
- Interviewing studies
- Archival studies
- Media analysis

What Tools Are Used in Designing Research?

Language of Research: clear conceptualization of concepts
Success of Research: shared understanding of concepts

What Is Research Design?
- Blueprint
- Plan
- Guide
- Framework

What Tools Are Used in Designing Research?
- MindWriter Project Plan in Gantt chart format

Design in the Research Process

Descriptors of Research Design
- Degree of question crystallization
- Data collection method
- Experimental effects
- Purpose of the study
- Time dimension
- Topical scope
- Research environment
- Participants' perceptual awareness

Degree of Question Crystallization
Exploratory Study
- Loose structure
- Expand understanding
- Provide insight
- Develop hypotheses
Formal Study
- Precise procedures
- Begins with hypotheses
- Answers research questions

Approaches for Exploratory Investigations
- Participant observation
- Film, photographs
- Psychological testing
- Case studies
- Ethnography
- Expert interviews
- Document analysis

Desired Outcomes of Exploratory Studies
- Established range and scope of possible management decisions
- Established major dimensions of research task
- Defined a set of subsidiary questions that can guide research design
- Developed hypotheses about possible causes of management dilemma
- Learned which hypotheses can be safely ignored
- Concluded additional research is not needed or not feasible

Commonly Used Exploratory Techniques
- Secondary data analysis
- Experience surveys
- Focus groups

Experience Surveys
- What is being done?
- What has been tried in the past with or without success?
- How have things changed?
- Who is involved in the decisions?
- What problem areas can be seen?
- Whom can we count on to assist or participate in the research?


Data Collection Method
- Monitoring
- Communication


The Time Dimension
- Cross-sectional
- Longitudinal


The Topical Scope
Statistical Study
- Breadth
- Population inferences
- Quantitative
- Generalizable findings
Case Study
- Depth
- Detail
- Qualitative
- Multiple sources of information


The Research Environment
- Field conditions
- Lab conditions
- Simulations


Purpose of the Study
- Reporting
- Descriptive
- Causal-Explanatory
- Causal-Predictive

Descriptive Studies
- Who?
- What?
- How much?
- When?
- Where?

Descriptive Studies
- Description of population characteristics
- Estimates of frequency of characteristics
- Discovery of associations among variables


Experimental Effects
Ex Post Facto Study
- An after-the-fact report on what happened to the measured variable
Experiment
- A study involving the manipulation or control of one or more variables to determine the effect on another variable

Causation and Experimental Design
- Control/Matching
- Random assignment
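Random assignment can be sketched in a few lines. This is a minimal illustration, not a prescribed procedure: the participant labels and the even two-group split are hypothetical.

```python
import random

def randomly_assign(participants, seed=None):
    """Shuffle participants and split them evenly into treatment and control.

    Random assignment makes the groups comparable in expectation, so
    pre-existing differences cannot systematically confound the treatment effect.
    """
    rng = random.Random(seed)  # seed only to make the sketch reproducible
    shuffled = participants[:]
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {"treatment": shuffled[:half], "control": shuffled[half:]}

# Hypothetical participant IDs
groups = randomly_assign([f"P{i:02d}" for i in range(20)], seed=42)
```

With larger samples, random assignment balances both observed and unobserved characteristics across groups, which matching alone cannot guarantee.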

Causal Studies
- Symmetrical
- Reciprocal
- Asymmetrical

Understanding Causal Relationships
Key concepts: property, behavior, response, disposition, stimulus

Asymmetrical Causal Relationships
- Stimulus-Response
- Property-Disposition
- Property-Behavior
- Disposition-Behavior

Types of Asymmetrical Causal Relationships
- Stimulus-response: an event or change results in a response from some object.
  - A change in work rules leads to a higher level of worker output.
  - A price increase results in fewer unit sales.
  - A change in government economic policy restricts corporate financial decisions.
- Property-disposition: an existing property causes a disposition.
  - Age and attitudes about saving.
  - Gender and attitudes toward social issues.
  - Social class and opinions about taxation.
- Disposition-behavior: a disposition causes a specific behavior.
  - Job satisfaction and work output.
  - Moral values and tax cheating.
  - Opinions about a brand and its purchase.
- Property-behavior: an existing property causes a specific behavior.
  - Stage of the family life cycle and purchases of furniture.
  - Social class and family savings patterns.
  - Age and sports participation.

Evidence of Causality
- Covariance between A and B
- Time order of events (A precedes B)
- No other possible causes of B


Participants' Perceptual Awareness
- No deviation perceived
- Deviations perceived as unrelated
- Deviations perceived as researcher-induced

Descriptors of Research Design (summary)
- Degree to which the research question has been crystallized: exploratory study; formal study
- Method of data collection: monitoring; communication study
- Power of the researcher to produce effects in the variables under study: experimental; ex post facto
- Purpose of the study: reporting; descriptive; causal-explanatory; causal-predictive
- Time dimension: cross-sectional; longitudinal
- Topical scope (breadth and depth) of the study: case; statistical study
- Research environment: field setting; laboratory research; simulation
- Participants' perceptual awareness of the research activity: actual routine; modified routine

Readings Review
Buchanan, D. A., & Bryman, A. (2007). Contextualizing methods choice in organizational research.
1. Choice of methods does not depend exclusively on links to research aims.
2. A method is not merely a technique for snapping reality into focus; choices of method frame the data windows through which phenomena are observed, influencing interpretative schemas and theoretical development.
3. Research competence thus involves addressing coherently the organizational, historical, political, ethical, evidential, and personal factors relevant to an investigation.


Organizational Properties
- Negotiated objectives
- Layered permissions
- Politics of publishing
- Partisan conclusions

Conclusions
- Attributes of the organizational research setting or context
- The research tradition or history relevant to a particular study
- The inevitable politicization of the organizational researcher's role
- Constraints imposed by a growing concern with research ethics
- Theoretical and audience-related issues in translating evidence into practice
- Personal preferences and biases with regard to choice of method

Bergh, D. D., & Fairbank, J. F. (2002). Measuring and testing change in strategic management research.
- Examined reliability assumptions of change variables, correlations between the change variable and its initial measure, and selection of unbiased measurement alternatives.
- Found that the typical approach used to measure and test change (as a simple difference between two measures of the same variable) is usually inappropriate and could lead to inaccurate findings and flawed conclusions.

Recommendations
Researchers should screen their data in terms of:
- reliability of component measures
- correlation between the component variables
- equality of component variable variances, and
- correlation with the initial component variable.

Validity and Reliability, Part 2

Validity
- Whether the research truly measures what it was intended to measure; how truthful the research results are.

Reliability
- The extent to which results are consistent over time and accurately represent the total population under study. If the results of a study can be reproduced under a similar methodology, the research instrument is considered reliable.

Methods to Ensure Validity
- Face validity: an assessment of whether a measure appears, on the face of it, to measure the concept it is intended to measure.
- Content validity: the extent to which a measure adequately represents all facets of a concept.
- Criterion-related validity: applies to instruments that have been developed for usefulness as indicators of a specific trait or behavior, either now or in the future.
- Construct validity: the extent to which a measure is related to other measures as specified by theory or previous research. Does a measure stack up with other variables the way we expect it to?

Reliability
- Test-retest reliability
- Inter-item reliability: use of multiple items to measure a single concept
- Inter-observer reliability: the extent to which different interviewers or observers using the same measure get equivalent results
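Inter-item reliability is commonly summarized with Cronbach's alpha, which rises as the items on a scale covary. A minimal sketch, using made-up scores for a hypothetical three-item scale:

```python
import statistics

def cronbach_alpha(items):
    """Cronbach's alpha for inter-item reliability.

    `items` is a list of columns, one per questionnaire item, each holding
    one score per respondent. Alpha = (k/(k-1)) * (1 - sum of item
    variances / variance of respondent totals).
    """
    k = len(items)
    sum_item_variances = sum(statistics.variance(item) for item in items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent totals
    return (k / (k - 1)) * (1 - sum_item_variances / statistics.variance(totals))

# Hypothetical 3-item scale answered by 6 respondents; the items track
# each other closely, so alpha comes out high.
item_scores = [
    [4, 5, 3, 2, 5, 1],
    [4, 4, 3, 2, 5, 2],
    [5, 5, 2, 3, 4, 1],
]
alpha = cronbach_alpha(item_scores)
```

Values of alpha near 1 suggest the items measure a single underlying concept; a common rule of thumb treats 0.7 or above as acceptable, though the cutoff is a convention, not a law.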

Graphic comparison .

How can validity be established?
- Quantitative studies: measurements, scores, instruments used, research design
- Qualitative studies: techniques researchers have devised to establish credibility, such as member checking, triangulation, thick description, peer reviews, and external audits

How can reliability be established?
- Quantitative studies: assumption of repeatability
- Qualitative studies: reframed as dependability and confirmability

Reliability vs. Validity (target diagram)
- Reliable and valid
- Reliable but not valid
- Neither reliable nor valid

Methods to Ensure Validity and Reliability
- Triangulation: multiple methods or sources
- Controlling for bias:
  - Selection bias
  - Observer bias
  - Questionnaire bias
  - Omitted-variable bias
  - Confirmation bias

Internal Validity
- High internal validity means that changes in the dependent variable were caused by, not merely related to or correlated with, the treatment.
- High internal validity can be attained only in an ideal true experimental design, or when steps have been taken to eliminate confounding factors or influences.

Internal Validity
Internal validity can be increased by:
- repeating an experiment many times,
- switching the control and experimental groups,
- controlling confounding influences, and
- changing or adding independent variables.

Internal Validity
Controlled experiments have the virtue of producing results with high internal validity, but they carry the liability of low external validity.

External Validity
External validity is the extent to which a cause-and-effect relationship can be generalized to other groups that were not part of the experiment.

External Validity
External validity is critically important in policy and program evaluation research because we would like our research findings to be generalizable. If external validity is low, the results are not generalizable.

Threats to Validity
History confound: any independent variable, other than the treatment variable, that occurs between the pre-test and the post-test.

Threats to Validity
Maturation confound: as the experiment is conducted, people grow older and become more experienced. (A problem in longitudinal studies in particular.)

Threats to Validity
Testing confound: the different responses experimental subjects may give to questions they are asked over and over again.

Threats to Validity
Instrumentation confound: results from changing measurement instruments. (Example: using different people to collect data; if there is no inter-rater reliability, you have changed the measurement instrument.)

Threats to Validity
Regression to the mean: groups selected for extreme scores tend to move toward the mean over time.
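Regression to the mean is easy to demonstrate by simulation: when observed scores mix stable ability with transient noise, a group selected for extreme pre-test scores will, on average, score closer to the population mean at post-test even with no treatment at all. The distribution parameters below are arbitrary.

```python
import random

rng = random.Random(0)
TRUE_MEAN = 100

# Each observed score = stable ability + transient luck (measurement noise).
abilities = [rng.gauss(TRUE_MEAN, 10) for _ in range(10_000)]
pretest = [a + rng.gauss(0, 10) for a in abilities]
posttest = [a + rng.gauss(0, 10) for a in abilities]

# Select the subjects with extreme pre-test scores (top decile).
cutoff = sorted(pretest)[int(0.9 * len(pretest))]
selected = [i for i, score in enumerate(pretest) if score >= cutoff]

pre_mean = sum(pretest[i] for i in selected) / len(selected)
post_mean = sum(posttest[i] for i in selected) / len(selected)
# post_mean falls between pre_mean and TRUE_MEAN: part of the group's
# extreme pre-test showing was luck, which does not repeat at post-test.
```

A naive evaluator comparing pre_mean to post_mean would conclude the group "got worse" (or, for a bottom-decile group, that an intervention "worked") when nothing but noise changed, which is exactly why this confound matters in pre-test/post-test designs.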

Threats to Validity
Selection bias confound: when self-selection by individuals into groups is possible, a selection bias can result.

Threats to Validity
Diffusion of treatment confound: occurs when a control group cannot be prevented from receiving the treatment in an experiment.
