
II. Design and Planning Phase

Types of Research

According to Motive:
1. Basic/Library Research - intended to increase knowledge for the purpose of knowing or learning the truth
2. Applied/Action Research - scientific investigation conducted to provide evidence-based data and seek solutions to problems

According to Level of Investigation:
1. Exploratory - What are the variables?
2. Non-experimental - Are the variables related?
3. Experimental - Is the relationship causal?

Classification of Research

Qualitative
o Systematic
o Subjective approach
o Describes life experiences and gives them meaning
o Research Methods:
  PHENOMENOLOGICAL - describes the lived experience of individuals with a phenomenon through description and analysis.
  GROUNDED THEORY - concerned with the analysis of data leading to the development of a substantive theory; it describes the process from which the theory is derived.
  ETHNOGRAPHIC - systematic collection, description, and analysis of data to develop a theory of a culture or group of people.
  HISTORICAL - narrative description or analysis of events that occurred in the past.
  PHILOSOPHICAL INQUIRY - uses intellectual analysis to clarify meanings, make values manifest, identify ethics, and study the nature of knowledge.
  CRITICAL SOCIAL THEORY - aims to integrate theory and practice and to understand the social organization of everyday practice.
  TRIANGULATION - the use of multiple methods in the study of the same phenomenon.

Quantitative
o Formal
o Objective
o Systematic process
o Uses numerical data to obtain information
o Describes, examines relationships, and determines cause and effect between variables

o Research Methods:
  DESCRIPTIVE - provides an accurate portrayal or account of the characteristics of a particular individual, situation, or group.
  CORRELATIONAL - involves systematic investigation of the relationship between two or more variables.
  COMPARATIVE - used to describe differences in variables between two or more groups in a natural setting.
  EX POST FACTO - the independent variable is not manipulated, either because it is inherently unmanipulable or because it occurred in the past.
  QUASI-EXPERIMENTAL - causal relationships between selected variables are examined through manipulation of the independent variable, but without full control or randomization.
  EXPERIMENTAL - an objective, systematic, controlled investigation for the purpose of predicting and controlling phenomena.

Research Design - the plan, structure, and strategy of an investigation.

Methodology - the totality of how the study is carried out; it includes the design, sample, setting, instruments, intervention, procedure, and data analysis.

How to Classify Research Designs
1. Degree of problem conceptualization
2. Method of data collection
3. Researcher's control of variables
4. Purpose of the study
5. Time dimension
6. Topical scope
7. Research setting

Descriptive Design
Used to gain more information about characteristics within a particular field of study.
Purpose: to provide a picture of a situation as it naturally happens. It may be used for developing a theory or identifying problems with current practice.
No manipulation is involved (no IV is manipulated to establish causality with a DV).
Types:
o Typical Descriptive Design
o Comparative Descriptive Design - examines/describes differences in variables between two or more groups that occur naturally in the setting.
o Longitudinal Design - examines changes in the same subjects over an extended period. Requires a long-term commitment; mortality (dropout) can be high and lead to decreased validity of the findings.
o Cross-sectional Descriptive Design - used to examine groups of subjects in various stages of development simultaneously. The stages are part of a process that would progress across time.
o Trend Design - examines changes in the general population in relation to a particular phenomenon. Different samples of subjects are selected from the same population at preset time intervals, and at each selected time data are collected from that particular sample.
o Case Study Design - involves intensive exploration of a single unit of study, such as a person, family, group, community, or institution, or of a very small number of subjects who are examined intensively.

Quasi-Experimental Design - examines causality (the effect of the IV on the DV).
Threats to validity are controlled by:
o Selection of subjects
o Manipulation of the treatment
o Reliable measurement of the DV

Pretest and Posttest Designs
o One-group pretest and posttest design
o Two-group (study group and control group) pretest and posttest design

Posttest-Only Design with a Comparison Group
o Used when a pretest is not possible
o Has a number of threats to validity
o Sometimes referred to as a pre-experimental design
o Only the study group has a posttest

True Experimental Design - provides the greatest amount of control.
Three elements:
o Control of the experimental situation
o Manipulation of the IV
o Randomization (random assignment of subjects)
Types:
o Factorial Experimental Design - a complex, multivariate experimental design in which two or more characteristics, treatments, or events are independently varied in a single study. Designed to examine multicausality. Example: a 2x2 factorial design (see the sketch after this list).
o Randomized Clinical Trial - has been used in the field of medicine since 1945. Uses a large number of subjects to test the effects of the treatment, with the study group compared against the control group.
o Solomon Four-Group Design - consists of two experimental groups and two control groups. One control group and one experimental group are pretested; the other two groups are not.

o Repeated Measures Design (crossover design) - subjects are exposed to the different treatment conditions (which controls for subject characteristics such as age, weight, and psychological state) and are randomly assigned to different orderings of the treatments.
o Classical Experimental Design - involves one experimental group and one control group.
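The common thread in the designs above is random assignment. The sketch below is purely illustrative and not part of the source notes: it randomly assigns hypothetical subjects to the four cells of a 2x2 factorial design (treatment A given/withheld x treatment B given/withheld); all names and group sizes are assumed.

```python
# Illustrative sketch only (not from the source notes): random assignment of
# hypothetical subjects to the four cells of a 2x2 factorial design.
import random
from itertools import product

subjects = [f"subject_{i}" for i in range(1, 21)]       # hypothetical subjects
cells = list(product(["A", "no A"], ["B", "no B"]))     # 2 x 2 = 4 conditions

random.shuffle(subjects)                                # randomization
assignment = {cell: [] for cell in cells}
for index, subject in enumerate(subjects):
    assignment[cells[index % len(cells)]].append(subject)  # equal-sized groups

for cell, members in assignment.items():
    print(cell, members)
```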

Sampling Techniques
1. Probability (Random) Sampling - involves random selection of subjects or elements of the population (see the code sketch after this list).
   Types:
   i. Simple Random Sampling - each element has an equal chance (probability) of being chosen, but only those that fall within the sampling frame can be included in the actual study.
   ii. Stratified Random Sampling - divide the group into homogeneous sub-groups (strata) and draw representatives from each sub-group.
   iii. Cluster Sampling - successive selection of samples as groups of elements.
   iv. Systematic Sampling - selection of the sample in sequence according to a pre-determined modality or system.
   v. Multi-stage Sampling - used for nationwide coverage; involves several stages in drawing samples from the population.
2. Non-probability Sampling - respondents are selected in a non-random way; the sample may not represent the total population.

   Types:
   i. Convenience/Accidental Sampling - a poor approach with little opportunity to control biases; subjects are included in the study because they happen to be in the right place at the right time, until the sample size is reached. Inexpensive, accessible, and usually relies on readily available subjects.
   ii. Quota Sampling - the selection of the sample is made by the researcher, who sets a specified number of subjects to represent the population.
   iii. Purposive/Judgmental Sampling - a handpicked sampling technique based on qualities of the subjects or the purpose of the study.
   iv. Snowball/Network Sampling - consists of subjects who meet the inclusion criteria and are referred by other individuals already in the study.
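For illustration only (not from the source notes), the sketch below shows how three of the probability sampling techniques might look in code; the sampling frame, strata names, and sample sizes are all hypothetical.

```python
# Illustrative sketch: simple random, systematic, and stratified sampling
# from a hypothetical sampling frame of 100 subjects.
import random

population = [f"subject_{i}" for i in range(1, 101)]  # hypothetical sampling frame

# Simple random sampling: every element has an equal chance of being chosen.
simple_sample = random.sample(population, k=10)

# Systematic sampling: every k-th element after a random starting point.
interval = len(population) // 10
start = random.randrange(interval)
systematic_sample = population[start::interval]

# Stratified random sampling: divide the frame into homogeneous strata,
# then draw a proportional random sample from each stratum.
strata = {
    "ward_a": population[:40],   # hypothetical strata
    "ward_b": population[40:],
}
stratified_sample = []
for stratum in strata.values():
    n_from_stratum = round(10 * len(stratum) / len(population))
    stratified_sample.extend(random.sample(stratum, k=n_from_stratum))

print(len(simple_sample), len(systematic_sample), len(stratified_sample))
```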

Slovin's Formula (for determining sample size):

    n = N / (1 + Ne^2)

where n = sample size, N = population size, and e = margin of error.

Research Instruments
Purposes:
o Information seeking - obtain the respondents' answers to the problem
o Evaluation/appraisal - provide a basis for prediction of the expected outcome
Sources:
o Professional journals
o Book references
o Researcher-made instruments
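For illustration (the figures here are assumed, not from the source notes): with a population of N = 1,000 and a margin of error of e = 0.05, Slovin's formula gives n = 1,000 / (1 + 1,000 x 0.05^2) = 1,000 / 3.5, or approximately 286 respondents.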

Guidelines in Developing Instruments
o Suitable for its function
o Based on the theoretical/conceptual framework
o Should be able to collect the answers needed for the problem
o Should be valid and reliable
o Free of bias
o Free of built-in clues
o Should directly find the answer to the hypothesis

Factors That Can Occur During the Measurement Process
o Transient personal factors - fatigue, attention span, mood, mental set, motivation
o Situational factors - condition of the room, distractions
o Variations in the administration of the measurements (e.g., interviews)
o Processing of data - errors in coding and categorization

Questionnaire - composed of questions answered by the respondents.
Criteria for Constructing:
o Must have a cover letter addressed to the respondent
o Directions must be clearly stated to facilitate answering the questions
o Take the objectives and variables as the starting point
o Check the content and type of questions to be used
o Use appropriate scales and verbal interpretations
o Avoid leading, ambiguous, and irrelevant questions
o Formulate one or more questions that will provide the information needed
o Observe proper sequencing of the questions
o Use simple, everyday language
o Check the format of the questionnaire
o Translate the questions into a language understood by the respondents

Advantages:
o Simple, inexpensive, and easy to distribute
o Can easily be tabulated
o Can be tested for reliability and validity
o Offers a simple procedure for exploring a new topic

Disadvantages:
o Unable to yield in-depth explanations
o Responses may be omitted
o Information obtained is limited
o Some items may be misunderstood

Criteria for Evaluating Rigor (Quantitative vs. Qualitative)
1. Truth Value
   a. Quantitative: Validity - measures what it intends to measure
   b. Qualitative: Credibility - other people can recognize the experience when confronted with it after having only read about it
2. Applicability
   a. Quantitative: External validity - generalizability of the findings
   b. Qualitative: Fittingness - the findings can fit into contexts outside the study situation, and the audience views the findings as meaningful and applicable in terms of their own experience
3. Consistency
   a. Quantitative: Reliability - yields the same or comparable results
   b. Qualitative: Auditability - another researcher can clearly follow the decision trail used by the investigator and arrive at the same or comparable conclusions
4. Neutrality
   a. Quantitative: Objectivity - freedom from bias
   b. Qualitative: Confirmability - achieved when auditability, truth value, and applicability are established

Validity - refers to the ability of a data-gathering instrument to measure what it is supposed to measure and to obtain data relevant to what is being measured.
o Content Validity - the extent to which the instrument represents the factors under study.
o Face Validity - determined by inspecting the items to see whether the instrument contains important items that measure the variables in the content area.
o Construct Validity - the degree to which a measuring instrument measures a specific hypothetical trait or construct.
  Known-groups technique - a basic approach to establishing construct validity in which the instrument is administered to several groups known to differ on the construct, looking for statistical differences between them.
o Criterion-Related Validity - the relationship of the measuring instrument to some already known external criterion or other valid instrument.
  Predictive Validity - how well the instrument predicts how a person will do in the future.
  Concurrent Validity - a measure of how well the instrument correlates with another instrument that is known to be valid.

Reasons for invalidity and unreliability:
o Exploratory character of the study
o Weak design, which can lead to untrustworthy conclusions
o Unwarranted generalizations made on the basis of an inadequate sample
o Inappropriate statistical methods
o Unsound conclusions

Pilot Study and Pre-testing of Instruments
o Refers to the trial test of the instrument developed for testing the hypothesis
o Checks the reliability and validity of the instrument developed for the study
o A preliminary, small-scale trial conducted before the actual study

Threats to Internal Validity
o Selection bias - exists when study results are attributed to the experimental treatment when they are actually due to pre-existing differences among the subjects selected for the groups.
o History - occurs when some event besides the experimental treatment takes place during the course of the study and affects or influences the dependent variable.
o Maturation - takes place when changes within the subjects occur during the experimental study.
o Testing - the influence of taking the pretest on the posttest scores.
o Instrumentation change - a difference between pretest and posttest results caused by a change in the accuracy of the instruments or ratings.
o Mortality - occurs when a difference exists between the dropout rates of the experimental and non-experimental groups.

External Validity - the degree to which study results can be generalized and applied to other populations and settings.
Threats to External Validity
o Hawthorne effect - occurs when study participants respond in a certain manner or visibly change their behavior because they are aware that they are being observed.
o Halo effect - the tendency of the researcher to rate a subject high or low because of the impression he or she has of the subject.
o Reactive effect - occurs when subjects have been sensitized to the treatment by taking the pretest.

Reliability - the consistency, stability, accuracy, and dependability with which the instrument measures what it is intended to measure.

Sources of Unreliable Data/Instruments
o Deficiency in the instrument itself
o Inconsistency among the individuals taking the instrument

Testing Reliability
o Test-Retest - the test is administered to the subjects and, after a period of time, administered again. The instrument is reliable when the results are consistent and essentially the same on both administrations.
o Equivalent Test - two forms of the test are developed using the same specifications but with separate samples of items.
o Split-Half Method - carried out at the time of scoring the results; the results from each half are compared.
o Item Analysis - each item should discriminate between the subjects.
o Reliability Coefficient - the correlation between two measurements that are obtained in the same manner.

Methods of Testing Reliability
1. Test of stability - the best indicator of the reliability of an instrument. A stable instrument is one that can be repeated over and over on the same research subjects and will produce the same results. Reliability is usually expressed as a number called a coefficient; a high coefficient (close to 1.0) indicates high reliability.
2. Test-retest - the classic test of stability. Repeated measurement over time using the same instrument on the same subjects is expected to produce the same results.
3. Test of equivalence - attempts to determine whether similar tests given at the same time yield the same results.
   o Alternate form - a test of equivalence using alternate forms of a paper-and-pencil test consisting of two sets of similar questions designed to measure the same trait.
   o Inter-rater reliability - used for testing equivalence when the design calls for observation.
4. Test of internal consistency - the degree to which the subparts of an instrument all measure the same attribute or dimension.
   o Split-half technique (odd-even reliability) - scores on one half of a subject's responses are compared with the scores on the other half.
   o Formulas used (see the sketch after this list):
     i. Spearman-Brown prophecy formula
     ii. Cronbach's alpha
     iii. Kuder-Richardson formula (K-R 20)
5. Item analysis - reveals the satisfactoriness of any statement as far as inclusion in a given scale is concerned. High validity and reliability can be built into the instrument in advance through item analysis (Ferguson, 1979). This process checks whether each item is discriminating; if a statement does not measure what the battery of items measures, including it contributes nothing to the scale.
   i. Discrimination index
   ii. Difficulty index
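As an illustration only (the item scores below are hypothetical, not from the source notes), the sketch computes a split-half correlation with the Spearman-Brown correction and Cronbach's alpha for a small set of responses.

```python
# Illustrative sketch: split-half reliability (Spearman-Brown) and
# Cronbach's alpha for hypothetical item scores
# (rows = respondents, columns = items).
import numpy as np

scores = np.array([
    [4, 5, 4, 5, 3, 4],
    [2, 3, 2, 2, 3, 2],
    [5, 5, 4, 5, 5, 4],
    [3, 2, 3, 3, 2, 3],
    [4, 4, 5, 4, 4, 5],
])

# Split-half: compare totals from the odd-numbered vs. even-numbered items.
odd_half = scores[:, 0::2].sum(axis=1)
even_half = scores[:, 1::2].sum(axis=1)
r_half = np.corrcoef(odd_half, even_half)[0, 1]

# Spearman-Brown prophecy formula estimates full-test reliability.
spearman_brown = 2 * r_half / (1 + r_half)

# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals).
k = scores.shape[1]
item_variances = scores.var(axis=0, ddof=1)
total_variance = scores.sum(axis=1).var(ddof=1)
cronbach_alpha = (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

print(f"Split-half r = {r_half:.2f}, Spearman-Brown = {spearman_brown:.2f}")
print(f"Cronbach's alpha = {cronbach_alpha:.2f}")
```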

Types of Errors:
1. Type I - occurs when it is incorrectly concluded that a relationship or difference exists between variables or groups when in reality it does not (see the sketch below).
2. Type II - occurs when it is concluded that no significant relationship or difference exists between variables or groups when in reality it does.
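For illustration only (not from the source notes; assumes numpy and scipy are available): when two groups are drawn from the same population, roughly alpha (5%) of significance tests still reject the null hypothesis, and those rejections are Type I errors.

```python
# Illustrative sketch: estimating the Type I error rate by repeatedly
# comparing two samples from the same (hypothetical) population.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha = 0.05
n_trials = 1000
false_positives = 0

for _ in range(n_trials):
    group_a = rng.normal(loc=50, scale=10, size=30)   # same population
    group_b = rng.normal(loc=50, scale=10, size=30)   # same population
    _, p_value = stats.ttest_ind(group_a, group_b)
    if p_value < alpha:
        false_positives += 1                          # a Type I error

print(f"Observed Type I error rate: {false_positives / n_trials:.3f}")
```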

Errors of Measurement
1. Situational contaminants
   a. Participant's awareness of the observer's presence
   b. Anonymity of the response situation
   c. Friendliness of the researcher
   d. Environmental factors
2. Transitory personal factors
   a. Fatigue
   b. Hunger
   c. Anxiety
   d. Mood
3. Response-set bias
   a. Social desirability
   b. Acquiescence
   c. Extreme responses
4. Administration variation
   a. Alteration in the method of data collection
5. Instrument clarity
   a. Poorly understood directions
6. Item sampling
7. Instrument format
