
Research Methodology

Lecture 4


Research Designs
 Four Kinds:

 a. Historical design

 b. Descriptive design

 c. Experimental design

 d. Case study design



Historical Design
 History is the branch of knowledge concerned with past
events, especially those involving human affairs. (Funk and
Wagnalls, 1976)

 Historical research is a systematic and critical inquiry into
the whole truth of past events, using the critical method in
understanding and interpreting facts that are applicable to
current issues and problems.

Descriptive Design
 Focuses on the present condition.

 Provides essential knowledge about the nature of objects
and persons.
 Plays a large part in the development of instruments for
measuring many things; these instruments are employed in
all types of quantitative research as data-gathering devices,
such as tests, questionnaires, interviews, observation
schedules, checklists, score cards, and rating scales.

Experimental Design
 A problem-solving approach which describes what will
happen in the future when certain variables are carefully
controlled or manipulated.

 Most useful in the natural sciences, such as Botany,
Zoology, Biology, Phycology, Chemistry, and Physics.

 Has distinct limitations when used in the fields of
education, psychology, and sociology.

Case Study
 A problem-solving technique in which the case is described
from its past and present and projected into its future.

Research Instruments
 Are used for gathering or collecting data, and are important
devices because the success or failure of a study depends on
the data gathered.

Qualities of Research Instrument

 Validity
 a. Content Validity
 b. Concurrent
 c. Predictive
 d. Construct

 Reliability

 Usability

Validity

 Means the degree to which a test or measuring instrument
measures what it intends to measure.

Types of Validity

 a. Content validity

 b. Concurrent

 c. Predictive

 d. Construct

Evidences of Questionnaire
Validity (Good and Scates, 1972)
 1. Is the question on the subject?

 2. Is the question perfectly clear and unambiguous?

 3. Does the question get at something stable, something
relatively deep-seated, well-considered, nonsuperficial, but
something which is typical of the individual or of the
situation?

 4. Does the question pull?

 5. Do the responses show a reasonable range of variation?



6. Is the information obtained consistent?

7. Is the item sufficiently inclusive?

8. Is there a possibility of using an external criterion to
evaluate the questionnaire?

Concurrent Validity

 The degree to which the test agrees or correlates with a
criterion set up as an acceptable measure.

Predictive Validity

 Determined by showing how well predictions made from
the test are confirmed by evidence gathered at some
subsequent time.

 The criterion measure against which this type of validity is
checked is important because the outcome for the subjects is
predicted.

Construct Validity

 The extent to which the test measures a theoretical
construct or trait.

 This involves such tests as those of understanding,
appreciation, and interpretation of data.
 Example: Intelligence and mechanical aptitude tests.

Reliability

 Means the extent to which a ‘test is dependable,
self-consistent and stable’. (Merriam, 1975)
 The test agrees with itself.

 It is concerned with the consistency of responses from
moment to moment.

Four Methods of Testing Reliability

 1. Test-Retest Method (Spearman Rho)

 2. Parallel-Forms Method

 3. Split-half Method

 4. Internal Consistency Method
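As an illustrative sketch (not part of the lecture), two of the four methods can be computed directly: test-retest reliability estimated with Spearman's rho, and split-half reliability with the Spearman-Brown correction. The scores, helper names, and data below are invented for the example.

```python
from statistics import mean

def pearson(x, y):
    # Pearson product-moment correlation of two equal-length lists.
    mx, my = mean(x), mean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def ranks(values):
    # 1-based ranks, averaging the positions of tied values.
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman_rho(first, second):
    # Test-retest method: correlate the ranks of the two administrations.
    return pearson(ranks(first), ranks(second))

def split_half(item_scores):
    # Split-half method: correlate odd-item and even-item totals, then
    # apply the Spearman-Brown correction for the full-length test.
    odd = [sum(row[0::2]) for row in item_scores]
    even = [sum(row[1::2]) for row in item_scores]
    r_half = pearson(odd, even)
    return 2 * r_half / (1 + r_half)

# Invented data: five respondents tested twice with the same instrument.
test1 = [12, 15, 11, 18, 14]
test2 = [13, 14, 12, 19, 15]
print(round(spearman_rho(test1, test2), 2))  # → 0.9
```

A rho near 1.0 indicates stable scores across administrations; the parallel-forms and internal-consistency methods follow the same correlational logic with different data splits.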



Usability

 Means the degree to which the research instrument can be
satisfactorily used by teachers, researchers, supervisors, and
school managers without undue expenditure of time,
money, and effort.

 It means practicability.

Factors that determine Usability

 1. Ease of administration

 2. Ease of scoring

 3. Ease of interpretation and application

 4. Low Cost

 5. Proper mechanical make-up



SAMPLING DESIGNS

 Sampling – defined as the method of getting a
representative portion of a population.

 Population – the aggregate or total of objects, persons,
families, species, or orders of plants or animals.

Advantages of Sampling
 1. It saves time, money and effort.

 2. It is more effective.

 3. It is faster and cheaper.

 4. It is more accurate.

 5. It gives more comprehensive information.


Limitations of Sampling

 1. Sample data involve more care in preparing detailed
subclassifications because of the small number of subjects.

 2. If the sampling plan is not correctly designed and
followed, the results may be misleading.
 3. Sampling requires an expert to conduct the study in an
area. If this is lacking, the results could be erroneous.
 4. The characteristic to be observed may occur rarely in a
population.
 5. Complicated sampling plans are laborious to prepare.

Sampling Design

 Two Kinds:

 1. Scientific Sampling

 2. Non-Scientific Sampling

Scientific Sampling

 1. Restricted Random Sampling

 2. Unrestricted Random Sampling

 3. Stratified Random Sampling

 4. Systematic Sampling

 5. Multistage Sampling

 6. Cluster Sampling
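As an illustration (not from the lecture), three of these methods can be sketched with Python's random module. The population, sample size, and strata below are invented assumptions for the example.

```python
import random

random.seed(4)  # fixed seed so the illustration is reproducible

# Invented population of 100 labelled units.
population = [f"student_{i:03d}" for i in range(100)]

# Unrestricted (simple) random sampling: every unit has an equal chance.
simple = random.sample(population, 10)

def systematic_sample(pop, n):
    # Systematic sampling: a random start, then every k-th unit.
    k = len(pop) // n                # sampling interval
    start = random.randrange(k)      # random start within the first interval
    return pop[start::k][:n]

def stratified_sample(strata, n):
    # Stratified random sampling with proportional allocation:
    # draw from each stratum in proportion to its population share.
    total = sum(len(units) for units in strata.values())
    chosen = []
    for units in strata.values():
        share = round(n * len(units) / total)
        chosen.extend(random.sample(units, share))
    return chosen

strata = {"freshmen": population[:60], "seniors": population[60:]}
print(len(systematic_sample(population, 10)))  # → 10
print(len(stratified_sample(strata, 10)))      # → 10 (6 freshmen, 4 seniors)
```

Multistage and cluster sampling combine these ideas: clusters (for example, whole classes) are drawn at random first, and units are then sampled within them.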

Nonscientific Sampling

 1. Purposive Sampling

 2. Incidental Sampling

 3. Quota Sampling
