

Chapter 2

METHODOLOGY

Research Design

This study used the descriptive research method, which is designed to gather information about presently existing conditions in the chosen field of study. This method enabled the researchers to interpret the theoretical meaning of the findings and to develop hypotheses for further studies.

According to Calderon and Gonzales (2007), descriptive research is concerned with conditions or relationships that exist, practices that prevail, beliefs, processes that are going on, effects that are being felt, or trends that are developing. The process of descriptive research goes beyond the mere gathering and tabulation of data; it involves an element of interpretation of the meaning or significance of what is described. Thus, description is often combined with comparison and contrast involving measurement, classification, interpretation, and evaluation.

Thus, the researchers determined the profile of the respondents as to age, sex, monthly family income, grade point average in fourth year high school, learning style, and personality. Likewise, the study identified the performance of the respondents in the pretest and posttest and delineated the respondents' mean gain scores. Furthermore, the study determined whether there was a significant difference between the performance of the respondents in the pretest and posttest. Finally, the study examined the significant relationship between the respondents' profile and their posttest scores.



Sources of Data

Table 1 reveals the distribution of respondents of the study.

The study used total enumeration of the BSCS first year straight population of the College of Computer Science of the DMMMSU-SLUC. These respondents were selected for the profiling and test examination because they were enrolled in the subject Computer Concepts and Fundamentals. The sections handled by a common instructor in the said subject served as the respondents of the study. Hence, a total of one hundred twenty-four respondents were obtained.

On the other hand, the content validation of the test material was administered, through purposive sampling, to five instructors of the College of Computer Science who have taught the subject Computer Concepts and Fundamentals. They were selected because they are experts on and knowledgeable about the subject matter.

Table 1. Distribution of Respondents

Respondents                             N      n
BSCS First Year Straight Students      219    124
Instructors                             31      5
BSCS Second Year Straight Students     170    170



Furthermore, the validated test material was administered, through total enumeration, to one hundred seventy second year straight students under the Computer Science Department in order to determine which items of the test material needed to be removed, revised, or retained. They were chosen because they had already taken Computer Concepts and Fundamentals during the first semester of school year 2013-2014.

Instrumentation and Data Collection

Data were gathered through a questionnaire and a test material, which served as the major sources of information in answering the research problems.

The first instrument was a survey questionnaire divided into three parts. The first part concentrated on the personal profile of the respondents, namely age, sex, monthly family income, and GPA (refer to Appendix G). The second part pertained to the learning styles of the respondents; its content was adopted from the Perceptual Learning-Style Preference Questionnaire by Reid (1984). The respondents were requested to rate the various items quickly and without too much thought, according to how they respond to each. This questionnaire helped the respondents identify the ways they learn best and the ways they prefer to learn. Since the instrument was adopted from the studies of Reid and of Mulalic et al., its validity and reliability were assumed. The third part dealt with the personality of the respondents toward learning and was lifted from chinesemadeeasier.com. It was administered to determine how the respondents handle the feelings evoked during the learning process, what kind of motivation they bring to the learning task, their personal values, beliefs, and attitudes related to learning, whether they prefer to work alone or in groups, and the kind of relationship they prefer to have with the teacher and other learners. These are all key factors in the learning process. The survey questionnaire was distributed to the first year students of the Computer Science Department who were enrolled in the subject Computer Concepts and Fundamentals.

Relative to the test material, the researchers consulted the selected instructors who handled the subject in order to acquire the test material, the table of specifications, and the course syllabus. The test material covered only the topics in the midterm period. The survey questionnaire and the test material were presented to an English critic to review the grammar and to suggest appropriate words and phrases, in order to ensure that misinterpretations would not occur.

Thereafter, the test material was disseminated to the five validators (refer to Appendix F), who were asked to rate each item in the content validation questionnaire (refer to Appendix I). The foregoing questionnaire was adopted from Eslao et al. (2014). The validators were given ample time to review the test material and answer the questionnaire with understanding and appreciation. These validation procedures paved the way for the changes necessary to improve the test material, thus achieving the intrinsic purpose of the research. The suggestions of the validators were incorporated in the final version of the test material. After the analysis of the data, the test material was found to be very much valid, with a grand mean of 4.3 (refer to Appendix L for the detailed results).

In order to determine which items of the validated test material were to be discarded, revised, or retained, and to establish the reliability of the test material, the researchers used the test-retest method. This was vital in identifying which test items should be removed, revised, or retained and in measuring how reliable the test material was before it was distributed to the respondents. The test was answered by the second year students. Appendix M summarizes the result of the test-retest, wherein fifteen (15) items were discarded, twenty-nine (29) items were modified, and thirty-one (31) items were retained.

With respect to the reliability of the test material, the computed reliability was ±0.84, with a verbal description of high reliability. The test material was considered reliable because the computed coefficient was higher than ±0.70.

The revised test material was then administered again to the five validators in order to check whether the revised items still represented the objectives of Computer Concepts and Fundamentals.

With regard to the performance of the first year students in the subject, the researchers administered the pretest upon the approval of the instructor. The pretest was conducted in June 2014. Afterward, the fully accomplished test materials were retrieved by the researchers, who then checked the answers and ranked the scores in ascending order. After all the midterm topics had been discussed, the researchers conducted the posttest with the same set of respondents and followed the same checking and ranking process.

The results of the aforementioned procedures led to the determination of the mean gain scores of the respondents in the pretest and posttest. Through statistical tools, the study determined whether there was a significant difference between the performance of the respondents in the pretest and posttest. Likewise, it determined whether there was a significant relationship between the respondents' profile, as to age, sex, monthly family income, GPA, learning styles, and personality, and their posttest scores.

In addition, the researchers used other techniques to gather information that could help them in the study. The library research method helped the researchers extend their knowledge related to the study; they scanned books, magazines, and other reading materials. Furthermore, surfing the internet provided the researchers with insights and other important guides in the development of the study.

Analysis of Data

The data collected in this study were subjected to certain statistical treatments. The data were coded, tallied, and tabulated for better presentation and interpretation of the results. The statistical methods used were the following:

The percentage and frequency distributions were used to classify the respondents according to their age, sex, monthly family income, and GPA. The frequency was used to present the actual responses of the respondents to a specific question or item in the questionnaire. The percentage of each item was computed by dividing its frequency by the total number of respondents who answered the survey. The formula used in the application of this technique was:

P = (f/n) x 100

where:

P = percentage

f = frequency

n = number of cases or total sample
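To illustrate how this formula was applied, the short Python sketch below computes the frequency and percentage distribution for a single profile item; the profile variable and sample responses are hypothetical and serve only as an example, not as data from the study.

```python
# Minimal sketch of the percentage formula P = (f / n) x 100 applied to
# one profile item; the responses below are hypothetical.
from collections import Counter

def percentage_distribution(responses):
    """Return (frequency, percentage) for each category of a profile item."""
    n = len(responses)                 # number of cases or total sample
    freq = Counter(responses)          # frequency f per category
    return {category: (f, round(f / n * 100, 2)) for category, f in freq.items()}

# Hypothetical "sex" responses from ten respondents
sample = ["Male", "Female", "Female", "Male", "Female",
          "Female", "Male", "Female", "Male", "Female"]
print(percentage_distribution(sample))   # {'Male': (4, 40.0), 'Female': (6, 60.0)}
```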

In addition, ranking was employed to describe numerical data alongside the percentages. This was used in the study for comparative purposes and for showing the relative importance of the items analyzed.

Another statistical technique used by the researchers was the weighted mean. It was needed to determine the average responses to the different options provided in the second and third parts of the survey questionnaire. It was computed with the formula:

WM = ∑ fx / n

where:

WM = weighted mean

∑fx = the sum of the products of f and x, f being the frequency of each weight and x the weight of each option

n = total number of respondents
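A minimal sketch of the weighted mean computation for one questionnaire item is given below; the frequencies per weight are hypothetical.

```python
# Sketch of the weighted mean WM = (sum of f*x) / n for one questionnaire item.

def weighted_mean(freq_by_weight):
    """freq_by_weight maps each scale weight x to its frequency f."""
    n = sum(freq_by_weight.values())                          # total respondents
    total_fx = sum(f * x for x, f in freq_by_weight.items())  # sum of f * x
    return total_fx / n

# Hypothetical frequencies on a 1-5 scale: three respondents chose 5, seven chose 4, and so on
print(round(weighted_mean({5: 3, 4: 7, 3: 2, 2: 1, 1: 1}), 2))   # 3.71
```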

The scale below was also used in the study to assign a scale value to each of the different responses. The following scale shows the interpretation used in determining the favored learning styles of the respondents, namely visual, auditory, kinesthetic, tactile, group, and individual. It helped to determine the respondents' major learning style preference(s), minor learning style preference(s), and those learning styles that are negligible.

Statistical Range    Descriptive Interpretation
40-50                Major Learning Style Preference
25-39                Minor Learning Style Preference
0-24                 Negligible

There are five questions for each learning category in the questionnaire. The questions were grouped in the Perceptual Learning-Style Preference Questionnaire scoring sheet according to each learning style, as shown in Appendix Q. If the score obtained from the visual questions, for example, falls within the range of 40-50, it means that visual is a major learning style preference of the student. A score in the range of 25-39 denotes a minor learning style, an area in which the student can still function well as a learner. A negligible score of 0-24 indicates that the student has difficulty learning in that way.
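The scoring can be sketched as follows. The sketch assumes the usual PLSPQ convention in which the five item ratings (1-5) for a style are summed and doubled before being compared with the ranges above; the ratings shown are hypothetical and the function name is ours.

```python
# Sketch of the learning-style classification; assumes the standard PLSPQ
# scoring (sum of the five 1-5 item ratings, multiplied by 2).

def classify_style(item_ratings):
    """item_ratings: the five ratings (1-5) for one learning-style category."""
    score = sum(item_ratings) * 2          # assumed PLSPQ scoring convention
    if score >= 40:
        return score, "Major Learning Style Preference"
    if score >= 25:
        return score, "Minor Learning Style Preference"
    return score, "Negligible"

# Hypothetical ratings for the visual category
print(classify_style([5, 4, 5, 4, 5]))     # (46, 'Major Learning Style Preference')
```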

With respect to the analysis of the respondents' responses to the personality questionnaire, the researchers counted the number of 'a' and 'b' answers of each respondent in the specified question groups. Whichever answer occurs more frequently in a group determines the corresponding personality type, as shown below.

Questions    Type of Personality
             A                 B
1-5          Extrovert         Introvert
6-10         Sensing           Intuitive
11-15        Thinking          Feeling
16-20        Judging           Perceiving
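A minimal sketch of this tally is shown below; the question grouping follows the table above, while the sample answers are hypothetical.

```python
# Sketch of the personality tally: for each block of five questions, the more
# frequent answer ('a' or 'b') determines the trait, per the table above.

TRAIT_BLOCKS = [
    (range(1, 6),   ("Extrovert", "Introvert")),
    (range(6, 11),  ("Sensing", "Intuitive")),
    (range(11, 16), ("Thinking", "Feeling")),
    (range(16, 21), ("Judging", "Perceiving")),
]

def personality(answers):
    """answers maps question number (1-20) to 'a' or 'b'."""
    traits = []
    for questions, (a_trait, b_trait) in TRAIT_BLOCKS:
        a_count = sum(1 for q in questions if answers.get(q) == "a")
        b_count = sum(1 for q in questions if answers.get(q) == "b")
        traits.append(a_trait if a_count > b_count else b_trait)
    return traits

# Hypothetical respondent with alternating answers
sample = {q: ("a" if q % 2 else "b") for q in range(1, 21)}
print(personality(sample))   # ['Extrovert', 'Intuitive', 'Thinking', 'Perceiving']
```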



Meanwhile, the content validity of the test material was interpreted using a 5-point Likert scale. Each category was assigned a numerical value, with Very Much Valid equal to 5 and Not Valid equal to 1. The overall rating was determined using the weighted mean. The scoring system was designed so that a high score consistently reflects a favorable response and a low score an unfavorable response. The points and ranges for the respondents' answers over the five-point scale are as follows:

Point    Range        Descriptive Equivalent Rating
5        4.20-5.00    Very Much Valid
4        3.40-4.19    Much Valid
3        2.60-3.39    Moderately Valid
2        1.80-2.59    Less Valid
1        1.00-1.79    Not Valid
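As an illustration of how a computed weighted mean is mapped to this scale, the sketch below classifies the grand mean of 4.3 reported earlier; the function name is ours and not part of the study's instruments.

```python
# Sketch mapping a computed weighted mean to the validity rating in the scale above.

def validity_rating(weighted_mean):
    if weighted_mean >= 4.20:
        return "Very Much Valid"
    if weighted_mean >= 3.40:
        return "Much Valid"
    if weighted_mean >= 2.60:
        return "Moderately Valid"
    if weighted_mean >= 1.80:
        return "Less Valid"
    return "Not Valid"

print(validity_rating(4.3))   # Very Much Valid (the grand mean reported above)
```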

With regard to the reliability of the test material, the researchers obtained the totals of correct answers on the even items and on the odd items. After the data were tallied, the Pearson Product-Moment Correlation Coefficient (refer to Appendix O) was used to obtain the correlation (r) between the two halves, which served as the input to the Spearman-Brown Prophecy Formula (refer to Appendix P). These were used to determine the reliability of the test material; a computational sketch is given after the scale below. The results of the computation were verbally described by the following scale:

Computed Reliability    Verbal Description
±0                      No Reliability
±0.01 - ±0.20           Negligible Reliability
±0.21 - ±0.40           Low Reliability
±0.41 - ±0.70           Substantial Reliability
±0.71 - ±0.90           High Reliability
±0.91 - ±0.99           Very High Reliability
±1                      Perfect Reliability
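The split-half procedure described above can be sketched as follows: the odd-item and even-item totals are correlated with the Pearson Product-Moment formula, and the result is stepped up with the Spearman-Brown Prophecy Formula. The score lists are hypothetical; the study's actual computations appear in Appendices O and P.

```python
# Sketch of the split-half reliability estimate: Pearson r between odd-item
# and even-item totals, then the Spearman-Brown Prophecy Formula.
import statistics

def pearson_r(x, y):
    mean_x, mean_y = statistics.mean(x), statistics.mean(y)
    numerator = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    denominator = (sum((a - mean_x) ** 2 for a in x)
                   * sum((b - mean_y) ** 2 for b in y)) ** 0.5
    return numerator / denominator

def spearman_brown(r_half):
    """Step up the half-test correlation to whole-test reliability."""
    return 2 * r_half / (1 + r_half)

# Hypothetical odd-item and even-item totals for eight examinees
odd_totals  = [12, 15, 9, 18, 14, 11, 16, 13]
even_totals = [11, 16, 10, 17, 13, 12, 15, 14]

reliability = spearman_brown(pearson_r(odd_totals, even_totals))
print(round(reliability, 2))   # interpreted with the verbal scale above
```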

Thereafter, the validated test material was item-analyzed using the discrimination power formula and the index of difficulty.

Discrimination Power Formula:

Ds = Pu – Pl

where:

Ds = index of discrimination

Pu = proportion of respondents in upper 27% group who got the item right

Pl = proportion of respondents in lower 27% group who got the item right

The formula for the index of difficulty:

Df = (Pu + Pl) / 2

where:

Df = difficulty index

Pu = proportion of respondents in upper group who got the item right

Pl= proportion of respondents in lower group who got the item right

According to Padua and De Guzman-Santos (1998), with respect to the index of discrimination of test items, indices of 0.40 and above are considered very good and the item is retained; indices from 0.30 to 0.39 are considered good and the item is retained; indices from 0.20 to 0.29 are considered fair and the item may be improved further; and indices from 0.10 to 0.19 are considered poor and the item is discarded. In general, items with discrimination indices between 0.30 and 0.80 are retained, and items with difficulty indices between 0.20 and 0.80 are retained. When an item is retained under one index and discarded under the other, the item probably needs revision or modification.
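A minimal sketch of this item analysis, combining the two formulas with the general retention rules above, is shown below; the group sizes and counts are hypothetical.

```python
# Sketch of the item analysis: Ds = Pu - Pl and Df = (Pu + Pl) / 2, followed by
# the general retention rules stated above.

def item_analysis(upper_correct, upper_n, lower_correct, lower_n):
    pu = upper_correct / upper_n   # proportion of upper 27% group who got the item right
    pl = lower_correct / lower_n   # proportion of lower 27% group who got the item right
    ds = pu - pl                   # index of discrimination
    df = (pu + pl) / 2             # index of difficulty
    retained = 0.30 <= ds <= 0.80 and 0.20 <= df <= 0.80
    return round(ds, 2), round(df, 2), "retain" if retained else "revise or discard"

# Hypothetical item: 40 of 46 upper-group and 18 of 46 lower-group respondents answered correctly
print(item_analysis(40, 46, 18, 46))   # (0.48, 0.63, 'retain')
```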

In order to obtain the mean gain score of the respondents in the pretest and

posttest examination, the following formula was used:

MGS = (∑Po − ∑Pr) / n

where:

MGS = Mean gain score

∑Pr = the sum of all the pretest scores

∑Po = the sum of all the posttest scores

n = total number of the respondents
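A minimal sketch of this computation, with hypothetical scores for four respondents, is shown below.

```python
# Sketch of the mean gain score MGS = (sum of posttest scores - sum of pretest scores) / n.

def mean_gain_score(pretest_scores, posttest_scores):
    n = len(pretest_scores)        # total number of respondents
    return (sum(posttest_scores) - sum(pretest_scores)) / n

# Hypothetical pretest and posttest scores of four respondents
print(mean_gain_score([10, 12, 8, 15], [18, 20, 14, 22]))   # 7.25
```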

In order to get the equivalent grade of the respondents for the pretest and posttest examinations, the following formula was used:

EG = ((score / n) × 40) + 60

where:

EG = Equivalent Grade

score = the respondent's raw score in the test

n = total number of items in the test examination
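For illustration, this transformation can be sketched as follows; the raw score and test length used are hypothetical.

```python
# Sketch of the equivalent grade EG = ((score / n) * 40) + 60.

def equivalent_grade(score, n_items):
    return (score / n_items) * 40 + 60

# Hypothetical raw score of 45 on a 60-item test
print(equivalent_grade(45, 60))   # 90.0
```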

In order to determine the significant difference between the performance of the respondents in the pretest and posttest examinations, as well as the significant relationship between the respondents' profile and their posttest scores, the researchers used SPSS (Statistical Package for the Social Sciences) software. As stated by Zamudio (2009), SPSS is a computer-based program designed to provide a step-by-step, hands-on guide to the researcher. Data can be entered directly into the system or imported from a number of different sources, such as SPSS data files, spreadsheet applications like Microsoft Excel, and database applications like Microsoft Access.

Specifically, the researchers used the Pearson Product-Moment Correlation Coefficient to find out whether there was a significant difference between the pretest and posttest scores of the respondents and, likewise, whether there was a significant relationship between the respondents' profile and their posttest scores (refer to Appendix O).
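While the study ran these analyses in SPSS, an equivalent computation can be sketched outside SPSS with scipy's pearsonr, which returns both the correlation coefficient and its p-value; the scores below are hypothetical and only illustrate the procedure.

```python
# Sketch of a Pearson Product-Moment correlation with its significance test,
# comparable to the SPSS output used in the study; the data are hypothetical.
from scipy.stats import pearsonr

posttest_scores = [18, 20, 14, 22, 25, 17, 19, 21]
age             = [16, 17, 16, 18, 17, 16, 17, 18]   # one profile variable

r, p_value = pearsonr(age, posttest_scores)
print(f"r = {r:.2f}, p = {p_value:.3f}")   # relationship is significant if p < 0.05
```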
