1. Students will be able to explain the importance of reliability, validity, and sources of error in research.
2. Students will evaluate the evidence when analyzing research data before drawing conclusions.
Topics:
USTP-CDO|CPE324|A.Sieras|1
University of Science and Technology of Southern Philippines
College of Engineering and Technology
Bachelor of Science in Computer Engineering
Every research design needs to be concerned with reliability and validity to measure the quality of the
research.
What is Reliability?
Reliability refers to the consistency of a measurement. Reliability shows how trustworthy the scores of a test are. If the collected data show the same results after being tested using various methods and sample groups, the information is reliable. Reliability alone, however, does not guarantee validity: a method can produce consistent results without measuring what it is supposed to measure.
Reliability addresses the overall consistency of a research study's measure. If a research instrument, for
example a survey or questionnaire, produces similar results under consistently applied conditions, it lessens
the chance that the obtained scores are due to randomly occurring factors, like seasonality or current
events, and measurement error (Marczyk et al., 2005). Measurement error can be reduced by standardizing the administration of the study, i.e., ensuring that all measurements are taken in the same manner for all study participants; making certain that the participants understand the purpose of the study and the instructions; and thoroughly training data collectors in the measurement strategy (Marczyk et al., 2005).
Example: If you weigh yourself on a weighing scale several times throughout the day and get the same reading, the results are reliable, obtained through repeated measures.
Example: A teacher gives her students a math test and repeats it the next week with the same questions. If the students get the same scores, the reliability of the test is high.
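The test-retest idea in the examples above can be sketched numerically. The following is a minimal Python sketch, using made-up scores, that estimates reliability as the correlation between two administrations of the same test; a correlation near 1 suggests consistent scores.

```python
# Test-retest reliability sketch (hypothetical data).
# Reliability is estimated as the Pearson correlation between scores
# from two administrations of the same test, one week apart.

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical scores for five students on the same math test.
week1 = [85, 72, 90, 60, 78]
week2 = [83, 74, 92, 58, 80]

r = pearson_r(week1, week2)
print(f"Test-retest reliability estimate: r = {r:.3f}")
```

Because the made-up week-2 scores track the week-1 scores closely, the estimate comes out close to 1, which would indicate high test-retest reliability.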
What is Validity?
Validity refers to the accuracy of the measurement. Validity shows how suitable a specific test is for a particular situation. If the results are accurate according to the researcher's situation, explanation, and prediction, then the research is valid.
If the method of measuring is accurate, it will produce accurate results. Reliability is necessary for validity but not sufficient: a method that is not reliable cannot be valid, yet a reliable method may still fail to measure what it is supposed to measure.
Validity understood within the context of judging the quality or merit of a study is often referred to as
research validity (Gliner & Morgan, 2000). As a measure of a research instrument or tool, validity is the
degree to which it actually measures what it is supposed to measure (Wan, 2002). For example, a researcher studying hospital inpatient satisfaction might question the validity of a survey instrument whose items or questions actually produce scores measuring physician communication rather than patient satisfaction.
Example: Your weighing scale shows different results each time you weigh yourself within a day, even after handling it carefully and weighing before and after meals. The scale might be malfunctioning. Your method has low reliability, so you are getting inconsistent results that cannot be valid.
Example: Suppose a questionnaire checking the quality of a skincare product is distributed among a group of people, and the same questionnaire is then repeated with many other groups. If you get the same responses from the various participants, the questionnaire has high reliability; whether it is also valid depends on whether it truly measures product quality.
Most of the time, validity is difficult to establish even when the process of measurement is reliable, because consistent results do not by themselves reveal whether they reflect the real situation.
Example: If the weighing scale shows the same result, say 70 kg, each time, even though your actual weight is 55 kg, the scale is malfunctioning. It shows consistent results, so it is reliable, but it cannot be considered valid: the method has high reliability and low validity.
Internal validity is the ability to draw a causal link between your treatment and the dependent variable of interest. The observed changes should be due to the experiment itself, and no external factor, such as the participants' age, level, height, or grade, should influence the variables.
External validity is the ability to generalize your study outcomes to the population at large. The relationship between the situation inside the study and situations outside it determines external validity.
Common threats to internal validity include:
- Testing: The results of one test affect the results of another test. Example: Participants of the first experiment may react differently during the second experiment.
- Instrumentation: Changes in the instrument's calibration. Example: A change in the research question may give results different from the expected results.
- Statistical regression: Groups selected on the basis of extreme scores are not as extreme on subsequent testing. Example: Students who failed the pre-final exam are likely to pass the final exam; they might be more confident and conscientious than before.
- Selection bias: Choosing comparison groups without randomization. Example: A group of trained and efficient teachers is selected to teach children communication skills instead of selecting the teachers randomly.
Reliability can be measured by comparing the consistency of the procedure and its results. There are various methods to measure validity and reliability. Reliability can be measured through various statistical methods depending on the type of reliability, as explained below:
Types of Reliability
- Inter-rater reliability: It measures the consistency of results when different raters assess the same subject. Example: Suppose five researchers measure the academic performance of the same student by asking various questions; if their scores agree, inter-rater reliability is high.
- Internal consistency (split-half): It measures the consistency of the measurement across the items of a single test. Example: The results of the same test are split into two halves and compared with each other. If there is a large difference between the halves, the split-half reliability of the test is low.
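As a rough illustration of the split-half method described above, here is a minimal Python sketch using made-up per-item scores. The correlation between the two half-test scores is stepped up with the Spearman-Brown formula to estimate the reliability of the full-length test.

```python
# Split-half reliability sketch (hypothetical item scores).
# Items are split into two halves (odd vs. even positions), the half
# scores are correlated, and the Spearman-Brown correction estimates
# the reliability of the full-length test.

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical scores: rows = students, columns = 6 test items.
items = [
    [4, 5, 4, 5, 3, 4],
    [2, 3, 2, 2, 3, 2],
    [5, 4, 5, 5, 4, 5],
    [1, 2, 1, 2, 1, 1],
    [3, 3, 4, 3, 3, 3],
]

# Odd-even split: sum items at even and odd positions separately.
half_a = [sum(row[0::2]) for row in items]
half_b = [sum(row[1::2]) for row in items]

r_half = pearson_r(half_a, half_b)
# Spearman-Brown: estimated reliability of the full-length test.
r_full = 2 * r_half / (1 + r_half)
print(f"half correlation = {r_half:.3f}, split-half reliability = {r_full:.3f}")
```

With a positive half-test correlation, the Spearman-Brown estimate is always at least as large as the raw correlation, reflecting that a longer test is more reliable than either half alone.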
Types of Validity
As discussed above, the reliability of a measurement alone cannot determine its validity. Validity is difficult to measure even if the method is reliable. The following types of tests are conducted to measure validity.
- Content validity: It shows whether all aspects of the construct being measured are covered by the test. Example: A language test designed to measure writing, reading, listening, and speaking skills covers all aspects of language ability, which indicates high content validity.
- Face validity: It concerns whether a test or procedure appears, on its surface, to measure what it claims to measure. Example: the type of questions included in the question paper, the time and marks allotted, and the number of questions and their categories. Is it a good question paper for measuring the academic performance of students?
- Construct validity: It shows whether the test is measuring the correct construct (ability, attribute, trait, or skill). Example: Is a test conducted to measure communication skills actually measuring communication skills?
According to experts, it is helpful to address the concepts of reliability and validity explicitly, especially in a thesis or dissertation. The sections in which to address them are given below:
- Methodology: Discuss all planning related to reliability and validity here, including the chosen samples, the sample size, and the techniques used to measure reliability and validity.
- Discussion: Discuss the level of reliability and validity of your results and their influence on the values obtained.
- Literature Review: Discuss the contributions of other researchers to improving reliability and validity.
- Conclusion: Discuss the issues you faced while ensuring reliability and validity.
Errors are normally classified into three categories: systematic errors, random errors, and blunders.
1. Systematic Errors
Systematic errors are due to identified causes and can, in principle, be eliminated. Errors of this type result in
measured values that are consistently too high or consistently too low.
Systematic errors affect the accuracy of a measurement. They cannot be corrected with repeated
measurements because they will always exist. They can be caused by faulty calibration of an instrument,
poorly maintained instruments, or even faulty reading of the instrument by a person.
1. Instrumental. For example, a poorly calibrated instrument, such as a thermometer that reads 102 °C when immersed in boiling water and 2 °C when immersed in ice water at atmospheric pressure. Such a thermometer would result in measured values that are consistently too high.
2. Observational. For example, parallax in reading a meter scale.
3. Environmental. For example, an electrical power brownout that causes measured currents to be consistently too low.
4. Theoretical. Due to simplification of the model system or approximations in the equations describing it. For
example, if your theory says that the temperature of the surrounding will not affect the readings taken when
it actually does, then this factor will introduce a source of error.
2. Random Errors
Random errors are positive and negative fluctuations that cause about one-half of the measurements to be
too high and one-half to be too low. Sources of random errors cannot always be identified.
They affect the precision of a measurement. Random errors are caused by problems like reading the
measurement between two lines on a measuring device or if the reading fluctuates. These types of errors can
be reduced by conducting multiple measurements.
1. Observational. For example, errors in judgment of an observer when reading the scale of a measuring
device to the smallest division.
2. Environmental. For example, unpredictable fluctuations in line voltage, temperature, or mechanical
vibrations of equipment.
Random errors, unlike systematic errors, can often be quantified by statistical analysis; therefore, the effects of random errors on the quantity or physical law under investigation can often be determined.
An example to distinguish between systematic and random errors: suppose that you use a stopwatch to measure the time required for ten oscillations of a pendulum. One source of error will be your reaction time in starting and stopping the watch. During one measurement you may start early and stop late; on the next you may reverse these errors. These are random errors if both situations are equally likely. Repeated measurements produce a series of times that are all slightly different; they vary randomly about an average value.
If a systematic error is also included, for example, your stopwatch not starting from zero, then your measurements will vary not about the true average value but about a displaced value.
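The pendulum example can be illustrated with a small simulation (all numbers here are hypothetical): random reaction-time error averages out over many repeated trials, while a systematic stopwatch offset displaces the average and survives no matter how many measurements are taken.

```python
# Sketch distinguishing random from systematic error (simulated data).
# We "measure" a true time of 20.0 s with random reaction-time error,
# then again with an added systematic offset (a stopwatch that starts
# from 0.3 s instead of zero).
import random

random.seed(42)
TRUE_TIME = 20.0   # true duration of ten oscillations, in seconds
OFFSET = 0.3       # hypothetical systematic error, in seconds
N = 1000           # number of repeated measurements

random_only = [TRUE_TIME + random.gauss(0, 0.1) for _ in range(N)]
with_offset = [TRUE_TIME + OFFSET + random.gauss(0, 0.1) for _ in range(N)]

mean_random = sum(random_only) / N
mean_offset = sum(with_offset) / N

# Averaging many trials cancels the random error, so mean_random is
# close to the true value; the systematic offset survives averaging.
print(f"mean with random error only: {mean_random:.3f} s")
print(f"mean with systematic offset: {mean_offset:.3f} s")
```

Increasing N shrinks the scatter of the first mean toward the true value but does nothing to remove the 0.3 s displacement, which is exactly why systematic errors cannot be corrected by repeated measurements.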
3. Blunders
A final source of error, called a blunder, is an outright mistake. A person may record a wrong value, misread
a scale, forget a digit when reading a scale or recording a measurement, or make a similar blunder. These
blunders should stick out like sore thumbs if we make multiple measurements or if one person checks the
work of another. Blunders should not be included in the analysis of data.
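One simple way to make blunders "stick out" when multiple measurements are available is to flag readings that fall far from the median before analysis. The following is a minimal Python sketch with made-up readings; the tolerance value is an assumption chosen for illustration.

```python
# Sketch of screening repeated measurements for blunders
# (hypothetical data): flag values far from the median, which likely
# come from misreading a scale or recording a wrong value.

def flag_blunders(values, tolerance):
    """Return (kept, flagged) lists, flagging values farther than
    `tolerance` from the median of the readings."""
    ordered = sorted(values)
    n = len(ordered)
    median = (ordered[n // 2] if n % 2 else
              (ordered[n // 2 - 1] + ordered[n // 2]) / 2)
    kept = [v for v in values if abs(v - median) <= tolerance]
    flagged = [v for v in values if abs(v - median) > tolerance]
    return kept, flagged

# Five length readings in cm; 5.12 was probably a misrecorded 51.2.
readings = [51.2, 51.4, 5.12, 51.3, 51.1]
kept, flagged = flag_blunders(readings, tolerance=2.0)
print("kept:", kept, "flagged:", flagged)
```

The median is used instead of the mean because a single blunder can drag the mean far from the typical value, whereas the median stays near the cluster of legitimate readings.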
References:
Marczyk, G., Dematteo, D. & Festinger, D. (2005). Essentials of Research Design and Methodology.
Hoboken, NJ: John Wiley & Sons, Inc.
Gliner, J.A. & Morgan, G.A. (2000). Research Methods in Applied Settings: An Integrated Approach to Design and Analysis. Mahwah, NJ: Lawrence Erlbaum Associates, Inc.
https://www.roslynschools.org/cms/lib/NY02205423/Centricity/Domain/110/Sources%20of%20Error.pdf
http://www.physics.nmsu.edu/research/lab110g/html/ERRORS.html