
A Written Report

Presented to the College of
Arts and Sciences
Southern Luzon State University
Lucban, Quezon





In Partial Fulfilment of the
Requirements for the Subject
PSY106 Experimental Psychology




Gertine Rodanilla
Joseph James Verano
Karla Jane Dealino
Kenneth Carlo H. Calibjo




July 2013


PRAYER
Let us pray,

O St. Michael the Archangel, defend us in battle; be our safeguard against the
wickedness and snares of the devil. May God rebuke him, we humbly pray; and do thou, O Prince
of the heavenly host, by the power of God, cast into hell Satan and all the other evil spirits who
prowl about the world seeking the ruin of souls. Amen.

ENERGIZER:
Brain teasers












EXPERIMENTAL DESIGN: THE CASE OF TWO INDEPENDENT GROUPS
When we study two independent groups, we may use an experimental design or we may
use the method of systematic observation. In conducting an experiment, participants are
randomly assigned to the independent groups, forming a two-randomized-groups design. In
using the method of systematic observation, we select two groups with differing
characteristics that are already formed. The t-test is an appropriate method of statistical analysis
of data from either design.
Two values assigned to the independent variable:
- Treatments
- Methods

Two-randomized-groups design - Participants are randomly assigned to two groups, and
which group gets the treatment is also randomly determined.
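
To make the random-assignment idea concrete, here is a minimal Python sketch, assuming only a pool of participant IDs; the function name and the pool of 18 IDs are hypothetical illustrations, not part of the text.

import random

def randomize_two_groups(participants, seed=None):
    """Randomly split a participant pool into two groups, then randomly
    decide which group receives the treatment (two-randomized-groups design)."""
    rng = random.Random(seed)
    pool = list(participants)
    rng.shuffle(pool)                          # random assignment of participants
    half = len(pool) // 2
    group_a, group_b = pool[:half], pool[half:]
    if rng.random() < 0.5:                     # treatment condition is also randomized
        return {"treatment": group_a, "control": group_b}
    return {"treatment": group_b, "control": group_a}

groups = randomize_two_groups(range(1, 19), seed=1)   # 18 hypothetical participants
print(groups)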

ESTABLISHING EQUALITY OF GROUPS THROUGH RANDOMIZATION

In an independent-groups design, the means (averages) of the groups on the dependent
variable do not differ reliably at the start of the experiment. In a two-group design, the two
values of the independent variable are then respectively administered to the two groups. If the
statistical test indicates that the two groups are reliably different, it may be concluded that this
difference is due to the variation of the independent variable.

In any given experiment, however, we want the two groups to be equal only on those
factors that might affect our dependent variable.

Unequal Groups are Unlikely - If you wish to reduce the difference in the means of
the two groups, use a large number of participants.

Unequal Groups are Possible - Even with a comparatively large number of
participants, it is still possible, although unlikely, that the means of the groups will differ
considerably due to random fluctuations.
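
As a rough illustration of why unequal groups become unlikely as the number of participants grows, the following simulation sketch draws every score from one and the same population, so any difference between group means is due purely to random fluctuation; the population parameters (mean 100, SD 15), the group sizes, and the number of trials are arbitrary assumptions chosen only for illustration.

import random
import statistics

def mean_gap_after_randomization(n_per_group, trials=2000, seed=0):
    """Average absolute difference between two randomly formed group means
    when there is no true treatment effect."""
    rng = random.Random(seed)
    gaps = []
    for _ in range(trials):
        scores = [rng.gauss(100, 15) for _ in range(2 * n_per_group)]
        rng.shuffle(scores)
        group1, group2 = scores[:n_per_group], scores[n_per_group:]
        gaps.append(abs(statistics.mean(group1) - statistics.mean(group2)))
    return statistics.mean(gaps)

for n in (5, 20, 80):
    print(n, round(mean_gap_after_randomization(n), 2))   # the gap shrinks as n grows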

Analysis of Covariance - A statistical technique that equates the two groups so that
differences on an extraneous variable do not differentially affect the dependent-variable scores.

Science is Self-Correcting - If any given experiment leads to a false conclusion, and if
the conclusion has any importance at all, an inconsistency between the results of the invalid
experiment and the data from a later experiment will become apparent. The existence of this
problem will then lead to a solution, which will be a matter of discarding the incorrect conclusion.

In the two-randomized-groups design, participants ideally are randomly selected from a
population, they are randomly assigned to two groups, and the treatment conditions are also
randomly assigned to the two groups.

Summary of the computations of t for a Two-Independent-Groups Design

We have emphasized the great value of computer analysis as well as the importance of
understanding what the computer is doing. If you are able to follow the steps in this section, you
will achieve that understanding for this application of the test. For elaboration of the use of the
computer in psychology, refer to Appendix C.

Assume that we have obtained the following dependent-variable values for the two
groups and that we seek to test the null hypothesis that \mu_1 = \mu_2 (or equivalently \mu_1 - \mu_2 = 0):


Group 1: 10, 11, 11, 12, 15, 16, 16, 17
Group 2: 8, 9, 12, 12, 12, 13, 14, 15, 16, 17

1. Start with Equation 6-2, the equation for computing t:


t = \frac{\bar{X}_1 - \bar{X}_2}{\sqrt{\left(\frac{SS_1 + SS_2}{n_1 + n_2 - 2}\right)\left(\frac{1}{n_1} + \frac{1}{n_2}\right)}}

where:
\bar{X}_1 = mean of group 1 (the experimental group)
\bar{X}_2 = mean of group 2 (the control group)
SS_1 = sum of squares for group 1
SS_2 = sum of squares for group 2
n_1 = number of participants in group 1
n_2 = number of participants in group 2

Note: subscripts have been used to indicate which group the values are for.

2. Compute the sum of X (i.e., \sum X), the sum of X^2 (i.e., \sum X^2), and n for each group:

Group 1: \sum X_1 = 108, \sum X_1^2 = 1,512, n_1 = 8
Group 2: \sum X_2 = 128, \sum X_2^2 = 1,712, n_2 = 10

3. Using Equation 6-1, compute the mean for each group:

\bar{X} = \frac{\sum X}{n}

where:
\sum = the summation symbol, interpreted as "the sum of"
X = the score obtained for each participant
n = the number of people in the group

\bar{X}_1 = 108 / 8 = 13.50
\bar{X}_2 = 128 / 10 = 12.80

4. Using Equation 6-3, compute the sum of squares for each group:

SS = \sum X^2 - \frac{(\sum X)^2}{n}

SS_1 = 1,512 - \frac{(108)^2}{8} = 1,512 - 1,458 = 54.00

SS_2 = 1,712 - \frac{(128)^2}{10} = 1,712 - 1,638.40 = 73.60
5. Substitute the preceding values in Equation 6-2:

t = \frac{13.50 - 12.80}{\sqrt{\left(\frac{54.00 + 73.60}{8 + 10 - 2}\right)\left(\frac{1}{8} + \frac{1}{10}\right)}}

6. Perform the operations as indicated and determine that the value of t is approximately 0.52.

7. Determine the number of degrees of freedom associated with the preceding
value of t:

df = N - 2 = 18 - 2 = 16

8. Enter the table of t, and determine the probability associated with this value of t.
In this example 0.70 > P > 0.60. Therefore, assuming a required reliability level of
0.05, the null hypothesis is not rejected and we reach the appropriate conclusion
about our empirical hypothesis. (A computational sketch of these steps follows.)
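
The same eight steps can be carried out in a few lines of Python. The sketch below follows Equations 6-1 through 6-3 as summarized above and uses the Group 1 and Group 2 values from this example; the function names are our own and not from the text.

import math

group1 = [10, 11, 11, 12, 15, 16, 16, 17]
group2 = [8, 9, 12, 12, 12, 13, 14, 15, 16, 17]

def sum_of_squares(x):
    """Equation 6-3: SS = sum(X^2) - (sum(X))^2 / n."""
    return sum(v * v for v in x) - sum(x) ** 2 / len(x)

def t_two_independent_groups(x1, x2):
    """Equation 6-2 for the two-independent-groups design."""
    n1, n2 = len(x1), len(x2)
    mean1, mean2 = sum(x1) / n1, sum(x2) / n2              # Equation 6-1
    ss1, ss2 = sum_of_squares(x1), sum_of_squares(x2)
    pooled = (ss1 + ss2) / (n1 + n2 - 2)
    return (mean1 - mean2) / math.sqrt(pooled * (1 / n1 + 1 / n2))

t = t_two_independent_groups(group1, group2)
df = len(group1) + len(group2) - 2
print(round(t, 2), df)   # approximately 0.52 with 16 degrees of freedom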

THE NULL HYPOTHESIS

There is no difference between the population means of the two groups on the dependent
variable.
The null hypothesis is a hypothesis that we attempt to disprove (reject).
It asserts that the difference between the population means is zero.

Parameter = a measure ascertained from all possible observations of a population.
Statistic = a value computed only from a sample.

\mu = stands for the population mean
\bar{X} = stands for the sample mean


We seek to falsify the null hypothesis. The present null hypothesis states that the difference
between the population means of the two groups is zero (\mu_1 - \mu_2 = 0). If the difference between
the sample means (\bar{X}_1 - \bar{X}_2) is large, then the null hypothesis is probably false and we reject it.
However, if the difference between the sample means is small, we probably fail to reject the null
hypothesis. When we fail to reject the null hypothesis, we conclude that any difference between our
sample means is due to chance (random fluctuations).

Degrees of Freedom - A concept that expresses how much freedom you have in
determining values in an array of numbers; it is a function of the number of participants in
the experiment.

Testing the Null Hypothesis - To reject the null hypothesis is to reject the claim that
\mu_1 - \mu_2 = 0; that is, we refuse to regard it as reasonable that the true difference between
the means of the two groups is zero when we have obtained such a large difference in sample means.

The independent variable probably influenced the dependent variable, which was
precisely the purpose of the experiment.
Specifying the criterion for the test

Setting P is arbitrary - the value of P (also known as the alpha level) is
established prior to the collection of the data and serves as the criterion for testing the
null hypothesis.

The seriousness of the decision sets the value of P.

One criterion is how important it is to believe in the conclusion, that is, to avoid the
error of rejecting the null hypothesis when it is in fact true.

Testing the Empirical Hypothesis - the evidence report asserts that the values for the
experimental group were reliably higher than those for the controls.

One- versus Two-Tailed Tests - It is more likely that you can reject your null hypothesis
with a one-tailed test. That is, a lower value of t is required to reject a null hypothesis
when using a one-tailed test. The word "tail" refers to a tail of the distribution of t.
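
A quick way to see the difference is to compare critical values of t. The short sketch below assumes the SciPy library is available; with 16 degrees of freedom and a 0.05 criterion, a smaller obtained t suffices for the one-tailed criterion than for the two-tailed one.

from scipy import stats

alpha, df = 0.05, 16
t_two_tailed = stats.t.ppf(1 - alpha / 2, df)   # critical t, two-tailed test
t_one_tailed = stats.t.ppf(1 - alpha, df)       # critical t, one-tailed test
print(round(t_two_tailed, 3), round(t_one_tailed, 3))   # about 2.120 versus 1.746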

Steps in Testing an Empirical Hypothesis

1. State the problem.
2. State the hypothesis.
3. Draw samples from each population.
4. State the null hypothesis: \mu_1 - \mu_2 = 0.
5. Establish a probability value for determining whether to reject the null hypothesis.
6. Collect the data and statistically analyze them.
7. If the means are in the direction specified by the hypothesis and if the null hypothesis is
rejected, it may be concluded that the hypothesis is confirmed, and vice versa (see the
sketch after this list).
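
As a rough sketch of steps 4 through 7 applied to the worked example above (steps 1 through 3 are stated in prose), the code below uses scipy.stats.ttest_ind, which performs a two-tailed two-independent-groups t-test; the 0.05 criterion matches the example, and SciPy is assumed to be available.

from scipy import stats

group1 = [10, 11, 11, 12, 15, 16, 16, 17]          # experimental group
group2 = [8, 9, 12, 12, 12, 13, 14, 15, 16, 17]    # control group

alpha = 0.05                                 # step 5: criterion set in advance
t, p = stats.ttest_ind(group1, group2)       # step 6: analyze the data
print(round(t, 2), round(p, 2))              # about 0.52 and 0.61

# Step 7: decide about the empirical hypothesis.
means_in_predicted_direction = sum(group1) / len(group1) > sum(group2) / len(group2)
if p < alpha and means_in_predicted_direction:
    print("Null hypothesis rejected; the empirical hypothesis is confirmed.")
else:
    print("Fail to reject the null hypothesis.")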

Borderline (Marginal) Reliability

The t-test is decisive: a t either is or is not reliable; it cannot be "very" reliable.

Errors here concern changing the criterion for rejecting the null hypothesis after your
statistical analysis has been conducted. The probability values upon which the table of t
was computed were not based on how far into the critical region an obtained value might fall.






THE STANDARD DEVIATION AND VARIANCE

TWO KINDS OF STATISTICS

1. Measures of Central Tendency
a. Mean - the most common measure of central tendency; the arithmetic average of all the values.
b. Mode - the most frequently occurring value in the distribution.
c. Median - the value above which are 50% of the values and below which are 50% of
the values.

2. Measures of Variability - tell us how the values are spread out.

a. Standard Deviation - symbolized as s; usually the most reliable of the measures in
the sense that it varies least from sample to sample. A larger standard
deviation indicates greater variability of the distribution of scores.
Homogeneous - more similar
Heterogeneous - less similar

Formula for the standard deviation:

s = \sqrt{\frac{SS}{n - 1}} = \sqrt{\frac{\sum X^2 - (\sum X)^2 / n}{n - 1}}

b. Variance - the square of the standard deviation.
c. Range - the range of a distribution of scores equals the highest value minus the lowest
value (a computational sketch of these measures follows this list).
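
For concreteness, the sketch below computes each of these measures for the Group 1 scores from the earlier worked example, using only Python's standard statistics module.

import statistics

scores = [10, 11, 11, 12, 15, 16, 16, 17]    # Group 1 from the worked example

mean = statistics.mean(scores)               # central tendency: arithmetic average
median = statistics.median(scores)           # 50% of values above, 50% below
modes = statistics.multimode(scores)         # most frequently occurring value(s)
s = statistics.stdev(scores)                 # sample standard deviation
variance = statistics.variance(scores)       # square of the standard deviation
value_range = max(scores) - min(scores)      # highest value minus lowest value

print(mean, median, modes, round(s, 2), round(variance, 2), value_range)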


ASSUMPTIONS UNDERLYING THE USE OF STATISTICAL TEST

1. The population distribution is normal.
Normality- means that the distribution is bell-shaped or Gaussian in form
2. The variances of the groups are homogeneous.
The way in which the distributions are spread out is about the same for the different
groups in the experiment; a bit more precisely, the standard deviations of each group of
dependent-variable scores, multiplied by themselves (that is, the variances), are about the same.
3. The treatment effects and the error effects are additive.
4. Dependent-variable values are independent.
Each dependent-variable value must be independent of every other dependent-variable
value.

Nonparametric tests (distribution-free tests) do not make assumptions about
distributions, such as normality and homogeneity of variance.

Parametric tests - like the t-test - make assumptions about the parameters of the
distributions.
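
As an illustration of the distinction, the sketch below runs both the parametric t-test and a nonparametric (distribution-free) Mann-Whitney U test on the worked-example data; SciPy is assumed to be available.

from scipy import stats

group1 = [10, 11, 11, 12, 15, 16, 16, 17]
group2 = [8, 9, 12, 12, 12, 13, 14, 15, 16, 17]

# Parametric: assumes normality and homogeneity of variance.
t_stat, t_p = stats.ttest_ind(group1, group2)

# Nonparametric: makes no such distributional assumptions.
u_stat, u_p = stats.mannwhitneyu(group1, group2, alternative="two-sided")

print(round(t_p, 2), round(u_p, 2))   # neither test rejects the null hypothesis here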

NUMBER OF PARTICIPANTS PER GROUP

The larger the true difference between groups, the smaller the number of
participants required for the experiment; and the smaller the group variances, the fewer
participants will be required.

Error Variance - the denominator of the t-ratio; a measure of the extent to which
participants treated alike exhibit variability in their dependent-variable values.

Two general ways to increase the likelihood of rejecting the null hypothesis (illustrated in
the sketch below):

1. Increase the difference between the dependent-variable means of the groups.
2. Decrease the variability in the experiment.
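
A small numerical sketch of these two routes follows; the score sets are made up purely for illustration and are not from the text.

import math

def t_value(x1, x2):
    """Two-independent-groups t (Equation 6-2 from the computation summary)."""
    n1, n2 = len(x1), len(x2)
    m1, m2 = sum(x1) / n1, sum(x2) / n2

    def ss(x, m):
        return sum((v - m) ** 2 for v in x)

    pooled = (ss(x1, m1) + ss(x2, m2)) / (n1 + n2 - 2)
    return (m1 - m2) / math.sqrt(pooled * (1 / n1 + 1 / n2))

baseline    = t_value([12, 14, 16, 18], [10, 12, 14, 16])  # small mean difference
larger_diff = t_value([16, 18, 20, 22], [10, 12, 14, 16])  # route 1: bigger difference
less_spread = t_value([14, 15, 15, 16], [12, 13, 13, 14])  # route 2: less variability

print(round(baseline, 2), round(larger_diff, 2), round(less_spread, 2))   # t rises both ways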

WAYS TO REDUCE ERROR VARIANCE

A. Reduce Individual Differences
One obvious way to reduce the error variance of the groups is to reduce the
extent to which our participants differ from one another.
Psychologists increase the homogeneity of their groups by selection.
One serious objection to selecting participants is that you restrict the extent to
which you can generalize your results.
The greater the extent to which you select homogeneous participants, the less
sound will be your basis for a broad generalization.
B. Use Precise Procedures
You can reduce your variances by treating all participants in the same group
as precisely alike as possible.
The greater the number of extraneous variables that are operating in a random
fashion, the greater your variances will be.
You should recognize that when you eliminate extraneous variables, you
might restrict the degree to which you can generalize to situations in which they are present.
C. Reduce Errors
To reduce your variances, reduce errors in reading your measuring
instruments, in recording your data, and in your statistical analysis.
The more errors that are present, the larger will be the variances.
The more reliable your measures of the dependent variable, the smaller will be
your error variance.
One way to increase the reliability of the dependent-variable measure is to
make more than one observation on each participant (see the sketch after this list).
D. Other Ways
Another technique is the design that you select.
A factorial design can also be used to decrease your error variance.
Analysis of covariance is frequently effective in reducing error variance.
Analysis of covariance enables you to obtain a measure of what you think is a
particularly relevant extraneous variable that you are not controlling.
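
To illustrate the point about taking more than one observation per participant (item C above), here is a small simulation sketch; the "true level" of each participant, the measurement-noise parameters, and the group size are arbitrary assumptions, not values from the text.

import random
import statistics

def group_sd(obs_per_participant, n_participants=20, trials=500, seed=0):
    """Spread of a group's scores when each score is the mean of k observations."""
    rng = random.Random(seed)
    sds = []
    for _ in range(trials):
        scores = []
        for _ in range(n_participants):
            true_level = rng.gauss(50, 5)                   # stable individual difference
            observations = [true_level + rng.gauss(0, 10)   # noisy single observations
                            for _ in range(obs_per_participant)]
            scores.append(statistics.mean(observations))    # participant's recorded score
        sds.append(statistics.stdev(scores))
    return statistics.mean(sds)

for k in (1, 4, 16):
    print(k, round(group_sd(k), 2))   # spread shrinks as observations per participant increase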

INTERIM SUMMARY
Specific Ways to Decrease Error Variance
A. Select homogeneous participants according to their scores on some relevant
measure.
B. Standardize, in a strict fashion, the experimental procedures used.
C. Reduce errors in observing and recording the dependent-variable values.
D. Select a relatively precise design.
E. Increase the number of participants per group.
F. Replicate the experiment.

REPLICATION
Replication means repeating an experiment.
The methods employed by a researcher are repeated in an effort to confirm or
disconfirm the findings obtained.
Replication refers to repeating the experiment, not to confirming the original findings;
to say that researchers "failed to replicate an experiment" literally means that they failed
to repeat the original methodology.

Ways to Reduce Error Variance
Reduce Individual Differences
First, recall that our participants, when they enter the experimental situation, are all
different, and the larger such differences are, the greater will be the variances of
our groups. However, one serious objection to selecting participants is that you
restrict the extent to which you can generalize your results.
Use Precise Procedures
The idea is to treat all participants in the same group as precisely alike as possible.

Reduce Errors
To reduce your variances, reduce errors in reading your measuring instruments, in
recording your data, and in your statistical analysis.
The more errors that are present, the larger will be the variances, assuming that
such errors are of a random nature.


Other Ways:

Analysis of Covariance

This technique enables you to obtain a measure of what you think is a particularly
relevant extraneous variable that you are not controlling.
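
As a simplified illustration of the covariance-adjustment idea (not the full analysis-of-covariance procedure), the sketch below removes the linear influence of a measured covariate from each dependent-variable score before comparing group means; all of the scores and covariate values are hypothetical.

import statistics

# Hypothetical data: dv = dependent-variable score, cov = a measured extraneous
# variable (for example, a pre-test) that was not controlled experimentally.
group1_dv, group1_cov = [14, 16, 15, 18, 17], [10, 12, 11, 14, 13]
group2_dv, group2_cov = [12, 13, 15, 14, 16], [9, 10, 13, 12, 14]

dv = group1_dv + group2_dv
cov = group1_cov + group2_cov
mean_dv, mean_cov = statistics.mean(dv), statistics.mean(cov)

# Pooled regression slope of the dependent variable on the covariate.
slope = (sum((c - mean_cov) * (y - mean_dv) for c, y in zip(cov, dv))
         / sum((c - mean_cov) ** 2 for c in cov))

def adjusted(scores, covariates):
    """Remove the covariate's linear influence from each dependent-variable score."""
    return [y - slope * (c - mean_cov) for y, c in zip(scores, covariates)]

# Compare the groups on covariate-adjusted means rather than raw means.
print(round(statistics.mean(adjusted(group1_dv, group1_cov)), 2),
      round(statistics.mean(adjusted(group2_dv, group2_cov)), 2))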




Closing Prayer,

Glory be to the Father, and to the Son, and to the Holy Spirit. Amen.
