
CONTINUOUS ASSIGNMENT

Table of Contents

A.
    T-test
    ANOVA test
    Chi-Square test
    Mann-Whitney U test
    Other important tests that are available in the Insomnia output file
    Regression test
    Correlation
    Descriptive Statistics
B.

A.

T-test

The T-test is a type of inferential statistic used to determine whether there is a significant
difference between the means of two sample groups. The procedure differs according to the
number of samples involved. In a one-sample test, the mean of the sample group is compared
to a specific value provided by the researcher. When more than one sample is present, the mean
of each sample group is compared to a specific value provided by the user, which is generally 0,
and the resulting differences for the sample groups are then compared to each other. This test is
very widely used in statistics for hypothesis testing.

Three key data values are required to calculate a T-test: the contrast between the mean values
of the two data sets, commonly known as the mean difference; the standard deviation of each
data set; and the number of data values in each data set. Different variants of the test can be
performed depending on the available data and the type of analysis required.

The essence of the T-test is to compare the average values of two data sets in order to determine
whether they belong to the same population. The mathematical approach is to take a sample
from each of the two data sets and establish the problem statement under a null hypothesis that
assumes the two means are equal. Certain values are then calculated using the appropriate
formulas and compared against standard reference values, which allows a decision on whether
the null hypothesis should be accepted or rejected. If the null hypothesis is rejected, this
signifies that the observed difference in the data is strong and that there is little possibility that
it has been obtained by chance.
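The calculation from the three key values can be sketched by hand. The snippet below (with made-up illustrative numbers, not the assignment data) computes the t statistic in the "equal variances not assumed" (Welch) form that also appears in the SPSS output:

```python
import math
import statistics as stats

def welch_t(sample1, sample2):
    """Two-sample t statistic with equal variances not assumed (Welch form)."""
    mean_diff = stats.mean(sample1) - stats.mean(sample2)  # the mean difference
    # Standard error built from each group's variance and size
    se = math.sqrt(stats.variance(sample1) / len(sample1)
                   + stats.variance(sample2) / len(sample2))
    return mean_diff / se

# Hypothetical ratings for two groups (not the assignment data)
group_a = [7.1, 7.9, 8.2, 6.8, 7.5]
group_b = [6.0, 6.4, 7.1, 5.8, 6.6]
print(round(welch_t(group_a, group_b), 3))  # → 3.269
```

A t value this far from 0 would then be compared against the critical value from a t table to decide on the null hypothesis.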

The T-test has some advantages: its interpretation is very simple, it is robust, and both data
gathering and calculation are easy. However, it also has limitations. Because the entire test rests
on assumptions, if those assumptions are violated by the data collected for the test, the
reliability of the test becomes questionable.

Case study:

T test 1

(Source: SPSS)

The group statistics show that the two groups in this test are general health and the number of
alcoholic drinks per day. The first group has a mean of 7.71 and the second a mean of 7.95. The
sample test shows an F value of 2.403 for the general health parameter. With equal variances
assumed the t value is -0.750, and with equal variances not assumed it is -0.806. The mean
difference in both cases is -0.232.

T test 2

(Source: SPSS)

The group statistics for the overall wellbeing and relationships columns show that groups 1 and
2 have means of 1.17 and 2.38 respectively. The sample statistics show an F value of 2.402 for
the relationships parameter with equal variances assumed, with a t value of -1.50; with equal
variances not assumed the t value is -1.726. The mean difference in both cases is -1.208.

T test 3

(Source: SPSS)

This T-test evaluates the overall wellbeing and consent parameters. The 'not at all' group shows
a mean of 3 and group 2 a mean of 2.63. The sample test shows a high F value of 17.111 with a
significance level below 0.05. The t value is 0.514 with equal variances assumed and 0.449 with
equal variances not assumed.

ANOVA test

The term 'ANOVA' stands for Analysis of Variance. It is an analysis tool in statistics that
partitions the aggregate variability observed within a data set into two parts: systematic factors
and random factors. The systematic factors exert a statistical influence on the data set, while the
random factors do not. Statistical analysts perform the ANOVA test to find out how much
influence the independent variables in a regression study have on the dependent variable.
Before this method was invented in 1918, the only analysis methods available were the T-test
and the Z-test, so ANOVA is also known as an extension of those methods. It is also called
'Fisher analysis' after its inventor. Its first application was in experimental psychology, from
which it spread, in turn, to other more complex subjects.

A one-way ANOVA is applied when there are three or more data groups, and it provides
information on the connection between the independent and dependent variables. If no actual
variance exists between the data groups, the F-ratio of the ANOVA will be close to 1. The
formula is F = MST / MSE, where F denotes the ANOVA coefficient, MST denotes the mean
sum of squares due to treatment and MSE denotes the mean sum of squares due to error.
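The F = MST / MSE calculation can be illustrated with a short Python sketch (the three groups below are made-up illustrative numbers, not the assignment data):

```python
import statistics as stats

def f_ratio(groups):
    """One-way ANOVA coefficient F = MST / MSE."""
    values = [x for g in groups for x in g]
    grand_mean = stats.mean(values)
    k, n = len(groups), len(values)
    # Treatment (between-group) sum of squares, divided by its df (k - 1)
    mst = sum(len(g) * (stats.mean(g) - grand_mean) ** 2 for g in groups) / (k - 1)
    # Error (within-group) sum of squares, divided by its df (n - k)
    mse = sum((x - stats.mean(g)) ** 2 for g in groups for x in g) / (n - k)
    return mst / mse

# Three hypothetical groups; near-identical groups would give F close to 1
print(f_ratio([[1, 2, 3], [2, 3, 4], [6, 7, 8]]))  # → 21.0
```

An F value far above 1, as here, indicates that the between-group variation dominates the within-group variation.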

The starting step in ANOVA is to analyse the factors that affect the provided data set. After the
ANOVA test is complete, the analyst performs additional testing on the systematic factors that
contribute measurably to the data set's inconsistency.

The ANOVA test allows comparison of more than two groups in order to determine whether
any relationship exists between those data sets. The hypothesis that no real difference exists
between the data groups, in which case the F-ratio will be around 1, is known as the null
hypothesis. ANOVA comes in two main types, one-way and two-way, and the main difference
between them is the number of independent variables included in the variance test. One-way
ANOVA determines whether a significant difference exists between the mean values of three or
more independent groups. Two-way ANOVA is an extension of the one-way version: in
one-way ANOVA a single independent variable acts on the dependent variable, while in the
two-way version there are two independent variables.

This method has some advantages: it provides an overall test of the equality of the group
means, and as a parametric test it is comparatively powerful. However, it also has
shortcomings, such as the strict requirement that the population distribution be normal.

Case study:

Bayesian Estimates of Coefficients a,b,c

                                                     Posterior                   95% Credible Interval
Parameter                                            Mode    Mean    Variance    Lower Bound   Upper Bound
how stressed over last month? = not at all           2.667   2.667   1.574       .203          5.130
how stressed over last month? = 2                    .d      .d      .d          .d            .d
how stressed over last month? = 3                    3.444   3.444   .525        2.022         4.867
how stressed over last month? = 4                    4.364   4.364   .429        3.077         5.650
how stressed over last month? = 5                    5.000   5.000   .525        3.578         6.422
how stressed over last month? = 6                    5.727   5.727   .429        4.441         7.014
how stressed over last month? = 7                    5.667   5.667   .157        4.888         6.446
how stressed over last month? = 8                    .d      .d      .d          .d            .d
how stressed over last month? = 8                    6.897   6.897   .163        6.104         7.689
how stressed over last month? = 9                    6.167   6.167   .393        4.935         7.398
how stressed over last month? = extremely stressed   8.400   8.400   .944        6.492         10.308
a. Dependent Variable: mood
b. Model: how stressed over last month?
c. Assume standard reference priors.
d. This parameter is redundant. Posterior statistics are not calculated.

Bayesian Estimates of Error Variance a

                  Posterior                   95% Credible Interval
Parameter         Mode    Mean    Variance    Lower Bound   Upper Bound
Error variance    4.552   4.721   .420        3.618         6.152
a. Assume standard reference priors.

Bayesian ANOVA

Chi-Square test

The Chi-square test is a method used for hypothesis testing. It measures the similarity between
a model and the actual observed data. The data sets must satisfy some conditions before being
included in a chi-square test: the data have to be random, raw and mutually exclusive. For
instance, the toss of an unbiased coin meets these criteria. The method measures the size of any
inconsistency between the actual results obtained and the expected values, given the sample
size and the number of variables. It is very helpful for analysing the differences between actual
and expected frequencies in categorical variables, nominal variables to be precise. The test
depends strongly on the size of the difference, the degrees of freedom and the sample size,
where the degrees of freedom indicate the highest number of values that are logically
independent. The method can also be used to identify whether two variables are dependent on
each other or independent, and it is a great help in testing the goodness-of-fit between a
theoretical distribution and an observed distribution of frequencies.

The formula to calculate chi-square is χc² = Σ (Oi − Ei)² / Ei, where c denotes the degrees of
freedom, O stands for the observed values and E stands for the expected values. To test
independence, the analyst collects the necessary data on the two variables chosen for the test.
After the data are collected, the observed frequencies for each variable are compared to the
expected frequencies using the formula above. If no relationship exists between the chosen
variables, the observed frequencies are expected to be close to the expected ones.
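The formula can be illustrated with a minimal Python sketch, using the unbiased-coin example mentioned earlier (the 60/40 counts are hypothetical):

```python
def chi_square(observed, expected):
    """Chi-square statistic: the sum of (O_i - E_i)^2 / E_i over all categories."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Coin-toss example: 60 heads and 40 tails observed in 100 tosses,
# against an expected 50/50 split for an unbiased coin.
print(chi_square([60, 40], [50, 50]))  # → 4.0
```

With 1 degree of freedom, this statistic would then be compared against the critical chi-square value to judge whether the coin is plausibly unbiased.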

This method is also well suited to testing goodness-of-fit, which is essentially the accuracy of
the sample data. If the sample data resemble the data of the larger population that the sample
minimally represents, this property is known as goodness-of-fit. If the sample data do not
match the data of the larger population, the sample cannot be used to describe or draw
conclusions about that population.

The method is best suited to analyses where the data are randomly chosen; randomness is in
fact a mandatory property for data before the chi-square method can be applied. Some features
make the method preferable: it is robust with respect to the spread of the data, its computation
is very easy, and it allows detailed information to be derived after the analysis is performed.
There are also limitations, such as the requirement on sample size, and the data become very
difficult to interpret when the total number of dependent and independent variables is 20 or
more.

Case study:

Case Processing Summary

                                Cases
                                Valid           Missing        Total
                                N     Percent   N    Percent   N     Percent
general health * do you smoke   249   98.0%     5    2.0%      254   100.0%

general health * do you smoke Crosstabulation


Count
do you smoke
yes no Total
general health 3 3 2 5
4 1 4 5
5 4 14 18
6 4 15 19
7 4 31 35
8 10 74 84
9 3 51 54
very good 1 28 29
Total 30 219 249

Chi-Square Tests

                              Value     df   Asymptotic Significance (2-sided)
Pearson Chi-Square            18.547a   7    .010
Likelihood Ratio              14.773    7    .039
Linear-by-Linear Association  14.047    1    .000
N of Valid Cases              249
a. 8 cells (50.0%) have expected count less than 5. The minimum expected count is .60.

Chi square test 1

(Source: SPSS)

The Chi-square test is performed on the general health and 'do you smoke' parameters. The
results show a Pearson Chi-square value of 18.547 with 7 degrees of freedom, a likelihood ratio
of 14.773, also with 7 degrees of freedom, and a linear-by-linear association of 14.047 with 1
degree of freedom. Eight cells have an expected count of less than 5, and the minimum
expected count is 0.60. Since the significance value (.010) is below 0.05, the test suggests a
relationship between general health and smoking, and better models may be observed by
applying the Chi-square test to different parameters.

Case Processing Summary

                                                          Cases
                                                          Valid          Missing        Total
                                                          N    Percent   N    Percent   N     Percent
how many alcoholic drinks per day * problem with sleep?   238  93.7%     16   6.3%      254   100.0%
how many caffeine drinks per day * problem with sleep?    243  95.7%     11   4.3%      254   100.0%
hours sleep/ week nights * problem with sleep?            251  98.8%     3    1.2%      254   100.0%
hours sleep/ week ends * problem with sleep?              250  98.4%     4    1.6%      254   100.0%
how many hours sleep needed * problem with sleep?         247  97.2%     7    2.8%      254   100.0%

To understand and facilitate the statistical applications, multiple Chi-square tests are performed
on the dataset with different parameters taken into consideration: alcoholic drinks per day,
caffeine drinks per day, hours of sleep on week nights, hours of sleep on weekends and, finally,
how many hours of sleep are needed. The condition of the data is shown in the figure above.

Chi square test 2

(Source: SPSS)

This test helps examine whether alcohol consumption affects sleep hours. From the data it can
be observed that most respondents reported that they do not have trouble sleeping when they
consume alcohol. The Chi-square test shows a Pearson Chi-square of 7.380 with 9 degrees of
freedom; 13 cells have an expected count of less than 5, and the minimum expected count is
0.45.

Chi square test 3

(Source: SPSS)

This analysis shows the relation between caffeine consumption and trouble with sleep. It can be
observed that in this case, too, people responded that they do not have trouble sleeping while
consuming caffeine.

Chi square test 4

(Source: SPSS)

This is one of the most important analyses, as it shows the relationship between the required
amount of sleep and problems with sleeping. Most respondents want eight hours of sleep, and
the trouble-with-sleeping factor is relatively low in this record, with 63 respondents reporting
that they do not have trouble during sleep.

Mann-Whitney U test

The Mann-Whitney U test is an alternative to the traditional independent sample T-test. This test
method excludes parameters and is used for performing comparisons on two mean values from
two samples, which belong to same population. The main objective of performing this test is to
identify if these two mean values are equal to each other or not. Generally, this test method is
applied only when the data is either ordinal or the assumptions for the t-test are not met. The use
of this test method can be difficult sometimes as this method does not present the data in the
traditional group mean differences, rather in group rank differences.

As a non-parametric method it carries no major distributional assumptions, but some
assumptions must still be made. The first is that the sample is drawn randomly from the whole
population. The samples are also assumed to be independent and mutually exclusive: an
observation cannot appear in both groups, it must belong to one or the other. An ordinal
measurement scale is also assumed.

The formula for performing calculations in the Mann-Whitney U test is:

U = N1·N2 + N1(N1 + 1)/2 − R1

where U refers to the test statistic, N1 refers to the first sample size, N2 refers to the second
sample size and R1 refers to the sum of the ranks of the first sample.
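The rank-sum calculation can be sketched in Python as follows (the samples are hypothetical illustrations; ties are handled with average ranks):

```python
def mann_whitney_u(sample1, sample2):
    """U statistic from U = N1*N2 + N1*(N1 + 1)/2 - R1."""
    pooled = sorted(sample1 + sample2)

    def rank(value):
        # Rank in the pooled, sorted data, averaging ranks over tied values
        first = pooled.index(value) + 1
        ties = pooled.count(value)
        return first + (ties - 1) / 2

    n1, n2 = len(sample1), len(sample2)
    r1 = sum(rank(v) for v in sample1)  # sum of ranks of the first sample
    return n1 * n2 + n1 * (n1 + 1) / 2 - r1

# Hypothetical scores: every value in the first group is below the second,
# so U takes its maximum value N1*N2.
print(mann_whitney_u([1, 2, 3], [4, 5, 6]))  # → 9.0
```

In practice the smaller of the two U values (one per sample) is compared against a critical-value table.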

This test method is used in almost every field, with particularly wide application in psychology,
business, healthcare and nursing, and its application differs according to the field. For instance,
in medicine it helps identify whether the effects of two different medicines are equal. In
psychology it supports comparisons of attitude and behaviour that many other models cannot
perform. In business it helps identify differences in the preferences of different people, so that
business entities can take the necessary actions according to the results obtained from the test.

This testing method has several important advantages: it states whether a significant difference
exists or whether the difference occurred by chance, it allows data sets of different sizes, and it
handles skewed data well, so the data need not be normally distributed. There are also
noticeable disadvantages. The calculation is very lengthy, which makes it prone to human
error, and the method does not explain the reason behind a difference. Its accuracy is also best
for sample sizes of roughly 5 to 20; beyond this range the test becomes less accurate. The
method is also mainly suited to data sets that are independent of each other.

Case study:

Mann-Whitney U test 1

(Source: SPSS)

The Mann-Whitney test is performed on two groups: how many hours of sleep are needed and
trouble falling asleep. The test shows that the hours-of-sleep-needed group has a mean of 7.743
and the trouble-falling-asleep question a mean of 1.64. The median for hours of sleep needed is
8 and the median for trouble falling asleep is 2.00. The Mann-Whitney test shows that the
hours-of-sleep-needed parameter has a mean rank of 128.24 and trouble falling asleep a mean
rank of 122.51. The test statistics show a Mann-Whitney U of 6644.00.

Mann-Whitney U test 2

(Source: SPSS)

This Mann-Whitney U test is performed on three groups: satisfied with sleep amount, and the
impact parameters mood and energy level. The mean of satisfied with sleep amount is 5.58, the
average energy level is recorded as 6.48 and the mean of the mood metric is 5.72. The U test
shows that the mean rank of the 'not at all' response is 5.80 for satisfied with sleep amount and
9.20 for energy level. The Mann-Whitney U value is 14 for satisfied with sleep amount and
19.00 for energy level.

Mann-Whitney U test 3

(Source: SPSS)

The third Mann-Whitney U test is performed on the overall wellbeing, sleepy and assocsenation
scale, and relationships parameters. The mean value for overall wellbeing is 5.65, the mean for
the sleepy and assocsenation scale is 25.94, and the mean for relationships is 4.81. The test
shows that the 'not at all' metric has a mean rank of 13.08 for overall wellbeing and 15.21 for
the sleepy and assocsenation scale.

Other important tests that are available in the Insomnia output file.

Regression test

Regression is a statistical method used in fields such as finance and investing. Regression
analysis helps determine the strength and character of the relationships between a series of
independent variables and a dependent variable, and with its help managers can understand the
value of an asset. Regression generally falls into two basic categories, linear regression and
multi-linear regression, although non-linear regression also exists for analysing more complex
data. The basic difference between linear and multi-linear regression is that linear regression
uses only one independent variable to predict the dependent variable, while multi-linear
regression uses more than one. Common applications of regression include predicting a
company's sales with respect to the weather, GDP growth, past sales and many other
conditions. The CAPM (Capital Asset Pricing Model) is one of the most widely used
regression models in finance, indicating the price of an asset and finding the cost of capital.

The regression formulas are Y = a + bX + u for simple linear regression and Y = a + b1X1 +
b2X2 + b3X3 + ... + btXt + u for multi-linear regression, where Y denotes the variable on
which the predictions are to be made, X denotes the independent variables on which the
prediction depends, a refers to the intercept, b refers to the slope and u refers to the residual of
the regression. The method mathematically helps identify which factors are the most important
and which can be ignored, as well as how these elements interact with each other.
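The least-squares estimates of a and b in the simple linear formula can be computed by hand; a minimal Python sketch with hypothetical points:

```python
import statistics as stats

def linear_fit(x, y):
    """Least-squares estimates of intercept a and slope b in Y = a + bX + u."""
    mx, my = stats.mean(x), stats.mean(y)
    # Slope: covariance of x and y divided by the variance of x
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    a = my - b * mx  # intercept passes through the point of means
    return a, b

# Hypothetical points lying roughly on Y = 1 + 2X
a, b = linear_fit([1, 2, 3, 4], [3.1, 4.9, 7.2, 8.8])
print(round(a, 2), round(b, 2))  # → 1.15 1.94
```

The residual u for each point is then the gap between the observed Y and the fitted a + bX.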

Like simple linear regression, non-linear regression also involves two variables; unlike linear
regression, however, it connects two variables that lie on a curved relationship. The main
objective of a non-linear regression model is to minimise the sum of squares as far as possible,
where the sum of squares is the statistic that gauges how far the observations of the dependent
variable vary from the curve used to predict it.

This analysis method is a very reliable technique for identifying the variables that affect the
subject. The analyst can thus identify the factors that matter the most, as well as the factors that
can simply be ignored, and predictions for a variable can therefore be made with accuracy.
However, this very useful technique has one shortcoming that cannot be ignored: the regression
process is very lengthy, consisting of complex calculations and analysis.

Correlation

Correlations

                                         physical fitness   general health
physical fitness   Pearson Correlation   1                  .564**
                   Sig. (2-tailed)                          .000
                   N                     249                248
general health     Pearson Correlation   .564**             1
                   Sig. (2-tailed)       .000
                   N                     248                250
**. Correlation is significant at the 0.01 level (2-tailed).
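A Pearson coefficient such as the .564 reported between physical fitness and general health can be computed by hand; a minimal Python sketch with hypothetical ratings (not the assignment data):

```python
import math
import statistics as stats

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    mx, my = stats.mean(x), stats.mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x)
                    * sum((b - my) ** 2 for b in y))
    return num / den

# Hypothetical fitness and health ratings that move together
print(round(pearson_r([4, 6, 7, 9], [5, 6, 8, 9]), 3))  # → 0.965
```

Values near +1 indicate that the two ratings rise together, as the SPSS table above suggests for fitness and health.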

Descriptive Statistics

Descriptive Statistics

                                    N     Minimum   Maximum   Mean     Std. Deviation
weight                              232   43        160       73.81    15.407
height                              230   150       199       170.31   10.358
general health                      250   3         10        7.76     1.597
physical fitness                    249   1         10        6.41     1.744
cigs per day                        32    0         78        13.70    14.889
how many alcoholic drinks per day   240   .00       10.00     .9552    1.21114
how many caffeine drinks per day    245   .00       9.00      2.8980   1.90443
hours sleep/ week nights            253   3.0       10.0      6.976    1.0817
mood                                119   1         10        5.72     2.414
memory                              118   1         10        4.93     2.650
sleepy                              249   1         10        5.54     2.252
Valid N (listwise)                  17

Descriptive statistics are computed for some of the parameters of the dataset. The average
weight of the sample is 73.81 kg and the average height is 170.31 cm. The general health index
averages 7.76 and the physical fitness mean is 6.41. Cigarettes per day show a mean of 13.70
and alcohol consumption an average of 0.9552 drinks per day. Mood is treated as an impact
measure, with an average of 5.72 and a maximum value of 10; the average memory rating is
4.93 and the sleepy index has an average rating of 5.54.

B.

Control Low jump High Jump

Mean 601.1 Mean 612.5 Mean 638.7


Standard Error 8.65313 Standard Error 6.112374 Standard Error 5.247327
Median 601.5 Median 606 Median 637
Mode 593 Mode #N/A Mode 650
Standard Deviation 27.3636 Standard Deviation 19.32902 Standard Deviation 16.59351
Sample Variance 748.7667 Sample Variance 373.6111 Sample Variance 275.3444
Kurtosis 0.893255 Kurtosis -1.94748 Kurtosis 0.834289
Skewness 0.082533 Skewness 0.252716 Skewness 1.008125
Range 99 Range 50 Range 52
Minimum 554 Minimum 588 Minimum 622
Maximum 653 Maximum 638 Maximum 674
Sum 6011 Sum 6125 Sum 6387
Count 10 Count 10 Count 10

1. The descriptive statistics for the control, low jump and high jump conditions show that the
mean for control is 601.1, the standard error is 8.65 and the median is 601.5, which is very
close to the mean. The mode is 593, as this value occurred multiple times. The standard
deviation of the control parameter is 27.36 and the sample variance is 748.77. Furthermore, the
kurtosis for this parameter is 0.89 and the skewness is 0.08. The range is 99, with a minimum
of 554 and a maximum of 653.

Now for the low jump column the mean is 612.5, which is higher than the control column. The
standard error is 6.11, the median is 606 and there is no mode, implying that none of the bone
densities were repeated. The standard deviation is 19.33 and the sample variance is 373.61. The
kurtosis and skewness for this parameter are -1.94 and 0.25 respectively. The range is 50, with
a minimum of 588 and a maximum of 638.

Finally, for the high jump column the mean is 638.7, the standard error is 5.25 and the median
is 637. The mode is 650, which signifies that the mode for the high jump category is much
higher than the control mode. The standard deviation and sample variance are 16.59 and
275.34, and the kurtosis and skewness are 0.83 and 1.01 respectively. The range of this column
is 52, with a minimum of 622 and a maximum of 674.

Comparing the three categories, the high jump category has a higher mean than the other two
classes, and its minimum and maximum values are also higher. However, the range of change
in bone density is highest in the control category, where it is 99.
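Summary measures like those in the table can be reproduced with Python's statistics module; the readings below are hypothetical illustrations, not the assignment data:

```python
import statistics as stats

# Hypothetical bone-density readings for one condition
data = [601, 612, 638, 595, 605]

print(stats.mean(data))             # average of the readings → 610.2
print(stats.median(data))           # middle value → 605
print(round(stats.stdev(data), 2))  # sample standard deviation → 16.72
print(max(data) - min(data))        # range → 43
```

The sample variance reported by Excel is simply the square of this sample standard deviation.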

2.

Anova: Single Factor

SUMMARY
Groups       Count   Sum    Average   Variance
Control      10      6011   601.1     748.7667
Low jump     10      6125   612.5     373.6111
High Jump    10      6387   638.7     275.3444

ANOVA
Source of Variation   SS         df   MS         F          P-value    F crit
Between Groups        7433.867   2    3716.933   7.977837   0.001895   3.354131
Within Groups         12579.5    27   465.9074
Total                 20013.37   29

ANOVA test

(Source: MS Excel)

The ANOVA test shows that the three chosen groups, control, low jump and high jump, have
means of 601.1, 612.5 and 638.7 respectively. The F value is 7.98, which shows that the model
is highly significant. Moreover, the p value, or significance value, is 0.002, which is less than
0.05 and reflects the high significance of the ANOVA test. The between-groups degrees of
freedom is 2, as there are three groups, while the within-groups degrees of freedom is 27, as
there are 30 records in total across the three groups.

For the ANOVA test the null hypothesis is that the means of the three groups are the same, and
the alternative hypothesis is that the means of the three groups differ. From the above analysis,
with a p value less than 0.05, the null hypothesis can be rejected.

3. Post-hoc procedures

Descriptive Statistics

Mean Std. Deviation N


Bone density 632.00 19.672 3
V3 616.00 9.539 3
V4 626.00 12.000 3
V5 604.33 18.771 3
V6 607.67 20.429 3
V7 635.67 15.822 3
V8 624.67 22.189 3
V9 605.33 61.849 3
V10 617.67 22.030 3

V11 605.00 41.243 3

Mauchly's Test of Sphericity a

Measure: MEASURE_1
                                                                        Epsilon b
Within Subjects Effect   Mauchly's W   Approx. Chi-Square   df   Sig.   Greenhouse-Geisser   Huynh-Feldt   Lower-bound
factor10                 .000          .                    44   .      .137                 .243          .1
Tests the null hypothesis that the error covariance matrix of the orthonormalized transformed dependent variables is proportional to an identity matrix.
a. Design: Intercept
Within Subjects Design: factor10
b. May be used to adjust the degrees of freedom for the averaged tests of significance. Corrected tests are displayed in the Tests of Within-Subjects Effects table.

Tests of Within-Subjects Effects


Measure: MEASURE_1
Type III Sum of
Source Squares df Mean Square F Sig.
factor10 Sphericity Assumed 3721.367 9 413.485 .840 .590
Greenhouse-Geisser 3721.367 1.229 3028.036 .840 .468
Huynh-Feldt 3721.367 2.188 1700.910 .840 .501
Lower-bound 3721.367 1.000 3721.367 .840 .456
Error(factor10) Sphericity Assumed 8858.133 18 492.119

Greenhouse-Geisser 8858.133 2.458 3603.884

Huynh-Feldt 8858.133 4.376 2024.376

Lower-bound 8858.133 2.000 4429.067

Tests of Within-Subjects Contrasts


Measure: MEASURE_1
Type III Sum of
Source factor10 Squares df Mean Square F Sig.
factor10 Linear 548.656 1 548.656 .502 .552

Quadratic .124 1 .124 .000 .986
Cubic 685.641 1 685.641 5.618 .141
Order 4 70.727 1 70.727 1.466 .350
Order 5 132.585 1 132.585 1.506 .345
Order 6 64.008 1 64.008 .060 .830
Order 7 2101.648 1 2101.648 1.472 .349
Order 8 52.007 1 52.007 3.010 .225
Order 9 65.970 1 65.970 .281 .649
Error(factor10) Linear 2187.766 2 1093.883

Quadratic 651.051 2 325.525

Cubic 244.071 2 122.036

Order 4 96.460 2 48.230

Order 5 176.065 2 88.032

Order 6 2143.068 2 1071.534

Order 7 2855.848 2 1427.924

Order 8 34.555 2 17.278

Order 9 469.250 2 234.625

Tests of Between-Subjects Effects


Measure: MEASURE_1
Transformed Variable: Average
Type III Sum of
Source Squares df Mean Square F Sig.
Intercept 11436717.633 1 11436717.633 3076.923 .000
Error 7433.867 2 3716.933

Post hoc procedures

(Source: SPSS)

