
Sekolah Menengah Kebangsaan St. Francis
Additional Mathematics Project Work 2020

Name : Lai Xie Ern


Year : 2020
Class : 5 SA
Teacher’s Name : Pn. Junaidah Mohamad Ramli
Title : Obesity Awareness Campaign
Contents

Introduction to Statistics

Introduction of Obesity

Problem Solving

Histogram

Frequency Polygon

Ogive

Further Exploration

Student Role

Conclusion
Introduction to Statistics

 Definition

Statistics is the discipline that concerns the collection, organization, analysis,


interpretation and presentation of data. In applying statistics to a scientific, industrial,
or social problem, it is conventional to begin with a statistical population or
a statistical model to be studied. Populations can be diverse groups of people or
objects such as "all people living in a country" or "every atom composing a crystal".
Statistics deals with every aspect of data, including the planning of data collection in
terms of the design of surveys and experiments.

 Statistical Methods

a) Descriptive statistic
A descriptive statistic (in the count noun sense) is a summary statistic that
quantitatively describes or summarizes features of a collection
of information, while descriptive statistics in the mass noun sense is the process of
using and analyzing those statistics. Descriptive statistics is distinguished
from inferential statistics (or inductive statistics), in that descriptive statistics aims to
summarize a sample, rather than use the data to learn about the population that the
sample of data is thought to represent.

b) Inferential statistic
Statistical inference is the process of using data analysis to deduce properties of an
underlying probability distribution. Inferential statistical analysis infers properties of
a population, for example by testing hypotheses and deriving estimates. It is assumed
that the observed data set is sampled from a larger population. Inferential statistics can
be contrasted with descriptive statistics. Descriptive statistics is solely concerned with
properties of the observed data, and it does not rest on the assumption that the data
come from a larger population.
 History of Statistical Methods

Figure: an artist's depiction of Al-Khalil.


The earliest writings on probability and statistics date back to Arab
mathematicians and cryptographers, during the Islamic Golden Age between the 8th
and 13th centuries. Al-Khalil (717–786) wrote the Book of Cryptographic Messages,
which contains the first use of permutations and combinations, to list all
possible Arabic words with and without vowels. The earliest book on statistics is the
9th-century treatise Manuscript on Deciphering Cryptographic Messages, written by
Arab scholar Al-Kindi (801–873). In his book, Al-Kindi gave a detailed description of
how to use statistics and frequency analysis to decipher encrypted messages. This text
laid the foundations for statistics and cryptanalysis. Al-Kindi also made the earliest
known use of statistical inference, while he and later Arab cryptographers developed
the early statistical methods for decoding encrypted messages. Ibn Adlan (1187–
1268) later made an important contribution, on the use of sample size in frequency
analysis.
Figure: John Graunt, 1663.

The earliest European writing on statistics dates back to 1663, with the publication
of Natural and Political Observations upon the Bills of Mortality by John
Graunt. Early applications of statistical thinking revolved around the needs of states to
base policy on demographic and economic data, hence its stat- etymology. The scope
of the discipline of statistics broadened in the early 19th century to include the
collection and analysis of data in general. Today, statistics is widely employed in
government, business, and natural and social sciences.

The mathematical foundations of modern statistics were laid in the 17th century with
the development of the probability theory by Gerolamo Cardano, Blaise
Pascal and Pierre de Fermat. Mathematical probability theory arose from the study
of games of chance, although the concept of probability was already examined
in medieval law and by philosophers such as Juan Caramuel. The method of least
squares was first described by Adrien-Marie Legendre in 1805.

Figure: Karl Pearson, late 19th century.


The modern field of statistics emerged in the late 19th and early 20th century in three
stages. The first wave, at the turn of the century, was led by the work of Francis
Galton and Karl Pearson, who transformed statistics into a rigorous mathematical
discipline used for analysis, not just in science, but in industry and politics as well.
Galton's contributions included introducing the concepts of standard
deviation, correlation, regression analysis and the application of these methods to the
study of the variety of human characteristics—height, weight, eyelash length among
others. Pearson developed the Pearson product-moment correlation coefficient,
defined as a product-moment, the method of moments for the fitting of distributions to
samples and the Pearson distribution, among many other things. Galton and Pearson
founded Biometrika as the first journal of mathematical statistics
and biostatistics (then called biometry), and the latter founded the world's first
university statistics department at University College London.

 Application of Statistics

Applied statistics comprises descriptive statistics and the application of inferential


statistics. Theoretical statistics concerns the logical arguments underlying justification
of approaches to statistical inference, as well as encompassing mathematical statistics.
Mathematical statistics includes not only the manipulation of probability
distributions necessary for deriving results related to methods of estimation and
inference, but also various aspects of computational statistics and the design of
experiments.

Statistical consultants can help organizations and companies that don't have in-house
expertise relevant to their particular questions.

Terminology and theory of inferential statistics


Statistics, estimators and pivotal quantities

Consider independent identically distributed (IID) random variables with a


given probability distribution: standard statistical inference and estimation
theory defines a random sample as the random vector given by the column vector of
these IID variables. The population being examined is described by a probability
distribution that may have unknown parameters.

A statistic is a random variable that is a function of the random sample, but not a


function of unknown parameters. The probability distribution of the statistic, though,
may have unknown parameters.
Consider now a function of the unknown parameter: an estimator is a statistic used to
estimate such function. Commonly used estimators include sample mean,
unbiased sample variance and sample covariance.

A random variable that is a function of the random sample and of the unknown
parameter, but whose probability distribution does not depend on the unknown
parameter is called a pivotal quantity or pivot. Widely used pivots include the z-
score, the chi square statistic and Student's t-value.

Between two estimators of a given parameter, the one with lower mean squared
error is said to be more efficient. Furthermore, an estimator is said to be unbiased if
its expected value is equal to the true value of the unknown parameter being
estimated, and asymptotically unbiased if its expected value converges at the limit to
the true value of such parameter.

Other desirable properties for estimators include: UMVUE estimators, which have the
lowest variance for all possible values of the parameter to be estimated (this is usually
an easier property to verify than efficiency), and consistent estimators, which converge
in probability to the true value of such parameter.

This still leaves the question of how to obtain estimators in a given situation and carry
out the computation; several methods have been proposed: the method of moments,
the maximum likelihood method, the least squares method and the more recent
method of estimating equations.
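
To make these definitions concrete, here is a minimal Python sketch (not part of the source text) that computes two common estimators, the sample mean and the unbiased sample variance, for a small made-up sample, and then forms a z-score as a pivotal quantity under the assumption that the population mean and standard deviation are known; all of the numerical values are purely illustrative.

import math

# A small illustrative sample (made-up values, not taken from any real data)
sample = [52.0, 47.5, 60.2, 55.1, 49.8, 58.4]
n = len(sample)

# Estimator 1: the sample mean
sample_mean = sum(sample) / n

# Estimator 2: the unbiased sample variance (divides by n - 1)
sample_var = sum((x - sample_mean) ** 2 for x in sample) / (n - 1)

# A pivotal quantity: the z-score of the sample mean, assuming the population
# mean and standard deviation below are known (hypothetical values)
mu = 54.0      # assumed population mean
sigma = 5.0    # assumed population standard deviation
z = (sample_mean - mu) / (sigma / math.sqrt(n))

print(f"sample mean     = {sample_mean:.2f}")
print(f"sample variance = {sample_var:.2f}")
print(f"z-score pivot   = {z:.2f}")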

Null hypothesis and alternative hypothesis

Interpretation of statistical information can often involve the development of a null


hypothesis which is usually (but not necessarily) that no relationship exists among
variables or that no change occurred over time.

The best illustration for a novice is the predicament encountered by a criminal trial.
The null hypothesis, H0, asserts that the defendant is innocent, whereas the alternative
hypothesis, H1, asserts that the defendant is guilty. The indictment comes because of
suspicion of the guilt. The H0 (status quo) stands in opposition to H1 and is maintained
unless H1 is supported by evidence "beyond a reasonable doubt". However, "failure to
reject H0" in this case does not imply innocence, but merely that the evidence was
insufficient to convict. So the jury does not necessarily accept H0 but fails to
reject H0. While one can not "prove" a null hypothesis, one can test how close it is to
being true with a power test, which tests for type II errors.

What statisticians call an alternative hypothesis is simply a hypothesis that contradicts


the null hypothesis.

Error

Working from a null hypothesis, two basic forms of error are recognized:

 Type I errors where the null hypothesis is falsely rejected giving a "false


positive".
 Type II errors where the null hypothesis fails to be rejected and an actual
difference between populations is missed giving a "false negative".

Standard deviation refers to the extent to which individual observations in a sample
differ from a central value, such as the sample or population mean, while standard
error refers to an estimate of the difference between the sample mean and the
population mean.

A statistical error is the amount by which an observation differs from its expected
value; a residual is the amount by which an observation differs from the value the
estimator of the expected value assumes on a given sample (also called a prediction).

Mean squared error is used for obtaining efficient estimators, a widely used class of
estimators. Root mean square error is simply the square root of mean squared error.
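
As a hedged illustration of these error measures (not from the source), the short Python sketch below contrasts the sample standard deviation with the standard error of the mean, and then computes the mean squared error and root mean squared error of some made-up predictions against observed values; none of the numbers come from the project data.

import math

# Made-up observations and predictions (illustrative only)
observations = [62.0, 55.0, 70.0, 48.0, 66.0]
predictions  = [60.0, 57.0, 68.0, 50.0, 63.0]
n = len(observations)

mean = sum(observations) / n

# Standard deviation: spread of individual observations about the sample mean
sd = math.sqrt(sum((x - mean) ** 2 for x in observations) / (n - 1))

# Standard error: estimated deviation of the sample mean from the population mean
se = sd / math.sqrt(n)

# Mean squared error and its square root, computed from the residuals
mse = sum((o - p) ** 2 for o, p in zip(observations, predictions)) / n
rmse = math.sqrt(mse)

print(f"standard deviation = {sd:.2f}")
print(f"standard error     = {se:.2f}")
print(f"MSE = {mse:.2f}, RMSE = {rmse:.2f}")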
Figure: a least squares fit; the points to be fitted are shown in red and the fitted line in blue.

Many statistical methods seek to minimize the residual sum of squares, and these are
called "methods of least squares" in contrast to Least absolute deviations. The latter
gives equal weight to small and big errors, while the former gives more weight to
large errors. Residual sum of squares is also differentiable, which provides a handy
property for doing regression. Least squares applied to linear regression is
called ordinary least squares method and least squares applied to nonlinear
regression is called non-linear least squares. Also in a linear regression model the non
deterministic part of the model is called error term, disturbance or more simply noise.
Both linear regression and non-linear regression are addressed in polynomial least
squares, which also describes the variance in a prediction of the dependent variable (y
axis) as a function of the independent variable (x axis) and the deviations (errors,
noise, disturbances) from the estimated (fitted) curve.
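
A minimal sketch of ordinary least squares for a straight line, using made-up data points (not from the source): the closed-form formulas below give the slope and intercept that minimise the residual sum of squares.

# Ordinary least squares for a straight line y = a + b*x; the points are
# made up purely for illustration.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 7.8, 10.1]
n = len(xs)

x_bar = sum(xs) / n
y_bar = sum(ys) / n

# Slope and intercept that minimise the residual sum of squares
b = (sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
     / sum((x - x_bar) ** 2 for x in xs))
a = y_bar - b * x_bar

# Residual sum of squares of the fitted line
rss = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))

print(f"fitted line: y = {a:.2f} + {b:.2f}x, RSS = {rss:.3f}")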

Measurement processes that generate statistical data are also subject to error. Many of
these errors are classified as random (noise) or systematic (bias), but other types of
errors (e.g., blunder, such as when an analyst reports incorrect units) can also be
important. The presence of missing data or censoring may result in biased
estimates and specific techniques have been developed to address these problems.
Interval estimation

Figure: confidence intervals; the red line is the true value of the mean in this example,
and the blue lines are random confidence intervals for 100 realizations.

Most studies only sample part of a population, so results don't fully represent the
whole population. Any estimates obtained from the sample only approximate the
population value. Confidence intervals allow statisticians to express how closely the
sample estimate matches the true value in the whole population. Often they are
expressed as 95% confidence intervals. Formally, a 95% confidence interval for a
value is a range where, if the sampling and analysis were repeated under the same
conditions (yielding a different dataset), the interval would include the true
(population) value in 95% of all possible cases. This does not imply that the
probability that the true value is in the confidence interval is 95%. From
the frequentist perspective, such a claim does not even make sense, as the true value is
not a random variable. Either the true value is or is not within the given interval.
However, it is true that, before any data are sampled and given a plan for how to
construct the confidence interval, the probability is 95% that the yet-to-be-calculated
interval will cover the true value: at this point, the limits of the interval are yet-to-be-
observed random variables. One approach that does yield an interval that can be
interpreted as having a given probability of containing the true value is to use
a credible interval from Bayesian statistics: this approach depends on a different way
of interpreting what is meant by "probability", that is as a Bayesian probability.

In principle confidence intervals can be symmetrical or asymmetrical. An interval can


be asymmetrical because it works as lower or upper bound for a parameter (left-sided
interval or right sided interval), but it can also be asymmetrical because the two sided
interval is built violating symmetry around the estimate. Sometimes the bounds for a
confidence interval are reached asymptotically and these are used to approximate the
true bounds.
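
As an illustration (not from the source), the sketch below computes a 95% confidence interval for the mean of a small made-up sample using the normal approximation (z = 1.96); for a sample this small a t-based interval would be more accurate, so this is only a sketch of the idea.

import math

# Made-up sample (illustrative only)
sample = [64, 58, 71, 60, 67, 62, 69, 65, 59, 66]
n = len(sample)

mean = sum(sample) / n
sd = math.sqrt(sum((x - mean) ** 2 for x in sample) / (n - 1))
se = sd / math.sqrt(n)

# 95% confidence interval using the normal approximation (z = 1.96)
z = 1.96
lower, upper = mean - z * se, mean + z * se

print(f"mean = {mean:.2f}, 95% CI = ({lower:.2f}, {upper:.2f})")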

Significance

Statistics rarely give a simple Yes/No type answer to the question under analysis.
Interpretation often comes down to the level of statistical significance applied to the
numbers and often refers to the probability of a value accurately rejecting the null
hypothesis (sometimes referred to as the p-value).

Figure: the black line is the probability distribution of the test statistic, the critical
region is the set of values to the right of the observed data point (the observed value
of the test statistic), and the p-value is represented by the green area.

The standard approach is to test a null hypothesis against an alternative hypothesis.


A critical region is the set of values of the estimator that leads to refuting the null
hypothesis. The probability of type I error is therefore the probability that the
estimator belongs to the critical region given that null hypothesis is true (statistical
significance) and the probability of type II error is the probability that the estimator
doesn't belong to the critical region given that the alternative hypothesis is true.
The statistical power of a test is the probability that it correctly rejects the null
hypothesis when the null hypothesis is false.

Referring to statistical significance does not necessarily mean that the overall result is
significant in real world terms. For example, in a large study of a drug it may be
shown that the drug has a statistically significant but very small beneficial effect, such
that the drug is unlikely to help the patient noticeably.

Although in principle the acceptable level of statistical significance may be subject to


debate, the p-value is the smallest significance level that allows the test to reject the
null hypothesis. This test is logically equivalent to saying that the p-value is the
probability, assuming the null hypothesis is true, of observing a result at least as
extreme as the test statistic. Therefore, the smaller the p-value, the lower the
probability of committing type I error.
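
To make the idea of a p-value concrete, here is a hedged Python sketch (not from the source) of a one-sample z-test: the null-hypothesis mean mu0 and the population standard deviation sigma are hypothetical values chosen for illustration, and the standard normal cumulative distribution function is obtained from the error function.

import math

def normal_cdf(x):
    # Standard normal cumulative distribution function via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Hypothetical one-sample z-test: H0 says the population mean is mu0,
# and the population standard deviation sigma is assumed known.
mu0, sigma = 60.0, 10.0
sample_mean, n = 64.5, 30

z = (sample_mean - mu0) / (sigma / math.sqrt(n))

# Two-sided p-value: probability, under H0, of a statistic at least this extreme
p_value = 2.0 * (1.0 - normal_cdf(abs(z)))

print(f"z = {z:.2f}, p-value = {p_value:.4f}")
# Reject H0 at the 5% significance level only if p_value < 0.05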

Some problems are usually associated with this framework (See criticism of


hypothesis testing):

 A difference that is highly statistically significant can still be of no practical


significance, but it is possible to properly formulate tests to account for this. One
response involves going beyond reporting only the significance level to include
the p-value when reporting whether a hypothesis is rejected or accepted. The p-
value, however, does not indicate the size or importance of the observed effect and
can also seem to exaggerate the importance of minor differences in large studies.
A better and increasingly common approach is to report confidence intervals.
Although these are produced from the same calculations as those of hypothesis
tests or p-values, they describe both the size of the effect and the uncertainty
surrounding it.
 Fallacy of the transposed conditional, aka prosecutor's fallacy: criticisms arise
because the hypothesis testing approach forces one hypothesis (the null
hypothesis) to be favored, since what is being evaluated is the probability of the
observed result given the null hypothesis and not probability of the null hypothesis
given the observed result. An alternative to this approach is offered by Bayesian
inference, although it requires establishing a prior probability.
 Rejecting the null hypothesis does not automatically prove the alternative
hypothesis.
 Like everything else in inferential statistics, this relies on sample size, and therefore
under fat tails p-values may be seriously miscomputed.
Applications

Machine learning and data mining

Machine learning models are statistical and probabilistic models that capture patterns
in the data through use of computational algorithms.

Statistics in academia

Statistics is applicable to a wide variety of academic disciplines,


including natural and social sciences, government, and business. Business statistics
applies statistical methods in econometrics, auditing and production and operations,
including services improvement and marketing research. In the field of biological
sciences, the 12 most frequent statistical tests are: Analysis of
Variance (ANOVA), Chi-Square Test, Student’s T Test, Linear Regression, Pearson’s
Correlation Coefficient, Mann-Whitney U Test, Kruskal-Wallis Test, Shannon’s
Diversity Index, Tukey’s Test, Cluster Analysis, Spearman’s Rank Correlation
Test and Principal Component Analysis.

A typical statistics course covers descriptive statistics, probability, binomial


and normal distributions, test of hypotheses and confidence intervals, linear
regression, and correlation. Modern fundamental statistical courses for undergraduate
students focus on the correct test selection, results interpretation and use of free
statistics software.
Statistical computing

Figure: gretl, an example of an open-source statistical package.


The rapid and sustained increases in computing power starting from the second half of
the 20th century have had a substantial impact on the practice of statistical science.
Early statistical models were almost always from the class of linear models, but
powerful computers, coupled with suitable numerical algorithms, caused an increased
interest in nonlinear models (such as neural networks) as well as the creation of new
types, such as generalized linear models and multilevel models.

Increased computing power has also led to the growing popularity of computationally
intensive methods based on resampling, such as permutation tests and the bootstrap,
while techniques such as Gibbs sampling have made use of Bayesian models more
feasible. The computer revolution has implications for the future of statistics with a
new emphasis on "experimental" and "empirical" statistics. A large number of both
general and special purpose statistical software are now available. Examples of
available software capable of complex statistical computation include programs such
as Mathematica, SAS, SPSS, and R.
Statistics applied to mathematics or the arts

Traditionally, statistics was concerned with drawing inferences using a semi-


standardized methodology that was "required learning" in most sciences. This
tradition has changed with the use of statistics in non-inferential contexts. What was
once considered a dry subject, taken in many fields as a degree-requirement, is now
viewed enthusiastically. Initially derided by some mathematical purists, it is now
considered essential methodology in certain areas.

 In number theory, scatter plots of data generated by a distribution function


may be transformed with familiar tools used in statistics to reveal underlying
patterns, which may then lead to hypotheses.
 Methods of statistics including predictive methods in forecasting are combined
with chaos theory and fractal geometry to create video works that are considered
to have great beauty.
 The process art of Jackson Pollock relied on artistic experiments whereby
underlying distributions in nature were artistically revealed. With the advent of
computers, statistical methods were applied to formalize such distribution-driven
natural processes to make and analyze moving video art.
 Methods of statistics may be used predictively in performance art, as in a
card trick based on a Markov process that only works some of the time, the
occasion of which can be predicted using statistical methodology.
 Statistics can be used to predictively create art, as in the statistical
or stochastic music invented by Iannis Xenakis, where the music is performance-
specific. Though this type of artistry does not always come out as expected, it does
behave in ways that are predictable and tunable using statistics.
Introduction of Obesity

Definition of obesity

Overweight and obesity are defined as abnormal or excessive fat accumulation that
presents a risk to health. A body mass index (BMI) over 25 is considered overweight,
and over 30 is obese. The issue has grown to epidemic proportions, with over 4
million people dying each year as a result of being overweight or obese, according to
the 2017 Global Burden of Disease study.

Rates of overweight and obesity continue to grow in adults and children. From 1975
to 2016, the prevalence of overweight or obese children and adolescents aged 5–19
years increased more than four-fold from 4% to 18% globally.

Obesity is one side of the double burden of malnutrition, and today more people are
obese than underweight in every region except sub-Saharan Africa and Asia. Once
considered a problem only in high-income countries, overweight and obesity are now
dramatically on the rise in low- and middle-income countries, particularly in urban
settings. The vast majority of overweight or obese children live in developing
countries, where the rate of increase has been more than 30% higher than that of
developed countries.
Classification of Obesity

Body Mass Index (BMI) is a person’s weight in kilograms divided by the square of
height in meters. A high BMI can be an indicator of high body fatness.

To calculate BMI, use an adult BMI calculator or determine it by finding your
height and weight in a BMI index chart.

 If your BMI is less than 18.5, it falls within the underweight range.
 If your BMI is 18.5 to <25, it falls within the normal range.
 If your BMI is 25.0 to <30, it falls within the overweight range.
 If your BMI is 30.0 or higher, it falls within the obese range.

Obesity is frequently subdivided into categories:

 Class 1: BMI of 30 to < 35


 Class 2: BMI of 35 to < 40
 Class 3: BMI of 40 or higher. Class 3 obesity is sometimes categorized as
“extreme” or “severe” obesity.

At an individual level, BMI can be used as a screening tool, but it is not diagnostic of
body fatness or of an individual's health. A trained healthcare provider should perform
appropriate health assessments in order to evaluate an individual's health status and risks.
If you have questions about your BMI, talk with your healthcare provider.
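
As an illustration (not part of the original text), the small Python function below encodes the BMI cut-offs and obesity classes listed above; the function name and structure are my own.

def bmi_category(bmi):
    # Weight category for a given BMI, using the cut-offs listed above
    if bmi < 18.5:
        return "Underweight"
    if bmi < 25.0:
        return "Normal"
    if bmi < 30.0:
        return "Overweight"
    if bmi < 35.0:
        return "Obese (Class 1)"
    if bmi < 40.0:
        return "Obese (Class 2)"
    return "Obese (Class 3, sometimes called extreme or severe obesity)"

# Example: a BMI of 32 falls within Class 1 obesity
print(bmi_category(32))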

Adult Body Mass Index (BMI) for a person 5′ 9″ (about 1.75 m) tall:

Weight Range        BMI             Considered
56 kg or less       Below 18.5      Underweight
57 kg to 76 kg      18.5 to 24.9    Healthy weight
77 kg to 92 kg      25.0 to 29.9    Overweight
93 kg or more       30 or higher    Obese
123 kg or more      40 or higher    Class 3 obese

BMI does not measure body fat directly, but research has shown that BMI is moderately
correlated with more direct measures of body fat obtained from skinfold thickness
measurements, bioelectrical impedance, underwater weighing, dual-energy X-ray
absorptiometry (DXA) and other methods. Furthermore, BMI appears to be
strongly correlated with various adverse health outcomes consistent with these more
direct measures of body fatness.

Effects on health
People who have obesity, compared to those with a normal or healthy weight, are at
increased risk for many serious diseases and health conditions, including the
following:

 All causes of death (mortality)


 High blood pressure (Hypertension)
 High LDL cholesterol, low HDL cholesterol, or high levels of triglycerides
(Dyslipidemia)
 Type 2 diabetes
 Coronary heart disease
 Stroke
 Gallbladder disease
 Osteoarthritis (a breakdown of cartilage and bone within a joint)
 Sleep apnea and breathing problems
 Many types of cancer
 Low quality of life
 Mental illness such as clinical depression, anxiety, and other mental disorders
 Body pain and difficulty with physical functioning

Causes
The balance between calorie intake and energy expenditure determines a person's
weight. If a person eats more calories than he or she burns (metabolizes), the person
gains weight (the body will store the excess energy as fat). If a person eats fewer
calories than he or she metabolizes, he or she will lose weight. Therefore, the most
common causes of obesity are overeating and physical inactivity. Ultimately, body
weight is the result of genetics, metabolism, environment, behavior, and culture.

 Physical inactivity. Sedentary people burn fewer calories than people who


are active.
 Genetics. A person is more likely to develop obesity if one or both parents are
obese.
 A diet high in simple carbohydrates. The role of carbohydrates in weight
gain is not clear. Carbohydrates increase blood glucose levels, which in turn
stimulate insulin release by the pancreas, and insulin promotes the growth of
fat tissue and can cause weight gain.
 Frequency of eating. The relationship between frequency of eating (how
often you eat) and weight is somewhat controversial.
 Medications. Medications associated with weight gain include
certain antidepressants.
 Psychological factors. For some people, emotions influence eating habits.
Many people eat excessively in response to emotions such as boredom,
sadness, stress, or anger.
 Hormones. Women tend to gain weight especially during certain events such
as pregnancy, menopause, and in some cases, with the use of oral
contraceptives.

History of obesity
In the evolutionary history of humankind, bodily fat seems to have served nature’s
purpose by outfitting the species with a built-in mechanism for storing its own food
reserves. During prehistoric times, when the burden of disease was that of pestilence
and famine, natural selection rewarded the “thrifty” genotypes of those who could
store the greatest amount of fat from the least amount of the then erratically available
foods and to release it as frugally as possible over the long run. This ability to store
surplus fat from the least possible amount of food intake may have made the
difference between life and death, not only for the individual but also more
importantly for the species. Those who could store fat easily had an evolutionary
advantage in the harsh environment of early hunters and gatherers.
The esthetic value and cultural significance attached to obesity is reflected in the
mysterious nude female figurines of Stone Age Europe, dating back to more than
20,000 years ago, considered to be matriarchal icons of fertility or the mother
goddess. The best known of these earliest representations of the human form is the
one discovered in Willendorf, Austria in 1908. Commonly known as the Venus of
Willendorf, its squat body, bulbous contours, pendulous breasts, and prominent belly
are as factual a rendering of gross obesity as can be.

Problem Solving
Obesity can lead to various health problems such as diabetes, heart disease and stroke.
Your school has decided to carry out an “Obesity Awareness Campaign” with the
aim to create awareness among students about obesity related health problems.
In this campaign, you are required to obtain the height and weight of students in your
class.
A.

No. Name Height (m) Weight (kg)


1 Adrian Tan Bock Choon 1.69 80
2 Bong Li Zhen 1.79 52
3 Damond Yen Wei Wen 1.64 54
4 Davin Avalani 1.60 47
5 Goo Kai Han 1.76 50
6 Goh Jin Hong 1.75 60
7 Jeremy Hong 1.70 75
8 Joshua Gan Shen Wei 1.74 60
9 Joseph Lim 1.70 45
10 Koh Kai Bo 1.78 102
11 Lee Jun Wei 1.64 49
12 Lim Hong Yan 1.77 76
13 Lim Jun Leh 1.64 55
14 Low Sze Shun 1.74 72
15 Mikhail Imran 1.64 45
16 Muhammad Sufi Kai 1.74 50
17 Nigel Teh Zu Qin 1.63 54
18 Nick Suary Bin Johari 1.83 100
19 Ng Jia Hong 1.63 59
20 Ong Jung Yu 1.69 70
21 Ong Wei Chuen 1.79 63
22 Ong Wen Jia 1.84 114
23 Shawn Hoo Jet Hou 1.67 52
24 Syed Abdullah 1.79 59
25 Tan Choon Shen 1.68 44
26 Tan June Yuan 1.74 50
27 Terry Goh Kai Zhe 1.70 50
28 Thaanesh Devar 1.68 70
29 Wong Chin Kiat 1.81 96
30 Yap Zhe Xian 1.60 48
B.
Weight (kg)   Frequency, f   Cumulative frequency   Midpoint, x   fx
40 - 49 6 6 44.5 267
50 - 59 11 17 54.5 599.5
60 - 69 3 20 64.5 193.5
70 - 79 5 25 74.5 372.5
80 - 89 1 26 84.5 84.5
90 - 99 1 27 94.5 94.5
100 - 109 2 29 104.5 209
110 - 119 1 30 114.5 114.5

C.
x       x²        f    x − x̄    fx²        (x − x̄)²    f(x − x̄)²
44.5 1980.25 6 -20 11881.50 400 2400
54.5 2970.25 11 -10 32672.75 100 1100
64.5 4160.25 3 0 12480.75 0 0
74.5 5550.25 5 10 27751.25 100 500
84.5 7140.25 1 20 7140.25 400 400
94.5 8930.25 1 30 8930.25 900 900
104.5 10920.25 2 40 21840.50 1600 3200
114.5 13110.25 1 50 13110.25 2500 2500

E. Calculation

1. Mean, x̄
x̄ = Σfx / Σf
= (267 + 599.5 + 193.5 + 372.5 + 84.5 + 94.5 + 209 + 114.5) / 30
= 1935 / 30
= 64.5

2. Median
m = L + ((N/2 − F) / f_m) × c,
where L = 49.5 (lower boundary of the median class, 50 – 59), N = 30, F = 6
(cumulative frequency before the median class), f_m = 11 and c = 10 (class width)
m = 49.5 + ((15 − 6) / 11) × 10
≈ 57.68

3. Mode
= 54.5 (midpoint of the modal class, 50 – 59)

4. Best Measure of Central Tendency
The mean is the best measure of central tendency because it uses all the values in the
data set to give an average value.

5. Standard Deviation, σ
1st method:
σ = √( Σf(x − x̄)² / Σf )
= √( 11000 / 30 )
≈ 19.15

2nd method:
σ = √( Σfx² / Σf − x̄² )
= √( 4526.92 − 4160.25 )
≈ 19.15

F. Conclusion
 From my results, the mean is 64.5 kg and the standard deviation is about 19.15 kg,
which indicates that the dispersion is not too wide about the mean. From the raw
data, the heaviest student weighs 114 kg and the lightest weighs 44 kg, which
indicates that the students' weights are neither extremely heavy nor extremely light.
Further Exploration
The body mass index (BMI) gives an indication of the physical state of a person as
being underweight, normal, overweight or obese. BMI can be calculated by using the
following formula:

BMI = weight (kg) / [height (m) × height (m)]

The table below shows the BMI and the corresponding physical state of a person:

BMI Category
Below 18.5 Underweight
18.5 - 24.9 Normal
25 - 29.9 Overweight
30 and above Obese
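
As a quick check (not part of the original workings), the Python sketch below applies the BMI formula above to a few of the students' heights and weights from part A and classifies the result using this table; only the function names are my own.

def bmi(weight_kg, height_m):
    # BMI = weight (kg) / height (m)^2
    return weight_kg / (height_m ** 2)

def category(b):
    # Categories from the table above
    if b < 18.5:
        return "Underweight"
    if b < 25.0:
        return "Normal"
    if b < 30.0:
        return "Overweight"
    return "Obese"

# A few (height, weight) pairs taken from the class data in part A
students = {
    "Adrian Tan Bock Choon": (1.69, 80),
    "Bong Li Zhen": (1.79, 52),
    "Koh Kai Bo": (1.78, 102),
}

for name, (h, w) in students.items():
    b = bmi(w, h)
    print(f"{name}: BMI = {b:.2f} ({category(b)})")
# Adrian: 28.01 (Overweight), Bong: 16.23 (Underweight), Koh: 32.19 (Obese)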

BMI data for each student:


Name of students BMI
Adrian Tan Bock Choon 28.01
Bong Li Zhen 16.23
Damond Yen Wei Wen 20.01
Davin Avalani 18.35
Goo Kai Han 16.14
Goh Jin Hong 19.59
Jeremy Hong 25.95
Joshua Gan Shen Wei 19.80
Joseph Lim 15.57
Koh Kai Bo 32.19
Lee Jun Wei 18.21
Lim Hong Yan 24.20
Lim Jun Leh 20.45
Low Sze Shun 23.78
Mikhail Imran 16.73
Muhammad Sufi Kai 16.51
Nigel Teh Zu Qin 20.32
Nick Suary Bin Johari 29.86
Ng Jia Hong 22.21
Ong Jung Yu 24.50
Ong Wei Chuen 19.66
Ong Wen Jia 32.67
Shawn Hoo Jet Hou 18.65
Syed Abdullah 18.41
Tan Choon Shen 15.59
Tan June Yuan 16.51
Terry Goh Kai Zhe 17.31
Thaanesh Devar 24.80
Wong Chin Kiat 29.30
Yap Zhe Xian 18.75

BMI Frequency Cumulative Frequency


15.0 - 16.9 7 7
17.0 - 18.9 6 13
19.0 - 20.9 6 19
21.0 - 22.9 1 20
23.0 - 24.9 4 24
25.0 - 26.9 1 25
27.0 - 28.9 1 26
29.0 - 30.9 2 28
31.0 - 32.9 2 30

Conclusion of student’s BMI

From my survey, I can conclude that 11 students are underweight, 13 are normal,
4 are overweight and 2 students are obese. So, 36.67% of the students are
underweight, another 43.33% are normal, 13.33% are overweight and 6.67% are
obese. The highest BMI score is 32.67 while the lowest BMI score is 15.57.
V) Student Role
Obesity has been proven to be a crucial factor in many chronic diseases which
affect human lives every single day, such as heart disease, fatty liver and diabetes.
So, we need to take extra precautionary steps to prevent ourselves from becoming
obese. A student's role in averting obesity begins with the students themselves.
A student first needs to adopt good eating habits and stay away from oily foods and
snacks, which can lead to obesity. This would also encourage other students and
peers to practise good eating habits and avoid the same problem. Students also need
to keep fit by exercising and working out to sweat and burn off excess calories. A
balanced diet is part of good health and helps us stay fitter and maintain a healthier
body mass. Lastly, students should go for regular check-ups with doctors to know
their body weight and any health problems. If a student can carry out these steps,
obesity can be prevented and healthy lifestyles will become the new normal.
VI) Conclusion
Based on this project, I have concluded that this project has made me understand
statistics much better, and I now appreciate the importance of statistics in making
choices in daily life.
From my survey, 6.67% of the students are obese, 13.33% are overweight, 43.33%
are normal and the remaining 36.67% are underweight. This shows that most of the
students in my survey fall into the underweight or normal category. This is a good
result, and the project is a success. Most of the students are in the normal category,
and the charts show a low rate of obesity. However, the proportion of underweight
students is quite high, and these students may face many negative effects. These
statistics may help the students make decisions about their health by looking at the
BMI graph.
Students who are underweight are most probably affected by stress or by skipping
meals, which makes them thinner and malnourished, so their bodies will start to
function poorly. Students in this category should try to increase their weight by
meeting a nutritionist, who is experienced in this field and can give good suggestions
and advice on dieting and the right daily food intake. These students should also
avoid empty calories and junk food, as these do not help, and should consider
high-protein foods which can help build stronger muscles.
As for the overweight and obese students, they should avoid junk food and oily
food, as these contribute to weight gain which ends in obesity. Some students may
think that skipping meals will help them reduce their weight, but this is not a proper
way of dieting. They should make sure that every meal is low in calories and contains
good protein; this will help with dieting. Obesity increases a student's risk of chronic
diseases such as heart disease and diabetes, so they should exercise more frequently
to burn off the extra fat. Meeting a dietitian will help obese students learn how to eat
healthy meals instead of skipping them, and a dietitian can also suggest how to
manage weight through exercise and workouts.
