
STATISTICS:

1. WHAT ARE THE DIFFERENT SAMPLING TECHNIQUES?


- When you conduct research about a group of people, it’s rarely possible to collect data from
every person in that group. Instead, you select a sample. The sample is the group of individuals
who will actually participate in the research.

To draw valid conclusions from your results, you have to carefully decide how you will select a
sample that is representative of the group as a whole. There are two types of sampling
methods:

- Probability sampling involves random selection, allowing you to make statistical
inferences about the whole group.
- Non-probability sampling involves non-random selection based on convenience or
other criteria, allowing you to easily collect initial data. A short code sketch
contrasting the two approaches follows below.
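
As a minimal illustration, the following Python sketch contrasts simple random
sampling (a probability method) with convenience sampling (a non-probability
method). The population of 100 numbered individuals and the sample size of 10
are hypothetical values chosen purely for demonstration.

import random

# Hypothetical population: 100 individuals, labeled 1 through 100.
population = list(range(1, 101))

# Probability sampling (simple random sampling): every individual has an
# equal chance of being selected, so the sample supports statistical
# inference about the whole population.
random_sample = random.sample(population, 10)

# Non-probability sampling (convenience sampling): modeled here as taking
# the first 10 individuals who happen to be at hand; easy to collect, but
# not necessarily representative.
convenience_sample = population[:10]

print("Simple random sample:", random_sample)
print("Convenience sample:  ", convenience_sample)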

2. ORIGIN OF STATISTICS
- Statistics may be said to have its origin in census counts taken thousands of years
ago. As a distinct scientific discipline, however, it was developed in the early 19th
century as the study of populations, economies, and moral actions, and later in
that century as the mathematical tool for analyzing such numbers.

3. PROMINENT PERSONS BEHIND STATISTICS


1. Johann Carl Friedrich Gauss (1777-1855) was a German mathematical prodigy who laid
much of the groundwork for statistics, particularly through his work in probability
theory. He may be best known for the method of least squares, a way of managing
errors in observations.
2. Florence Nightingale (1820-1910) traveled as a nurse to a hospital during the Crimean
War in 1854. The conditions were alarmingly unsanitary. Nightingale proceeded to use
her skills in data collection and analysis (honed by her study of mathematics) to provide
evidence that the conditions surrounding the soldiers were likely more deadly than the
wounds incurred during battle. She created a graphic that clearly established that fact –
a novel approach at that time. This made Nightingale a pioneer in the field of statistical
graphics.
3. Karl Pearson (1857-1936) shares the title of father of modern statistics with his
fellow statistician (and rival) Ronald A. Fisher. Among his major contributions to
statistics is the Pearson product-moment correlation, a procedure for ascertaining the
magnitude of a relationship or association between variables. He also developed the
chi-square test. Pearson founded the world’s first university statistics department at
University College London in 1911 and wrote The Grammar of Science (1892); the
saying “statistics is the grammar of science” is attributed to him.
4. William Sealy Gosset (1876-1937). Among the great statisticians is a man who was not a
statistician at all – he was, in fact, the head brewer of Guinness beer. He was tasked
with testing the consistency of hops in small batches, and thus was born the now
prominent t-distribution, a method for interpreting information extracted from small
samples of data. Why isn’t he better known? When he published his findings, Gosset
was required to adopt a pseudonym in order to protect Guinness trade secrets, so you
might know him instead as “Student.”
5. Ronald A. Fisher (1890-1962) is considered the father of modern statistics along with
Karl Pearson. It was Fisher who laid the groundwork for much of experimental design,
statistical inference, and the procedure known as Analysis of Variance (ANOVA). Fisher
argued for randomization in experimental design and proposed the now conventional
use of a p-value of .05 as a threshold for statistical significance. Fisher also
developed the maximum likelihood method of estimation (i.e., estimating the
parameters of a statistical model from observations).
6. W. Edwards Deming (1900-1993) championed and advanced the practice of quality
control. He was instrumental in helping post-WWII Japan rise as an industrial world
power, given his expertise in systems and systems thinking. Deming also taught industry
leaders to focus on both internal groups and external groups, and on how they relate to
and work with each other – a form of collaboration so fundamental in research
endeavors today.
7. Gertrude Cox (1900-1978) was among the famous statisticians to experience many
“firsts.” Cox was the first recipient of Iowa State’s master’s degree in statistics. She was
the first female full professor as well as the first female department head at North
Carolina State College in 1941, founding the Department of Experimental Statistics. She
was also elected to membership in the National Academy of Sciences in 1975. Cox
viewed statisticians as “partners in science” – a validation of statistician John
Tukey’s statement: “the best thing about being a statistician is that you get to play in
everyone’s backyard.” Another of Cox’s significant contributions to statistics was
championing the use of computers for analysis.
8. John Tukey (1915-2000) could certainly be described as one of the great statisticians; his
own contributions to statistics were wide-ranging and numerous. He coined the term
“bit” from binary digit as well as the term “software.” He is known for robust methods,
graphing, and creating the ubiquitous box plot (introduced in his classic
book Exploratory Data Analysis). The Tukey Range Test is employed often in ANOVA
when doing multiple comparison procedures (testing if means differ significantly).
9. George Box (1919-2013) was a British chemist who considered himself an “accidental
statistician.” He was called upon as a sergeant in WWII to study the effects of poisonous
gases. He developed expertise in data transformations, co-developing (with David Cox)
the Box-Cox transformation, which reshapes a non-normal dependent variable into an
approximately normal one. He may be best known for his statement, “essentially, all
models are wrong, but some are useful.” This was intended not as an indictment but
rather as a reminder that model results must be applicable to everyday life.
10. Janet Norwood (1923-2015) was the first female commissioner of the US Bureau of
Labor Statistics (appointed in 1979 by Carter and reappointed twice by Reagan). She
had a leading role in improving critical government statistics such as the Consumer
Price Index (CPI) and the unemployment rate. She was elected president of the
American Statistical Association in 1989 and was a senior fellow at both the Urban
Institute and the Conference Board, a New York think tank established in 1916.

4. CENTRAL TENDENCY

- A measure of central tendency is a single value that attempts to describe a set of data by
identifying the central position within that set of data. As such, measures of central tendency
are sometimes called measures of central location. They are also classed as summary statistics.
The mean (often called the average) is most likely the measure of central tendency that you are
most familiar with, but there are others, such as the median and the mode.

The mean, median, and mode are all valid measures of central tendency, but under
different conditions some are more appropriate to use than others. In the following
sections, we will look at the mean, median, and mode, learning how to calculate each
and under what conditions each is most appropriate to use.

The mean (or average) is the most popular and well-known measure of central
tendency. It can be used with both discrete and continuous data, although it is
most often used with continuous data.

The median is the middle score for a set of data that has been arranged in order
of magnitude. The median is less affected by outliers and skewed data.

The mode is the most frequent score in our data set. On a bar chart or histogram,
it corresponds to the highest bar. You can, therefore, sometimes consider the mode
as being the most popular option.
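
As a minimal illustration, the following Python sketch computes all three measures
with the standard library’s statistics module. The data set is made up for
demonstration, with one outlier (45) included to show why the median is less
affected by extreme values than the mean.

import statistics

scores = [4, 5, 5, 6, 7, 8, 45]  # hypothetical scores; 45 is an outlier

print("Mean:  ", statistics.mean(scores))    # about 11.43, pulled upward by the outlier
print("Median:", statistics.median(scores))  # 6, the middle score, unaffected by the outlier
print("Mode:  ", statistics.mode(scores))    # 5, the most frequent score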
