Dharmaraja Selvamuthu · Dipayan Das

Introduction to Statistical Methods, Design of Experiments and Statistical Quality Control
Dharmaraja Selvamuthu
Department of Mathematics
Indian Institute of Technology Delhi
New Delhi, India

Dipayan Das
Department of Textile Technology
Indian Institute of Technology Delhi
New Delhi, India
This Springer imprint is published by the registered company Springer Nature Singapore Pte Ltd.
The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore 189721,
Singapore
Foreword
add value as assessment tools for instructors and also offer additional practice for
students. The levels of difficulty of the exercises are designed with this end in mind.
The authors will be appreciated by both students and instructors for this valuable
addition.
Good textbooks are like caring companions for students. This book has achieved
that merit.
Preface
examples are also discussed in Chap. 2. Chapter 3 presents descriptive statistics, starting with concepts such as data, information, and description. Various descriptive measures, such as measures of central tendency, measures of variability, and the coefficient of variation, are presented in this chapter. Inference in mathematics is based on logic and is presumably infallible, at least when correctly applied, while
statistical inference considers how inference should proceed when the data are
subject to random fluctuation. Sampling theory can be employed to obtain infor-
mation about samples drawn at random from a known population. However, often it
is more important to be able to infer information about a population from samples
drawn from it. Such problems are dealt with in statistical inference. Statistical inference may be divided into four major areas: theory, estimation, tests of hypothesis, and correlation and regression analysis. This book treats these four areas separately, dealing with the theory of sampling distributions and estimation in Chap. 4, hypothesis testing in Chap. 5, and correlation and regression analysis in Chap. 6. Chapter 4 treats statistical inference in detail, beginning with sampling distributions. The standard sampling distributions, such as the chi-square, Student's t, and
F distributions are presented. The sample mean and sample variance are studied,
and their expectations and variances are given. The central limit theorem is applied
to determine the probability distribution they follow. Then, this chapter deals with
point estimation, a method of moments, maximum likelihood estimator, and
interval estimation. The classic methods are used to estimate unknown population
parameters such as mean, proportion, and variance by computing statistics from
random samples and applying the theory of sampling distributions.
Chapter 5 covers statistical tests of hypotheses in detail with many examples. Topics such as simple and composite hypotheses, types of error, power, operating characteristic curves, the p value, the Neyman–Pearson method, the generalized likelihood ratio test, and the use of asymptotic results to construct tests are covered. In this chapter, analysis of variance, in particular,
one-way ANOVA, is also introduced, whereas its applications are presented in the
later chapters. Chapter 6 discusses the analysis of correlation and regression. This
chapter starts by introducing Spearman’s correlation coefficient and rank correlation
and later on presents simple linear regression and multiple linear regression.
Further, in this chapter, nonparametric tests such as Wilcoxon, Smirnov, and
median tests are presented. Descriptive statistics, sampling distributions, estimation, statistical inference, hypothesis testing, and correlation and regression analysis are presented in Chaps. 2–6 and are applied to the design and analysis of
experiments in Chaps. 7–9. Chapter 7 gives an introduction to the design of
experiments. Starting with the definition of the design of experiments, this chapter
gives a brief history of experimental design along with the need for it. It then
discusses the principles and provides us with the guidelines of the design of
experiments and ends with the illustration of typical applications of statistically
designed experiments in process-, product-, and management-related activities. This
chapter also deals with a very popular design of experiments, known as the completely randomized design; it describes how to conduct such an experiment and discusses the analysis of the data obtained from it.
Throughout the book, the exposition offers easy access to the subject matter without sacrificing rigor, while keeping prerequisites to a minimum. A distinctive feature of this text is the “Remarks” following most of the
theorems and definitions. In Remarks, a particular result or concept being presented
is discussed from an intuitive point of view. A list of references is given at the end
of each chapter. Also, at the end of each chapter, there is a list of exercises to
facilitate the understanding of the main body of each chapter. Most of the examples and exercises are classroom-tested in the courses that we have taught over many years. Since the book is the outcome of years of teaching experience, continuously improved with students' feedback, it should offer students a fruitful learning experience, and instructors should equally enjoy facilitating such creative learning. We hope that this book will serve as a valuable text for students.
We would like to express our gratitude to our organization—Indian Institute of
Technology Delhi—and numerous individuals who have contributed to this book.
Many former students of IIT Delhi, who took courses, namely MAL140 and
TTL773, provided excellent suggestions that we have tried to incorporate in this
book. We are immensely thankful to Prof. A. Rangan of IIT Madras for his
encouragement and criticism during the writing of this book. We are also indebted
to our doctoral research scholars, Dr. Arti Singh, Mr. Puneet Pasricha, Ms. Nitu
Sharma, Ms. Anubha Goel, and Mr. Ajay K. Maddineni, for their tremendous help
during the preparation of the manuscript in LaTeX and also for reading the
manuscript from a student point of view.
We gratefully acknowledge the book grant provided by the office of Quality
Improvement Programme of the IIT Delhi. Our thanks are also due to Mr. Shamim
Ahmad from Springer for his outstanding editorial work for this book. We are also
grateful to those anonymous referees who reviewed our book and provided us with
excellent suggestions. On a personal note, we wish to express our deep appreciation
to our families for their patience and support during this work.
In the end, we wish to tell our dear readers that we have tried hard to make this book free of mathematical and typographical errors and misleading or ambiguous statements. However, some may still remain. We will be grateful to receive corrections and suggestions for further improvement of this book.
Contents

1 Introduction
  1.1 Statistics
  1.2 Statistical Methods
    1.2.1 Problem of Data Representation
    1.2.2 Problem of Fitting the Distribution to the Data
    1.2.3 Problem of Estimation of Parameters
    1.2.4 Problem of Testing of Hypothesis
    1.2.5 Problem of Correlation and Regression
  1.3 Design of Experiments
    1.3.1 History
    1.3.2 Necessity
    1.3.3 Applications
  1.4 Statistical Quality Control
  References
2 Review of Probability
  2.1 Basics of Probability
    2.1.1 Definition of Probability
    2.1.2 Conditional Probability
    2.1.3 Total Probability Rule
    2.1.4 Bayes’ Theorem
  2.2 Random Variable and Distribution Function
    2.2.1 Discrete Type Random Variable
    2.2.2 Continuous Type Random Variable
    2.2.3 Function of a Random Variable
  2.3 Moments and Generating Functions
    2.3.1 Mean
    2.3.2 Variance
    2.3.3 Moment of Order n
    2.3.4 Generating Functions
Problems
Reference
Appendix A: Statistical Tables
Index
About the Authors
Acronyms
Chapter 1
Introduction
1.1 Statistics
Statistics is the science of data. The term statistics is derived from the New Latin
statisticum collegium (“council of state”) and the Italian word statista (“statesman”).
In a statistical investigation, for reasons of time or cost, one may not be able to study each individual element of the population. Consider a manufacturing unit that receives raw material from vendors. It is then necessary to inspect the raw material before accepting it, yet it is practically impossible to check each and every item. Thus, a few items (a sample) are randomly selected from the lot or batch and inspected individually before deciding whether to accept or reject the
lot. Consider another situation where one wants to find the retail book value (depen-
dent variable) of a used automobile using the age of the automobile (independent
variable). After conducting a study of past sales of used automobiles, we are left with a set of numbers. The challenge is to extract meaningful information from the observed behavior (i.e., how the age of the automobile is related to its retail book value). Hence, statistics deals with the collection, classification, analysis, and interpretation of data, and it provides us with an objective approach to do
this. There are several statistical techniques available for learning from data. One
needs to note that the scope of statistical methods is much wider than only statistical
inference problems. Such techniques are frequently applied in different branches of
science, engineering, medicine, and management. One of them is known as design
of experiments. When the goal of a study is to demonstrate cause and effect, an experiment is the only source of convincing data. For example, consider an observational investigation in which researchers observe individuals and measure variables of interest but do not attempt to influence the response variable. To study cause and effect, by contrast, the researcher deliberately imposes some treatment on individuals and then observes the response variables. Thus, design of experiments refers to the process of planning and conducting experiments and analyzing the experimental data by statistical methods so that valid and objective conclusions can be obtained with minimum use of resources. Another such technique is statistical quality control.
The architect of modern statistical methods in the Indian subcontinent was undoubtedly Mahalanobis,1 but he was helped by the very distinguished scientist C. R. Rao.2 Statistical methods are mathematical formulas, models, and techniques that are used in statistical inference on raw data. Statistical inference mainly takes the form of the problem of point or interval estimation of certain parameters of the population and of testing various claims about the population parameters, known as the hypothesis testing problem. The main approaches to statistical inference can be classified into parametric, nonparametric, and Bayesian. Probability is an indispensable tool for statistical inference; indeed, there is a close connection between probability and statistics. In a probability problem, the characteristics of the population under study are assumed to be known, whereas in statistics the main concern is to learn these characteristics from the characteristics of a sample drawn from the population.
Statistics and data analysis procedures generally yield their output in numeric or
tabular forms. In other words, after an experiment is performed, we are left with
the set of numbers (data). The challenge is to understand the features of the data
and extract useful information. Empirical or descriptive statistics helps us in this. It
encompasses both graphical visualization methods and numerical summaries of the
data.
1 Prasanta Chandra Mahalanobis (June 29, 1893–June 28, 1972) was an Indian scientist and applied
statistician. He is best remembered for the Mahalanobis distance, a statistical measure, and for being
one of the members of the first Planning Commission of free India.
2 Calyampudi Radhakrishna Rao, known as C R Rao (born September 10, 1920), is an Indian-born American mathematician and statistician.
Graphical Representation
Over the years, it has been found that tables and graphs are particularly useful ways
for presenting data. Such graphical techniques include plots such as scatter plots,
histograms, probability plots, spaghetti plots, residual plots, box plots, block plots,
and bi-plots. In descriptive statistics, a box plot is a convenient way of graphically
depicting groups of numerical data through their quartiles. A box plot presents a
simple but effective visual description of the main features, including symmetry or
skewness, of a data set. On the other hand, pie charts and bar graphs are useful when one is interested in depicting the categories into which a population is divided. Thus, they apply to categorical or qualitative data. In a pie chart, a
circle (pie) is used to represent a population and it is sliced up into different sectors
with each sector representing the proportion of a category. One of the most basic and
frequently used statistical methods is to plot a scatter diagram showing the pattern of
relationships between a set of samples, on which there are two measured variables
x and y (say). One may be interested in fitting a curve to this scatter, or in the
possible clustering of samples, or in outliers, or in collinearities, or other regularities.
Histograms give a different way to organize and display the data. A histogram does
not retain as much information on the original data as a stem-and-leaf diagram, in
the sense that the actual values of the data are not displayed. Further, histograms are more flexible in selecting the classes and can also be applied to bivariate data. This flexibility makes them suitable as estimators of the underlying distribution of the population.
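The quartile summary that underlies a box plot is easy to compute directly. The following sketch (in Python, which the book itself does not use) works on invented measurements and applies the common 1.5 × IQR whisker rule:

```python
import statistics

# Hypothetical measurements (invented data, for illustration only).
data = [4.2, 4.8, 5.1, 5.3, 5.6, 5.9, 6.1, 6.4, 7.0, 9.8]

# Quartiles split the sorted data into four equal parts; a box plot
# draws the box from Q1 to Q3 with a line at the median (Q2).
q1, q2, q3 = statistics.quantiles(data, n=4)

iqr = q3 - q1  # interquartile range: the height of the box
# A common whisker rule flags points beyond 1.5 * IQR as outliers.
outliers = [x for x in data if x < q1 - 1.5 * iqr or x > q3 + 1.5 * iqr]

print(q1, q2, q3, iqr, outliers)
```

Note that `statistics.quantiles` defaults to the "exclusive" interpolation method; other conventions give slightly different quartile values on small samples.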
Descriptive Statistics
Descriptive statistics are broken down into measures of central tendency and mea-
sures of variability (spread), and these measures provide valuable insight into the
corresponding population features. Further, in descriptive statistics, the feature iden-
tification and parameter estimation are obtained with no or minimal assumptions on
the underlying population. Measures of central tendency include the mean, median,
and mode, while measures of variability include the standard deviation or variance,
the minimum and maximum values, and the kurtosis and skewness. Measures of
central tendency describe the center position of a data set. On the other hand, mea-
sures of variability help in analyzing how spread-out the distribution is for a set of
data. For example, in a class of 100 students, the measure of central tendency may
give average marks of students to be 62, but it does not give information about how
marks are distributed because there can still be students with 1 and 100 marks. Mea-
sures of variability help us communicate this by describing the shape and spread of
the data set.
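The marks example above can be made concrete. This short Python sketch (the two classes and their marks are invented) shows two data sets with the same mean but very different spreads:

```python
import statistics

# Two hypothetical classes of five students each, both averaging 62
# marks but with very different spreads (invented numbers).
class_a = [60, 61, 62, 63, 64]
class_b = [1, 47, 62, 100, 100]

mean_a, mean_b = statistics.mean(class_a), statistics.mean(class_b)
sd_a, sd_b = statistics.stdev(class_a), statistics.stdev(class_b)

# Same centre, very different variability: the mean alone hides
# how far individual marks stray from it.
print(mean_a, sd_a)
print(mean_b, sd_b)
```

Both classes report a mean of 62, yet the sample standard deviations differ by more than an order of magnitude, which is exactly the information a measure of variability adds.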
For instance, suppose one is interested in examining n people and recording a value 1 for people who have been exposed to the tuberculosis (TB) virus and a value 0 for people who have not been so exposed. The data will consist of a random vector X = (X1, X2, ..., Xn), where Xi = 1 if the ith person has been exposed to the TB virus and Xi = 0 otherwise. A possible model would be to assume that X1, X2, ..., Xn behave like n independent Bernoulli random variables, each of which has the same (unknown) probability p of taking the value 1. If the assumed parametric
model is a good approximation to the data generation mechanism, then the parametric
inference is not only valid but can be highly efficient. However, if the approximation
is not good, the results can be distorted. For instance, suppose we wish to test a new device
for measuring blood pressure. We will try it out on n people and record the difference
between the value returned by the device and the true value as recorded by standard
techniques. The data will consist of a random vector X = (X1, X2, ..., Xn), where Xi is the difference for the ith person. A possible model would be to assume that X1, X2, ..., Xn behave like n independent random variables, each having a normal density with mean 0 and variance σ², where σ² is some unknown positive real number. It has been shown that even small deviations of the data generation
mechanism from the specified model can lead to large biases. Three methods of fitting
models to data are: (a) the method of moments, which derives its name because it
identifies the model parameters that correspond (in some sense) to the nonparametric
estimation of selected moments, (b) the method of maximum likelihood, and (c) the
method of least squares which is most commonly used for fitting regression models.
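For the Bernoulli model described above, the method of moments and the method of maximum likelihood happen to give the same answer: the sample proportion. A minimal Python simulation (the true p and sample size are invented for illustration):

```python
import random
import statistics

random.seed(42)

# Simulate the TB-exposure model: n independent Bernoulli(p) indicators.
p_true = 0.3       # assumed "unknown" parameter, chosen for the demo
n = 10_000
x = [1 if random.random() < p_true else 0 for _ in range(n)]

# Method of moments: match the first sample moment to E[X] = p.
p_mom = statistics.mean(x)

# Maximum likelihood: for Bernoulli data the likelihood is maximised
# at the sample proportion, so the two estimators coincide here.
p_mle = sum(x) / n

print(p_mom, p_mle)
```

With a large simulated sample, both estimates land close to the true p, and they are numerically identical because both reduce to the sample mean of the 0/1 data.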
In addition, there is a need to focus on one of the main approaches for extrapolating
sample information to the population, called the parametric approach. This approach
starts with the assumption that the distribution of the population of interest belongs to
a specific parametric family of distribution models. Many such models depend on a
small number of parameters. For example, Poisson models are identified by the single
parameter λ, and normal models are identified by two parameters, μ and σ². Under
this assumption (i.e., that there is a member of the assumed parametric family of
distributions that equals the population distribution of interest), the objective becomes
that of estimating the model parameters, to identify which member of the parametric
family of distributions best fits the data.
Point Estimation
Point estimation, in statistics, is the process of finding an approximate value of some parameter of a population from random samples of the population. The method mainly comprises finding an estimating formula for a parameter, called the estimator of the parameter. The numerical value obtained from the formula on the basis of a sample is called the estimate.
For example, the sample mean

    (1/n) Σ_{i=1}^{n} X_i

is the standard estimator of the population mean.
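The distinction between an estimator (a formula) and an estimate (the number it yields on one particular sample) can be illustrated in a few lines of Python; the sample values below are invented:

```python
# The estimator is a rule applied to any sample; the estimate is the
# number it produces on one particular sample.

def sample_mean(xs):
    """Estimator of the population mean: (1/n) * sum of the X_i."""
    return sum(xs) / len(xs)

# One hypothetical sample of five measurements (invented values).
sample = [9.2, 10.1, 9.8, 10.4, 9.5]
estimate = sample_mean(sample)  # the estimate for this sample
print(estimate)
```

Applying the same estimator to a different sample would generally yield a different estimate, which is why the estimator itself is studied as a random quantity.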
Example 1.2 A retailer buys garments of the same style from two manufacturers and suspects that the variation in the masses of the garments produced by the two makers is different. Samples of sizes n1 and n2 were therefore chosen from batches of garments produced by the first and the second manufacturer, respectively, and weighed. We wish to find confidence intervals for the ratio of the variances of the mass of the garments from the two manufacturers.
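The exact interval in Example 1.2 is based on the F distribution. As a standard-library-only sketch, the F critical values can be approximated by Monte Carlo simulation; all masses, sample sizes, and standard deviations below are invented:

```python
import random
import statistics

random.seed(7)

# Invented garment masses (grams) from the two manufacturers.
m1 = [random.gauss(250, 8) for _ in range(15)]   # n1 = 15
m2 = [random.gauss(250, 5) for _ in range(12)]   # n2 = 12

ratio = statistics.variance(m1) / statistics.variance(m2)

# The pivot (s1^2/sigma1^2) / (s2^2/sigma2^2) follows an F distribution.
# Approximate its 2.5% and 97.5% quantiles by simulation instead of tables.
sims = sorted(
    statistics.variance([random.gauss(0, 1) for _ in range(15)])
    / statistics.variance([random.gauss(0, 1) for _ in range(12)])
    for _ in range(20_000)
)
f_lo, f_hi = sims[int(0.025 * len(sims))], sims[int(0.975 * len(sims))]

# Approximate 95% confidence interval for sigma1^2 / sigma2^2.
ci = (ratio / f_hi, ratio / f_lo)
print(ci)
```

In practice one would read the F critical values from tables (or a statistics library) rather than simulate them; the simulation is only a way to keep this sketch self-contained.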
Other than point estimation and interval estimation, one may be interested in deciding
which value among a set of values is true for a given distribution. In practice, the
functional form of the distribution is unknown. One may be interested in some
properties of the population without making any assumption on the distribution. This
procedure of taking a decision on the value of the parameter (parametric) or nature of
distribution (nonparametric) is known as the testing of hypothesis. The nonparametric
tests are also known as distribution-free tests. Some of the standard hypothesis tests
are the z test and t test (parametric) and the Kolmogorov–Smirnov (KS) test and median test (nonparametric).
Example 1.4 Often one wishes to investigate the effect of a factor (independent
variable x) on a response (dependent variable y). We then carry out an experiment
to compare a treatment when the levels of the factor are varied. This is a hypothesis
testing problem where we are interested in testing the equality of treatment means of
a single factor x on a response variable y (such problems are discussed in Chap. 7).
Example 1.5 Consider the following problem. A survey showed that a random sample of 100 private passenger cars was driven an average of 9,500 km a year with a
standard deviation of 1,650 km. Use this information to test the hypothesis that pri-
vate passenger cars are driven on the average 9,000 km a year against the alternatives
that the correct average is not 9,000 km a year.
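Example 1.5 is a one-sample z test (large-sample normal approximation), and the arithmetic can be checked directly:

```python
import math

n = 100        # sample size
xbar = 9_500   # sample mean (km per year)
mu0 = 9_000    # hypothesised mean under H0
s = 1_650      # sample standard deviation

# Test statistic for H0: mu = 9000 against the two-sided alternative.
z = (xbar - mu0) / (s / math.sqrt(n))
print(round(z, 2))

# At the 5% level the two-sided critical value is 1.96.
reject = abs(z) > 1.96
```

Here z ≈ 3.03 exceeds 1.96, so at the 5% level the hypothesis that cars are driven 9,000 km a year on average would be rejected.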
Example 1.7 Consider another example: the weekly number of accidents over a 30-week period on Delhi roads. From the sample of n observations, we wish to test
the hypothesis that the number of accidents in a week has a Poisson distribution.
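A common way to carry out such a test is a chi-square goodness-of-fit comparison of observed and Poisson-expected category counts. The weekly counts below are invented, and the book's own data and procedure may differ; this is only a sketch of the idea:

```python
import math
from collections import Counter

# Invented weekly accident counts for 30 weeks.
counts = [2, 3, 1, 4, 2, 5, 0, 3, 2, 1, 4, 2, 3, 2, 1,
          0, 2, 3, 5, 1, 2, 4, 3, 2, 1, 3, 2, 0, 4, 2]
n = len(counts)
lam = sum(counts) / n  # ML estimate of the Poisson rate

def pois_pmf(k, rate):
    """Poisson probability P(X = k)."""
    return math.exp(-rate) * rate**k / math.factorial(k)

# Group the counts into categories 0, 1, 2, 3 and ">= 4".
observed = Counter(min(c, 4) for c in counts)
probs = [pois_pmf(k, lam) for k in range(4)]
probs.append(1 - sum(probs))  # P(X >= 4)

# Chi-square statistic: sum of (observed - expected)^2 / expected.
chi2 = sum(
    (observed.get(k, 0) - n * p) ** 2 / (n * p) for k, p in enumerate(probs)
)
print(lam, chi2)
```

The statistic would then be compared with a chi-square critical value whose degrees of freedom equal the number of categories minus one, minus one more for the estimated rate λ.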
Several coefficients are used to measure the degree of correlation. Correlations are useful because
they can indicate a predictive relationship that can be exploited in practice. On the
other hand, regression analysis is a tool to identify the relationship that exists between
a dependent variable and one or more independent variables. In this technique, we
make a hypothesis about the relationship and then estimate the parameters of the
model and hence the regression equation. Correlation analysis can be used in two
basic ways: in the determination of the predictive ability of the variable and also in
determining the correlation between the two variables given.
The first part of the book discusses statistical methods whose applications in the field of design of experiments and SQC are discussed in the second part of the book. For
instance, in design of experiments, a well-designed experiment makes it easier to
understand different sources of variation. Analysis techniques such as ANOVA and
regression help to partition the variation for predicting the response or determining
if the differences seen between factor levels are more than expected when compared
to the variability seen within a factor level.
1.3.1 History
3 Sir Ronald Aylmer Fisher FRS (February 17, 1890–July 29, 1962), who published as R. A. Fisher,
was a British statistician and geneticist. For his work in statistics, he has been described as “a genius
who almost single-handedly created the foundations for modern statistical science” and “the single
most important figure in twentieth-century statistics.”
1.3.2 Necessity
There are several ways an experiment can be performed. They include the best-guess approach (trial-and-error method), the one-factor-at-a-time approach, and the design-of-experiments approach. Let us discuss them one by one with the help of a practical
example. Suppose a product development engineer wanted to minimize the electrical
resistivity of electro-conductive yarns prepared by in situ electrochemical polymer-
ization of an electrically conducting monomer. Based on the experience, he knew
that the polymerization process factors, namely polymerization time and polymeriza-
tion temperature, played an important role in determining the electrical resistivity of
the electro-conductive yarns. He conducted an experiment with 20 min polymerization time and 10 °C polymerization temperature and prepared an electro-conductive yarn. This yarn showed an electrical resistivity of 15.8 kΩ/m. Further, he prepared another electro-conductive yarn, keeping the polymerization time at 60 min and the polymerization temperature at 30 °C. The electro-conductive yarn thus prepared exhibited an electrical resistivity of 5.2 kΩ/m. He thought that this was the lowest resistivity possible to obtain, and hence, he decided not to carry out any further experiments. This
strategy of experimentation, often known as best-guess approach or trial-and-error
method, is frequently followed in practice. It sometimes works reasonably well if
the experimenter has an in-depth theoretical knowledge and practical experience of
the process. However, there are serious disadvantages associated with this approach.
Consider that the experimenter does not obtain the desired results. He will then
continue with another combination of process factors. This can be continued for a
long time, without any guarantee of success. Further, consider that the experimenter
obtains an acceptable result. He then stops the experiment, though there is no guar-
antee that he obtains the best solution. Another strategy of experiment that is often
used in practice relates to one-factor-at-a-time approach. In this approach, the level
of a factor is varied, keeping the level of the other factors constant. Then, the level
of another factor is altered, keeping the level of remaining factors constant. This is
continued till the levels of all factors are varied. The resulting data are then ana-
lyzed to show how the response variable is affected by varying each factor while
keeping other factors constant. Suppose the product development engineer followed
this strategy of experimentation and obtained the results displayed in Fig. 1.1. It can be seen that the electrical resistivity increased from 15.8 to 20.3 kΩ/m when the polymerization time increased from 20 to 60 min, keeping the polymerization temperature constant at 10 °C. Further, the electrical resistivity decreased from 15.8 to 10.8 kΩ/m when the polymerization temperature was raised from 10 to 30 °C, keeping the polymerization time at 20 min. The optimal combination of process factors to obtain the lowest electrical resistivity (10.8 kΩ/m) would thus be chosen as 20 min polymerization time and 30 °C polymerization temperature.
The major disadvantage of the one-factor-at-a-time approach lies in the fact that
it fails to consider any possible interaction present between the factors. Interaction
is said to happen when the difference in responses between the levels of one factor is
not the same at all levels of the other factors. Figure 1.2 displays an interaction between the two factors.
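Combining the resistivity values reported for the two strategies above into a 2 × 2 table (purely for illustration; the book presents this graphically in Figs. 1.1 and 1.2), the interaction can be quantified:

```python
# Resistivity values (kΩ/m) taken from the experiments described above,
# arranged as a 2x2 factorial in (time in min, temperature in °C).
y = {
    (20, 10): 15.8,
    (60, 10): 20.3,
    (20, 30): 10.8,
    (60, 30): 5.2,   # from the best-guess experiment
}

# Effect of raising time from 20 to 60 min at each temperature.
effect_at_10 = y[(60, 10)] - y[(20, 10)]   # resistivity goes up
effect_at_30 = y[(60, 30)] - y[(20, 30)]   # resistivity goes down

# With no interaction these two effects would be equal; half their
# difference is the usual interaction effect estimate.
interaction = (effect_at_30 - effect_at_10) / 2
print(effect_at_10, effect_at_30, interaction)
```

The effect of polymerization time reverses sign between the two temperatures (+4.5 versus −5.6 kΩ/m), which is precisely the interaction that the one-factor-at-a-time strategy cannot detect.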