
Chapter 1

Introduction to Research
and Research Methods

INTRODUCTION TO RESEARCH
Research and experimental development is formal work
undertaken systematically to increase the stock of knowledge,
including knowledge of humanity, culture and society, and the use of
this stock of knowledge to devise new applications (OECD, 2002, Frascati Manual: Proposed Standard Practice for Surveys on Research and Experimental Development, 6th edition). It is used to establish or
confirm facts, reaffirm the results of previous work, solve new or
existing problems, support theorems, or develop new theories. A
research project may also be an expansion on past work in the field.
To test the validity of instruments, procedures, or experiments,
research may replicate elements of prior projects, or the project as a
whole. The primary purposes of basic research (as opposed to applied
research) are documentation, discovery, interpretation, or the research
and development of methods and systems for the advancement of
human knowledge. Approaches to research depend on epistemologies,
which vary considerably both within and between humanities and
sciences.
Research has been defined in a number of different ways.
1. A broad definition of research is given by Martyn Shuttleworth - "In the broadest sense of the word, the definition of research includes any gathering of data, information and facts for the advancement of knowledge."
2. Another definition of research is given by Creswell who states -
"Research is a process of steps used to collect and analyze
information to increase our understanding of a topic or issue". It
consists of three steps: Pose a question, collect data to answer
the question, and present an answer to the question.
3. The Merriam-Webster Online Dictionary defines research in more
detail as "a studious inquiry or examination; especially:
investigation or experimentation aimed at the discovery and
interpretation of facts, revision of accepted theories or laws in
the light of new facts, or practical application of such new or
revised theories or laws".

Steps in conducting research


Research is often conducted using the hourglass model structure
of research. The hourglass model starts with a broad spectrum for
research, focusing in on the required information through the method
of the project (like the neck of the hourglass), then expands the research
in the form of discussion and results. The major steps in conducting
research are:
1. Identification of research problem
2. Literature review
3. Specifying the purpose of research
4. Determine specific research questions or hypotheses
5. Data collection
6. Analyzing and interpreting the data
7. Reporting and evaluating research
The steps generally represent the overall process; however, they should be viewed as an ever-changing process rather than a fixed set of steps. Most research begins with a general statement of the problem, or rather, the purpose for engaging in the study. The literature
review identifies flaws or holes in previous research which provides
justification for the study. Often, a literature review is conducted in a given subject area before a research question is identified. A gap in the current literature, as identified by a researcher, then engenders a
research question. The research question may be parallel to the
hypothesis. The hypothesis is the supposition to be tested. The
researcher(s) collects data to test the hypothesis. The researcher(s)
then analyzes and interprets the data via a variety of statistical methods, engaging in what is known as empirical research. The results of the data analysis, whether the null hypothesis is rejected or retained, are then reported and evaluated. At the end, the researcher may discuss
avenues for further research.
Rudolph Rummel says, "... no researcher should accept any one
or two tests as definitive. It is only when a range of tests are consistent
over many kinds of data, researchers, and methods can one have
confidence in the results."

Introduction to Methodology
A methodology is usually a guideline system for solving a problem,
with specific components such as phases, tasks, methods, techniques
and tools. It can be defined also as follows:
1. "The analysis of the principles of methods, rules, and postulates
employed by a discipline";
2. "The systematic study of methods that are, can be, or have been
applied within a discipline";
3. "The study or description of methods".
A methodology can be considered to include multiple methods,
each as applied to various facets of the whole scope of the
methodology. Research can be divided into two broad types: qualitative research and quantitative research.
Generally speaking, methodology does not describe specific
methods despite the attention given to the nature and kinds of processes
to be followed in a given procedure or in attaining an objective. When
proper to a study of methodology, such processes constitute a
constructive generic framework; thus they may be broken down into sub-processes, combined, or their sequence changed.

RESEARCH METHODS
To understand the use of statistics, one needs to know a little bit
about experimental design or how a researcher conducts investigations.
A little knowledge about methodology will provide us with a place to
hang our statistics. In other words, statistics are not numbers that
just appear out of nowhere. Rather, the numbers (data) are generated
out of research. Statistics are merely a tool to help us answer research
questions. As such, an understanding of methodology will facilitate
our understanding of basic statistics.
Validity
A key concept relevant to a discussion of research methodology
is that of validity. When an individual asks, "Is this study valid?", they
are questioning the validity of at least one aspect of the study. There
are four types of validity that can be discussed in relation to research
and statistics. Thus, when discussing the validity of a study, one
must be specific as to which type of validity is under discussion.
Therefore, the answer to the question asked above might be that the
study is valid in relation to one type of validity but invalid in relation to
another type of validity.
Each of the four types of validity will be briefly defined and
described below. Be aware that this represents a cursory discussion
of the concept of validity. Each type of validity has many threats
which can pose a problem in a research study. Examples, but not an exhaustive discussion, of threats to each type of validity will be provided.
For a comprehensive discussion of the four types of validity, the
threats associated with each type of validity, and additional validity
issues see Cook and Campbell (1979).
Statistical Conclusion Validity: Unfortunately, without a
background in basic statistics, this type of validity is difficult to
understand. According to Cook and Campbell (1979), "statistical
conclusion validity refers to inferences about whether it is reasonable
to presume covariation given a specified alpha level and the obtained
variances (p. 41)." Essentially, the question that is being asked is -
"Are the variables under study related?" or "Is variable A correlated
(does it covary) with Variable B?". If a study has good statistical
conclusion validity, we should be relatively certain that the answer to these questions is "yes". Examples of issues or problems that would threaten statistical conclusion validity would be random heterogeneity
of the research subjects (the subjects represent a diverse group - this
increases statistical error) and small sample size (more difficult to
find meaningful relationships with a small number of subjects).
Internal Validity: Once it has been determined that the two variables
(A & B) are related, the next issue to be determined is one of causality.
Does A cause B? If a study is lacking internal validity, one cannot make cause and effect statements based on the research; the study
would be descriptive but not causal. There are many potential threats
to internal validity. For example, if a study has a pretest, an experimental
treatment, and a follow-up posttest, history is a threat to internal
validity. If a difference is found between the pretest and posttest, it
might be due to the experimental treatment but it might also be due to
any other event that subjects experienced between the two times of
testing (for example, a historical event, a change in weather, etc.).
Construct Validity: One is examining the issue of construct validity
when one is asking the questions "Am I really measuring the construct
that I want to study?" or "Is my study confounded (Am I confusing
constructs)?". For example, if I want to know whether a particular drug (Variable A) will be effective for treating depression (Variable B), I will need at least one measure of depression. If that measure does not truly reflect depression levels but rather anxiety levels (Confounding Variable X), then my study will be lacking construct validity. Thus, good construct validity means that we will be relatively sure that Construct A is related to Construct B and that this is possibly a causal relationship. Examples of other threats to construct validity include subjects' apprehension about being evaluated, hypothesis guessing on the part of subjects, and bias introduced in a study by expectancies on the part of the experimenter.
External Validity: External validity addresses the issue of being
able to generalize the results of your study to other times, places, and
persons. For example, if you conduct a study looking at heart disease
in men, can these results be generalized to women? Therefore, one
needs to ask the following questions to determine if a threat to the
external validity exists: "Would I find these same results with a different sample?", "Would I get these same results if I conducted my study in a different setting?", and "Would I get these same results if I had conducted this study in the past or if I redo this study in the future?" If I cannot answer "yes" to each of these questions, then the
external validity of my study is threatened.
Types of Research Studies
There are four major classifications of research designs. These
include observational research, correlational research, true
experiments, and quasi-experiments. Each of these will be discussed
further below.
Observational research: There are many types of studies which
could be defined as observational research including case studies,
ethnographic studies, ethological studies, etc. The primary
characteristic of each of these types of studies is that phenomena are
being observed and recorded. Oftentimes, the studies are qualitative
in nature. For example, a psychological case study would entail
extensive notes based on observations of and interviews with the
client. A detailed report with analysis would be written and reported
constituting the study of this individual case. These studies may also
be quantitative in nature or include quantitative components in the research. For example, an ethological study of primate behavior in the wild may include measures of behavior durations, i.e., the amount of time an animal engaged in a specified behavior. This measure of time would be quantitative.
Correlational research: In general, correlational research examines
the covariation of two or more variables. For example, the early
research on cigarette smoking examined the covariation of cigarette smoking and a variety of lung diseases. These two variables, smoking and lung disease, were found to covary.
Correlational research can be accomplished by a variety of
techniques which include the collection of empirical data. Oftentimes, correlational research is considered a type of observational research as
nothing is manipulated by the experimenter or individual conducting
the research. For example, the early studies on cigarette smoking did
not manipulate how many cigarettes were smoked. The researcher
only collected the data on the two variables. Nothing was controlled
by the researchers.
It is important to note that correlational research is not causal research. In other words, we cannot make statements concerning cause and effect on the basis of this type of research. There are two major reasons why we cannot make cause and effect statements.
First, we don't know the direction of the cause. Second, a third
variable may be involved of which we are not aware. An example
may help clarify these points.
In major clinical depressions, the neurotransmitters serotonin and/
or norepinephrine have been found to be depleted (Coppen, 1967;
Schildkraut & Kety, 1967). In other words, low levels of these two
neurotransmitters have been found to be associated with increased
levels of clinical depression. However, while we know that the two
variables covary - a relationship exists - we do not know if a causal
relationship exists. Thus, it is unclear whether a depletion in serotonin/norepinephrine causes depression or whether depression causes a depletion in neurotransmitter levels. This demonstrates the first
problem with correlational research; we don't know the direction of
the cause. Second, a third variable has been uncovered which may be
affecting both of the variables under study. The number of receptors
on the postsynaptic neuron has been found to be increased in
depression (Segal, Kuczenski, & Mandell, 1974; Vetulani, Stawarz,
Dingell, & Sulser, 1976). Thus, it is possible that the increased number
of receptors on the postsynaptic neuron is actually responsible for
the relationship between neurotransmitter levels and depression. As
you can see from the discussion above, one cannot make a simple
cause and effect statement concerning neurotransmitter levels and
depression based on correlational research. To reiterate, it is
inappropriate in correlational research to make statements concerning
cause and effect.
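The covariation at the heart of correlational research can be sketched numerically. This is a minimal illustration, not part of the original text: the data and variable names are invented, and the computation is the standard Pearson correlation coefficient.

```python
# Hypothetical illustration: covariation between two variables.
# The numbers below are invented for demonstration only; they do not
# come from the smoking or depression studies discussed above.
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient: a measure of covariation."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# Variable A and Variable B covary strongly here, but the number
# says nothing about which causes which, or about third variables.
a = [1, 2, 3, 4, 5]
b = [2, 4, 5, 4, 6]
r = pearson_r(a, b)
print(round(r, 2))  # 0.85
```

A high r only establishes that the variables covary; as stressed above, it cannot tell us the direction of cause or rule out a third variable.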
True Experiments: The true experiment is often thought of as a
laboratory study. However, this is not always the case. A true
experiment is defined as an experiment conducted where an effort is
made to impose control over all other variables except the one under
study. It is often easier to impose this sort of control in a laboratory
setting. Thus, true experiments have often been erroneously identified
as laboratory studies.
To understand the nature of the experiment, we must first define
a few terms:
1. Experimental or treatment group - this is the group that receives
the experimental treatment, manipulation, or is different from the
control group on the variable under study.
2. Control group - this group is used to produce comparisons. The treatment of interest is deliberately withheld or manipulated to provide a baseline performance with which to compare the experimental or treatment group's performance.
3. Independent variable - this is the variable that the experimenter
manipulates in a study. It can be any aspect of the environment
that is empirically investigated for the purpose of examining its
influence on the dependent variable.
4. Dependent variable - the variable that is measured in a study. The
experimenter does not control this variable.
5. Random assignment - in a study, each subject has an equal
probability of being selected for either the treatment or control
group.
6. Double blind - neither the subject nor the experimenter knows whether the subject is in the treatment or the control condition.
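Random assignment (definition 5 above) can be sketched in a few lines of code. This is an illustrative sketch only; the subject IDs and group sizes are hypothetical.

```python
# Illustrative sketch of random assignment: each subject has an equal
# probability of ending up in either the treatment or the control group.
import random

def randomly_assign(subjects, seed=None):
    """Shuffle the subjects, then split them evenly into two groups."""
    rng = random.Random(seed)
    shuffled = subjects[:]          # copy so the original list is untouched
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]  # (treatment, control)

subjects = list(range(1, 81))       # e.g. 80 subjects, numbered 1-80
treatment, control = randomly_assign(subjects, seed=42)
print(len(treatment), len(control))  # 40 40
```

Because the split is random, any pre-existing trait ("friendly" children, say) should balance out across the two groups in expectation.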
Now that we have these terms defined, we can examine further
the structure of the true experiment. First, every experiment must
have at least two groups: an experimental and a control group. Each
group will receive a level of the independent variable. The dependent
variable will be measured to determine if the independent variable has
an effect. As stated previously, the control group will provide us with
a baseline for comparison. All subjects should be randomly assigned to groups, be tested as simultaneously as possible, and the experiment
should be conducted double blind. Perhaps an example will help clarify
these points.
Wolfer and Visintainer (1975) examined the effects of systematic
preparation and support on children who were scheduled for inpatient
minor surgery. The hypothesis was that such preparation would reduce
the amount of psychological upset and increase the amount of cooperation among these young patients. Eighty children were selected to participate in the study. Children were randomly assigned to either the treatment or the control condition. During their hospitalization the treatment group received the special program and the control group did not. Care was taken such that kids in the treatment and the control
groups were not roomed together. Measures that were taken included
heart rates before and after blood tests, ease of fluid intake, and self-
report anxiety measures. The study demonstrated that the systematic preparation and support reduced the difficulties of being in the hospital for these kids.
Let us examine now the features of the experiment described
above. First, there was a treatment and control group. If we had had
only the treatment group, we would have no way of knowing whether
the reduced anxiety was due to the treatment or the weather, new
hospital food, etc. The control group provides us with the basis to make comparisons. The independent variable in this study was the
presence or absence of the systematic preparation program. The
dependent variable consisted of the heart rates, fluid intake, and anxiety
measures. The scores on these measures were influenced by and
depended on whether the child was in the treatment or control group.
The children were randomly assigned to either group. If the "friendly"
children had been placed in the treatment group we would have no
way of knowing whether they were less anxious and more cooperative
because of the treatment or because they were "friendly". In theory,
the random assignment should balance the number of "friendly" children
between the two groups. The two groups were also tested at about
the same time. In other words, one group was not measured during
the summer and the other during the winter. By testing the two groups
as simultaneously as possible, we can rule out any bias due to time.
Finally, the children were unaware that they were participants in an
experiment (the parents had agreed to their children's participation in
research and the program), thus making the study single blind. If the
individuals who were responsible for the dependent measures were
also unaware of whether the child was in the treatment or control
group, then the experiment would have been double blind.
A special case of the true experiment is the clinical trial. A clinical
trial is defined as a carefully designed experiment that seeks to
determine the clinical efficacy of a new treatment or drug. The design
of a clinical trial is very similar to that of a true experiment. Once
again, there are two groups: a treatment group (the group that receives
the therapeutic agent) and a control group (the group that receives
the placebo). The control group is often called the placebo group.
The independent variable in the clinical trial is the level of the therapeutic
agent. Once again, subjects are randomly assigned to groups, they
are tested simultaneously, and the experiment should be conducted
double blind. In other words, neither the patient nor the person administering the drug should know whether the patient is receiving the drug or the placebo.
Quasi-Experiments: Quasi-experiments are very similar to true
experiments but use naturally formed or pre-existing groups. For
example, if we wanted to compare young and old subjects on lung
capacity, it is impossible to randomly assign subjects to either the
young or old group (naturally formed groups). Therefore, this cannot be a true experiment. When one has naturally formed groups, the
variable under study is a subject variable (in this case - age) as opposed
to an independent variable. As such, it also limits the conclusions we can draw from such a research study. If we were to conduct the
quasi-experiment, we would find that the older group had less lung
capacity as compared to the younger group. We might conclude that
old age thus results in less lung capacity. But other variables might
also account for this result. It might be that repeated exposure to
pollutants as opposed to age has caused the difference in lung capacity.
It could also be a generational factor. Perhaps more of the older group smoked in their early years as compared to the younger group, which came of age with greater awareness of the hazards of cigarettes. The point is that there are many differences between the groups that we cannot control that could account for differences in our dependent measures. Thus, we must be careful concerning making statements of causality with quasi-experimental designs.
Quasi-experiments may result from studying the differences
between naturally formed groups (i.e., young & old; men & women).
However, there are also instances when a researcher designs a study
as a traditional experiment only to discover that random assignment
to groups is restricted by outside factors. The researcher is forced to
divide groups according to some pre-existing criteria. For example, if
a corporation wanted to test the effectiveness of a new wellness
program, they might decide to implement their program at one site
and use a comparable site (no wellness program) as a control. As the
employees are not shuffled and randomly assigned to work at each
site, the study has pre-existing groups. After a few months of study,
the researchers could then see if the wellness site had less absenteeism
and lower health costs than the non-wellness site. The results are again restricted due to the quasi-experimental nature of the study. As the study has pre-existing groups, there may be other differences between those groups than just the presence or absence of a wellness program. For example, the wellness program may be in a significantly newer, more attractive building, or the manager from hell may work at the non-wellness program site. Either way, if a difference is found between the two sites it may or may not be due to the presence/absence of the wellness program.
To summarize, quasi-experiments may result from either studying naturally formed groups or using pre-existing groups. When the
study includes naturally formed groups, the variable under study is a
subject variable. When a study uses pre-existing groups that are not
naturally formed, the variable that is manipulated between the two
groups is an independent variable (With the exception of no random
assignment, the study looks similar in form to a true experiment). As
no random assignment exists in a quasi-experiment, no causal
statements can be made based on the results of the study.
Populations and Samples
When conducting research, one must often use a sample of the
population as opposed to using the entire population. Before we go
further into the reasons why, let us first discuss what differentiates a population from a sample.
A population can be defined as any set of persons/subjects having
a common observable characteristic. For example, all individuals who
reside in the United States make up a population. Also, all pregnant
women make up a population. The characteristics of a population are called parameters. A sample can be defined as any subset of the population. The characteristics of a sample are called statistics.
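The distinction can be made concrete with a toy computation; the numbers below are invented for illustration. A parameter describes the whole population, while a statistic describes only a sample drawn from it.

```python
# Invented toy data: a parameter is computed on the entire population,
# a statistic on a sample drawn from it.
import random

population = [160, 152, 171, 168, 155, 149, 163, 174, 158, 166]
parameter_mean = sum(population) / len(population)   # population mean (parameter)

rng = random.Random(0)
sample = rng.sample(population, 4)                   # a subset of the population
statistic_mean = sum(sample) / len(sample)           # sample mean (statistic)

print(parameter_mean)  # 161.6
```

In practice the parameter is usually unknown, and the statistic from a well-drawn sample is used to estimate it.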
Why Sample?
This brings us to the question of why sample. Why should we not use the population as the focus of study? There are at least four
major reasons to sample.
First, it is usually too costly to test the entire population. The
United States government spends millions of dollars to conduct the
U.S. Census every ten years. While the U.S. government may have
that kind of money, most researchers do not.
The second reason to sample is that it may be impossible to test
the entire population. For example, let us say that we wanted to test
the 5-HIAA (a serotonergic metabolite) levels in the cerebrospinal fluid (CSF) of depressed individuals. There are far too many individuals
who do not make it into the mental health system to even be identified
as depressed, let alone to test their CSF.
The third reason to sample is that testing the entire population
often produces error. Thus, sampling may be more accurate. Perhaps
an example will help clarify this point. Say researchers wanted to
examine the effectiveness of a new drug on Alzheimer's disease. One
dependent variable that could be used is an Activities of Daily Living
Checklist. In other words, it is a measure of functioning on a day to day basis. In this experiment, it would make sense to have as few people rating the patients as possible. If one individual rates the entire
sample, there will be some measure of consistency from one patient
to the next. If many raters are used, this introduces a source of error.
These raters may each use slightly different criteria for judging Activities
of Daily Living. Thus, as in this example, it would be problematic to
study an entire population.
The final reason to sample is that testing may be destructive. It
makes no sense to lesion the lateral hypothalamus of all rats to
determine if it has an effect on food intake. We can get that
information from operating on a small sample of rats. Also, you
probably would not want to buy a car that had the door slammed five hundred thousand times or had been crash tested. Rather, you
probably would want to purchase the car that did not make it into
either of those samples.

TYPES OF SAMPLING PROCEDURES


As stated above, a sample consists of a subset of the population.
Any member of the defined population can be included in a sample. A
theoretical list (an actual list may not exist) of individuals or elements that make up a population is called a sampling frame. There are five
major sampling procedures.
The first sampling procedure is convenience sampling. Volunteers, members of a class, and individuals in the hospital with the specific diagnosis being studied are examples of often-used convenience samples. This is by far the most often used sampling procedure. It is also by far the most biased sampling procedure, as it is not random (not everyone in the population has an equal chance of being selected to participate in the study). Thus, individuals who volunteer to participate in an exercise study may be different from individuals who do not volunteer.
Another form of sampling is the simple random sample. In this
method, all subjects or elements have an equal probability of being selected. There are two major ways of conducting a random sample.
The first is to consult a random number table, and the second is to
have the computer select a random sample.
A systematic sample is conducted by randomly selecting a first
case on a list of the population and then proceeding every Nth case
until your sample is selected. This is particularly useful if your list of
the population is long. For example, if your list was the phone book,
it would be easiest to start at perhaps the 17th person, and then select
every 50th person from that point on.
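The simple random and systematic procedures just described can be sketched as follows; the sampling frame and interval below are hypothetical.

```python
# Illustrative sketch of two sampling procedures on a hypothetical frame.
import random

names = [f"person_{i}" for i in range(1, 1001)]  # hypothetical sampling frame

# Simple random sample: every element has an equal chance of selection.
rng = random.Random(7)
simple_sample = rng.sample(names, 20)

# Systematic sample: pick a random starting case, then every 50th case.
start = rng.randrange(50)
systematic_sample = names[start::50]

print(len(simple_sample), len(systematic_sample))  # 20 20
```

The systematic approach only touches the list once, which is why it is convenient for long frames such as a phone book.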
Stratified sampling makes up the fourth sampling strategy. In a
stratified sample, we sample either proportionately or equally to
represent various strata or subpopulations. For example, if our strata were states, we would make sure to sample from each of the fifty states. If our strata were religious affiliation, stratified sampling would
ensure sampling from every religious block or grouping. If our strata
were gender, we would sample both men and women.
Cluster sampling makes up the final sampling procedure. In cluster
sampling we take a random sample of strata and then survey every
member of the group. For example, if our strata were individual schools in the St. Louis Public School System, we would randomly
select perhaps 20 schools and then test all of the students within
those schools.
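Stratified and cluster sampling can likewise be sketched in code; the schools and group sizes below are invented for illustration.

```python
# Hypothetical population: each subject is tagged with a stratum (a school).
import random

population = [(f"student_{i}", f"school_{i % 5}") for i in range(100)]
rng = random.Random(3)

# Group the population by stratum.
strata = {}
for student, school in population:
    strata.setdefault(school, []).append(student)

# Stratified sample: draw from EVERY stratum so each is represented.
stratified = [s for members in strata.values() for s in rng.sample(members, 4)]

# Cluster sample: randomly pick whole strata, then take every member.
chosen_schools = rng.sample(sorted(strata), 2)
cluster = [s for school in chosen_schools for s in strata[school]]

print(len(stratified), len(cluster))  # 20 40
```

The contrast is the key point: stratified sampling samples within every group, while cluster sampling samples the groups themselves and then surveys them exhaustively.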
Research Methodology
The system of collecting data for research projects is known as
research methodology. The data may be collected for either theoretical or practical research; for example, management research may be strategically conceptualized along with operational planning methods and change management.
Some important factors in research methodology include the validity of research data, ethics, and the reliability of measures; most of your work is finished by the time you finish the analysis of your data. Formulation of research questions, along with sampling (whether probability or non-probability), is followed by measurement that includes
surveys and scaling. This is followed by research design, which may be either experimental or quasi-experimental. The last two stages are
data analysis and finally writing the research paper, which is organised
carefully into graphs and tables so that only important relevant data is
shown.
The goal of the research process is to produce new knowledge
or deepen understanding of a topic or issue. This process takes three
main forms (although, as previously discussed, the boundaries between
them may be obscure):
1. Exploratory research, which helps to identify and define a problem
or question.
2. Constructive research, which tests theories and proposes solutions
to a problem or question.
3. Empirical research, which tests the feasibility of a solution using
empirical evidence.
There are two ways to conduct research:
1. Primary research, using primary sources, i.e., original documents and data.
2. Secondary research, using secondary sources, i.e., a synthesis of, interpretation of, or discussions about primary sources.
There are two major research designs: qualitative research and quantitative research. Researchers choose one of these two tracks according to the nature of the research problem they want to observe and the research questions they aim to answer:

Qualitative research
Understanding of human behavior and the reasons that govern
such behavior. Asking a broad question and collecting word-type data
that is analyzed searching for themes. This type of research looks to
describe a population without attempting to quantifiably measure
variables or look to potential relationships between variables. It is
viewed as more restrictive in testing hypotheses because it can be
expensive and time consuming, and typically limited to a single set of
research subjects. Qualitative research is often used as a method of
Introduction to Research and Research Methods 15

exploratory research as a basis for later quantitative research


hypotheses.

Quantitative research
Systematic empirical investigation of quantitative properties and
phenomena and their relationships. Asking a narrow question and
collecting numerical data to analyze utilizing statistical methods. The
quantitative research designs are experimental, correlational, and
survey (or descriptive). Statistics derived from quantitative research
can be used to establish the existence of associative or causal
relationships between variables.
Quantitative data collection methods rely on random sampling
and structured data collection instruments that fit diverse experiences
into predetermined response categories. These methods produce results
that are easy to summarize, compare, and generalize. Quantitative
research is concerned with testing hypotheses derived from theory
and/or being able to estimate the size of a phenomenon of interest.
Depending on the research question, participants may be randomly
assigned to different treatments (this is the only way that a quantitative
study can be considered a true experiment). If this is not feasible, the
researcher may collect data on participant and situational
characteristics in order to statistically control for their influence on
the dependent, or outcome, variable. If the intent is to generalize from
the research participants to a larger population, the researcher will
employ probability sampling to select participants.

Bibliometrics
Bibliometrics is a type of research method used in library and
information science. It utilizes quantitative analysis and statistics to
describe patterns of publication within a given field or body of literature.
Researchers may use bibliometric methods of evaluation to determine
the influence of a single writer, for example, or to describe the
relationship between two or more writers or works. One common
way of conducting bibliometric research is to use the Social Science
Citation Index, the Science Citation Index or the Arts and Humanities
Citation Index to trace citations.
Laws of Bibliometrics
One of the main areas in bibliometric research concerns the
application of bibliometric laws. The three most commonly used laws
in bibliometrics are: Lotka's law of scientific productivity, Bradford's
law of scatter, and Zipf's law of word occurrence.
Lotka's Law
Lotka's Law describes the frequency of publication by authors in
a given field. It states that " . . . the number (of authors) making n
contributions is about 1/n² of those making one; and the proportion of
all contributors, that make a single contribution, is about 60 percent"
(Lotka 1926, cited in Potter 1988). This means that out of all the
authors in a given field, 60 percent will have just one publication, and
15 percent will have two publications (1/2² times .60). 7 percent of
authors will have three publications (1/3² times .60), and so on.
According to Lotka's Law of scientific productivity, only six percent
of the authors in a field will produce more than 10 articles. Lotka's
Law, when applied to large bodies of literature over a fairly long period
of time, can be accurate in general, but not statistically exact. It is
often used to estimate the frequency with which authors will appear
in an online catalog (Potter 1988).
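The inverse-square relationship above can be sketched in a few lines of Python; this is a minimal illustration of the proportions quoted in the text, not part of the original source:

```python
# A minimal sketch of Lotka's inverse-square law of scientific
# productivity: the share of authors making n contributions is about
# 1/n^2 of the share making one, with single-contribution authors at
# roughly 60 percent of the total.

def lotka_proportion(n, single_share=0.60):
    """Expected share of all authors who make exactly n contributions."""
    return single_share / (n ** 2)

# Reproduce the figures quoted in the text.
print(round(lotka_proportion(1), 2))  # 0.6  -> 60% make one contribution
print(round(lotka_proportion(2), 2))  # 0.15 -> 15% make two
print(round(lotka_proportion(3), 2))  # 0.07 -> about 7% make three
```

As the text notes, these figures hold only approximately for real literatures, so a sketch like this is a rule-of-thumb estimate rather than an exact prediction.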
Bradford's Law
Bradford's Law serves as a general guideline to librarians in
determining the number of core journals in any given field. It states
that journals in a single field can be divided into three parts, each
containing the same number of articles: 1) a core of journals on the
subject, relatively few in number, that produces approximately one-
third of all the articles, 2) a second zone, containing the same number
of articles as the first, but a greater number of journals, and 3) a third
zone, containing the same number of articles as the second, but a still
greater number of journals. The mathematical relationship of the
number of journals in the core to the first zone is a constant n and to
the second zone the relationship is n². Bradford expressed this
relationship as 1:n:n². Bradford formulated his law after studying a
bibliography of geophysics, covering 326 journals in the field. He
discovered that 9 journals contained 429 articles, 59 contained 499
articles, and 258 contained 404 articles. So it took 9 journals to
contribute one-third of the articles, 5 times 9, or 45, to produce the
next third, and 5 times 5 times 9, or 225, to produce the last third. As
may be seen, Bradford's Law is not statistically accurate, strictly
speaking. But it is still commonly used as a general rule of thumb
(Potter 1988).
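The 1:n:n² zone relationship can be verified against the geophysics figures quoted above; the following sketch simply generates the journal counts for the three zones:

```python
# A sketch of Bradford's 1:n:n^2 zone relationship, using the
# geophysics figures quoted in the text (a core of 9 journals, n = 5).
# Each zone contributes roughly one-third of the articles.

def bradford_zones(core_journals, multiplier):
    """Journal counts for the three Bradford zones."""
    return [core_journals * multiplier ** k for k in range(3)]

print(bradford_zones(9, 5))  # [9, 45, 225]
```

The multiplier n is an empirical constant that varies by field; Bradford's own data give roughly n = 5 for geophysics.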
Zipf's Law
Zipf's Law is often used to predict the frequency of words within
a text. The Law states that in a relatively lengthy text, if you "list the
words occurring within that text in order of decreasing frequency,
the rank of a word on that list multiplied by its frequency will equal a
constant." The equation for this relationship is: r × f = k, where r is the
rank of the word, f is the frequency, and k is the constant (Potter
1988). Zipf illustrated his law with an analysis of James Joyce's
Ulysses. "He showed that the tenth most frequent word occurred
2,653 times, the hundredth most frequent word occurred 265 times,
the two hundredth word occurred 133 times, and so on. Zipf found,
then, that the rank of the word multiplied by the frequency of the
word equals a constant that is approximately 26,500" (Potter 1988).
Zipf's Law, again, is not statistically perfect, but it is very useful for
indexers.
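The r × f = k relationship can be checked directly against the Ulysses figures quoted above; this short sketch shows that the products cluster near the constant of 26,500:

```python
# A sketch of Zipf's law, r * f = k, checked against the Ulysses
# figures quoted in the text (constant approximately 26,500).

ulysses_samples = {10: 2653, 100: 265, 200: 133}  # rank -> frequency

for rank, freq in ulysses_samples.items():
    # Each product falls within a few hundred of 26,500.
    print(rank, rank * freq)
```

As with the other bibliometric laws, the constant holds only approximately: the three products here are 26,530, 26,500, and 26,600.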
Citation Analysis
Another major area of bibliometric research uses various methods
of citation analysis in order to establish relationships between authors
or their work. Here is a definition of citation analysis, and definitions
of co-citation coupling and bibliographic coupling, which are specific
kinds of citation analysis.
Citation Analysis
When one author cites another author, a relationship is established.
Citation analysis uses citations in scholarly works to establish links.
Many different links can be ascertained, such as links between authors,
between scholarly works, between journals, between fields, or even
between countries. Citations both from and to a certain document
may be studied. One very common use of citation analysis is to
determine the impact of a single author on a given field by counting
the number of times the author has been cited by others. One possible
drawback of this approach is that authors may be citing the single
author in a negative context (saying that the author doesn't know
what s/he's talking about, for instance) (Osareh 1996).
Co-citation Coupling
Co-citation coupling is a method used to establish a subject
similarity between two documents. If papers A and B are both cited
by paper C, they may be said to be related to one another, even though
they don't directly cite each other. If papers A and B are both cited by
many other papers, they have a stronger relationship. The more papers
they are cited by, the stronger their relationship is.
Bibliographic Coupling
Bibliographic coupling operates on a similar principle, but in a
way it is the mirror image of co-citation coupling. Bibliographic coupling
links two papers that cite the same articles, so that if papers A and B
both cite paper C, they may be said to be related, even though they
don't directly cite each other. The more papers they both cite, the
stronger their relationship is.
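The two coupling measures above can be contrasted in a short sketch. The paper labels and citation data below are hypothetical, chosen only to mirror the A/B/C example in the text:

```python
# A minimal sketch contrasting co-citation coupling (papers cited
# together by other papers) with bibliographic coupling (papers that
# share references). All paper labels here are hypothetical.

# Each paper is mapped to the set of papers it cites.
references = {
    "C": {"A", "B"},
    "D": {"A", "B"},
    "A": {"X", "Y"},
    "B": {"X", "Z"},
}

def cocitation_strength(p1, p2, refs):
    """Number of papers that cite both p1 and p2."""
    return sum(1 for cited in refs.values() if p1 in cited and p2 in cited)

def bibliographic_strength(p1, p2, refs):
    """Number of references shared by p1 and p2."""
    return len(refs[p1] & refs[p2])

print(cocitation_strength("A", "B", references))     # 2: cited together by C and D
print(bibliographic_strength("A", "B", references))  # 1: both cite X
```

The mirror-image relationship is visible in the code: co-citation looks at who cites a pair of papers, while bibliographic coupling looks at what the pair itself cites.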
Web Applications of Bibliometrics
Recently, a new growth area in bibliometrics has been in the
emerging field of webmetrics, or cybermetrics as it is often called.
Webmetrics can be defined as using of bibliometric techniques in
order to study the relationship of different sites on the World Wide
Web. Such techniques may also be used to map out (called "scientific
mapping" in traditional bibliometric research) areas of the Web that
appear to be most useful or influential, based on the number of times
they are hyperlinked to other Web sites.
Survey Methods
The survey is a non-experimental, descriptive research method.
Surveys can be useful when a researcher wants to collect data on
phenomena that cannot be directly observed (such as opinions on
library services). Surveys are used extensively in library and
information science to assess attitudes and characteristics of a wide
range of subjects, from the quality of user-system interfaces to library
user reading habits. In a survey, researchers sample a population.
Busha and Harter (1980) state that "a population is any set of persons
or objects that possesses at least one common characteristic."
Examples of populations that might be studied are 1) all 1999 graduates
of GSLIS at the University of Texas, or 2) all the users of UT General
Libraries. Since populations can be quite large, researchers directly
question only a sample (i.e. a small proportion) of the population.
Types of Surveys
Data are usually collected through the use of questionnaires,
although sometimes researchers directly interview subjects. Surveys
can use qualitative (e.g. ask open-ended questions) or quantitative
(e.g. use forced-choice questions) measures. There are two basic
types of surveys: cross-sectional surveys and longitudinal surveys.
Much of the following information was taken from an excellent book
on the subject, called Survey Research Methods, by Earl R. Babbie.
Cross-Sectional Surveys
Cross-sectional surveys are used to gather information on a
population at a single point in time. An example of a cross-sectional
survey would be a questionnaire that collects data on how parents feel
about Internet filtering, as of March of 1999. A different cross-sectional
survey questionnaire might try to determine the relationship between
two factors, like religiousness of parents and views on Internet
filtering.
Longitudinal Surveys
Longitudinal surveys gather data over a period of time. The
researcher may then analyze changes in the population and attempt to
describe and/or explain them. The three main types of longitudinal
surveys are trend studies, cohort studies, and panel studies.
Trend Studies
Trend studies focus on a particular population, which is sampled
and scrutinized repeatedly. While samples are of the same population,
they are typically not composed of the same people. Trend studies,
since they may be conducted over a long period of time, do not have
to be conducted by just one researcher or research project. A researcher
may combine data from several studies of the same population in
order to show a trend. An example of a trend study would be a yearly
survey of librarians asking about the percentage of reference questions
answered using the Internet.
Cohort Studies
Cohort studies also focus on a particular population, sampled and
studied more than once. But cohort studies have a different focus.
For example, a sample of 1999 graduates of GSLIS at the University
of Texas could be questioned regarding their attitudes toward
paraprofessionals in libraries. Five years later, the researcher could
question another sample of 1999 graduates, and study any changes in
attitude. A cohort study would sample the same class, every time. If
the researcher studied the class of 2004 five years later, it would be a
trend study, not a cohort study.
Panel Studies
Panel studies allow the researcher to find out why changes in the
population are occurring, since they use the same sample of people
every time. That sample is called a panel. A researcher could, for
example, select a sample of UT graduate students, and ask them
questions on their library usage. Every year thereafter, the researcher
would contact the same people, and ask them similar questions, and
ask them the reasons for any changes in their habits. Panel studies,
while they can yield extremely specific and useful explanations, can
be difficult to conduct. They tend to be expensive, they take a lot of
time, and they suffer from high attrition rates. Attrition is what occurs
when people drop out of the study.
Instrument Design
One criticism of library surveys is that they are often poorly
designed and administered (Busha and Harter 1980), resulting in data
that are not very accurate but are nonetheless energetically quoted and
used to make important decisions. Surveys should be just as
rigorously designed and administered as any other research method.
Meyer (1998) has identified five preliminary steps that should be taken
when embarking upon any research project: 1) choose a topic, 2)
review the literature, 3) determine the research question, 4) develop a
hypothesis, and 5) operationalization (i.e., figure out how to accurately
measure the factors you wish to measure). For research using surveys,
two additional considerations are of prime importance: representative
sampling and question design. Much of the following information
was taken from the book Research Methods in Librarianship:
Techniques and Interpretation by Charles H. Busha and Stephen P.
Harter.
Representative Sampling
A sample is representative when it is an accurate proportional
representation of the population under study. If you want to study the
attitudes of UT students regarding library services, it would not be
enough to interview every 100th person who walked into the library.
That technique would only measure the attitudes of UT students who
use the library, not those who do not. In addition, it would only measure
the attitudes of UT students who happened to use the library during
the time you were collecting data. Therefore, the sample would not
be very representative of UT students in general. In order to be a truly
representative sample, every student at UT would have to have had
an equal chance of being chosen to participate in the survey. This is
called randomization.
If you stood in front of the student union and walked up to
students, asking them questions, you still would not have a random
sample. You would only be questioning students who happened to
come to campus that day, and further, those that happened to walk
past the student union. Those students who never walk that way
would have had no chance of being questioned. In addition, you might
unintentionally be biased as to who you question. You might
unconsciously choose not to question students who look preoccupied
or busy, or students who don't look like friendly people. This would
invalidate your results, since your sample would not be randomly
selected.
If you took a list of UT students, uploaded it onto a computer,
then instructed the computer to randomly generate a list of 2 percent
of all UT students, then your sample still might not be representative.
What if, purely by chance, the computer did not include the correct
proportion of seniors, or honors students, or graduate students? In
order to further ensure that the sample is truly representative of the
population, you might want to use a sampling technique called
stratification. In order to stratify a population, you need to decide
what sub-categories of the population might be statistically significant.
For instance, graduate students as a group probably have different
opinions than undergraduates regarding library usage, so they should
be recognized as separate strata of the population. Once you have a
list of the different strata, along with their respective percentages,
you could instruct the computer to again randomly select students,
this time taking care that a certain percentage are graduate students, a
certain percentage are honors students, and a certain percentage are
seniors. You would then come up with a more truly representative
sample.
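The stratification procedure described above can be sketched in a few lines. The student records below are hypothetical, and the 2 percent figure from the text is replaced here by a 10 percent fraction purely to keep the example small:

```python
# A sketch of proportional stratified random sampling: the population
# is split into strata (here, graduate vs. undergraduate students) and
# each stratum is sampled in proportion to its size. The student data
# are hypothetical.
import random

def stratified_sample(population, strata_key, fraction, seed=0):
    """Draw a proportional random sample from each stratum."""
    rng = random.Random(seed)
    strata = {}
    for person in population:
        strata.setdefault(strata_key(person), []).append(person)
    sample = []
    for members in strata.values():
        k = max(1, round(len(members) * fraction))
        sample.extend(rng.sample(members, k))
    return sample

# 100 hypothetical students: every fifth one is a graduate student.
students = [{"id": i, "level": "grad" if i % 5 == 0 else "undergrad"}
            for i in range(100)]
picked = stratified_sample(students, lambda s: s["level"], 0.10)
print(len(picked))  # 10 students: 2 graduate + 8 undergraduate
```

Because each stratum is sampled separately, the sample is guaranteed to contain graduate and undergraduate students in the same proportion as the population, which a purely random draw cannot promise.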
Question Design
It is important to design questions very carefully. A poorly designed
questionnaire renders results meaningless. There are many factors to
consider. Babbie gives the following pointers:
1. Make items clear (don't assume the person you are questioning
knows the terms you are using).
2. Avoid double-barreled questions (make sure the question asks
only one clear thing).
3. Respondent must be competent to answer (don't ask questions
that the respondent won't accurately be able to answer).
4. Questions should be relevant (don't ask questions on topics that
respondents don't care about or haven't thought about).
5. Short items are best (so that they may be read, understood, and
answered quickly).
6. Avoid negative items (if you ask whether librarians should not be
paid more, it will confuse respondents).
7. Avoid biased items and terms (be sensitive to the effect of your
wording on respondents).
Busha and Harter provide the following list of 10 hints:
1. Unless the nature of a survey definitely warrants their usage,
avoid slang, jargon, and technical terms.
2. Whenever possible, develop consistent response methods.
3. Make questions as impersonal as possible.
4. Do not bias later responses by the wording used in earlier
questions.
5. As an ordinary rule, sequence questions from the general to the
specific.
6. If closed questions are employed, try to develop exhaustive and
mutually exclusive response alternatives.
7. Insofar as possible, place questions with similar content together
in the survey instrument.
8. Make the questions as easy to answer as possible.
9. When unique and unusual terms need to be defined in questionnaire
items, use very clear definitions.
10. Use an attractive questionnaire format that conveys a professional
image.
As may be seen, designing good questions is much more difficult
than it seems. One effective way of making sure that questions
measure what they are supposed to measure is to test them out first,
using small focus groups.
The Historical Approach to Research
The process of learning and understanding the background and
growth of a chosen field of study or profession can offer insight into
organizational culture, current trends, and future possibilities. The
historical method of research applies to all fields of study because it
encompasses their: origins, growth, theories, personalities, crisis, etc.
Both quantitative and qualitative variables can be used in the collection
of historical information. Once the decision is made to conduct
historical research, there are steps that should be followed to achieve
a reliable result. Charles Busha and Stephen Harter detail six steps for
conducting historical research (91):
1. The recognition of a historical problem or the identification of a
need for certain historical knowledge.
2. The gathering of as much relevant information about the problem
or topic as possible.
3. If appropriate, the forming of hypotheses that tentatively explain
relationships between historical factors.
4. The rigorous collection and organization of evidence, and the
verification of the authenticity and veracity of information and its
sources.
5. The selection, organization, and analysis of the most pertinent
collected evidence, and the drawing of conclusions; and
6. The recording of conclusions in a meaningful narrative.
In the field of library and information science, there is a vast
array of topics that may be considered for historical research. For
example, a researcher may choose to answer questions about the
development of school, academic, or public libraries; the rise of
technology and the benefits/problems it brings; the development
of preservation methods; famous personalities in the field; library
statistics; or geographical demographics and how they affect library
distribution. Harter and Busha define library history as "the systematic
recounting of past events pertaining to the establishment, maintenance,
and utilization of systematically arranged collections of recorded
information or knowledge…. A biography of a person who has in
some way affected the development of libraries, library science, or
librarianship is also considered to be library history" (93).
There are a variety of places to obtain historical information.
Primary Sources are the most sought after in historical research.
Primary resources are first-hand accounts of information. "Finding
and assessing primary historical data is an exercise in detective work.
It involves logic, intuition, persistence, and common
sense…" (Tuchman, Gaye, in Strategies of Qualitative Inquiry, 252).
Some examples of primary documents are: personal diaries, eyewitness
accounts of events, and oral histories. "Secondary sources of
information are records or accounts prepared by someone other than
the person, or persons, who participated in or observed an event."
Secondary resources can be very useful in giving a researcher a grasp
on a subject and may provide extensive bibliographic information
for delving further into a research topic.
In any type of historical research, there are issues to consider.
Harter and Busha list three principles to consider when conducting
historical research (99-100):
1. Consider the slant or biases of the information you are working
with and the ones possessed by the historians themselves.
This is particularly true of qualitative research. Consider an
example provided by Gaye Tuchman:
Let us assume that women's letters and diaries are pertinent to
one's research question and that one can locate pertinent examples.
One cannot simply read them…. One must read enough examples to
infer the norms of what could be written and how it could be expressed.
For instance, in the early nineteenth century, some (primarily female)
schoolteachers instructed girls in journal writing and read their journals
to do so. How would such instruction have influenced the journals
kept by these girls as adults?…it is useful to view the nineteenth-
century journal writer as an informant. Just as one tries to understand
how a contemporary informant speaks from specific social location,
so too one would want to establish the social location of the historical
figure. One might ask of these and other diaries: What is the
characteristic of middle-class female diary writers? What is the
characteristic of this informant? How should one view what this
informant writes?
Quantitative facts may also be biased in the types of statistical
data collected or in how that information was interpreted by the
researcher.
2. There are many factors that can contribute to "historical episodes".
3. Evidence should not be examined from a singular point of view.
The resources that follow this brief introduction to the historical
method in research provide resources for further in-depth explanations
about this research method in various fields of study, and abstracts of
studies conducted using this method.
Content Analysis
Bernard Berelson defined Content Analysis as "a research
technique for the objective, systematic, and quantitative description
of manifest content of communications" (Berelson, 74). Content
analysis is a research tool focused on the actual content and internal
features of media. It is used to determine the presence of certain
words, concepts, themes, phrases, characters, or sentences within
texts or sets of texts and to quantify this presence in an objective
manner. Texts can be defined broadly as books, book chapters, essays,
interviews, discussions, newspaper headlines and articles, historical
documents, speeches, conversations, advertising, theater, informal
conversation, or really any occurrence of communicative language.
To conduct a content analysis on a text, the text is coded, or broken
down, into manageable categories on a variety of levels--word, word
sense, phrase, sentence, or theme--and then examined using one of
content analysis' basic methods: conceptual analysis or relational
analysis. The results are then used to make inferences about the
messages within the text(s), the writer(s), the audience, and even the
culture and time of which these are a part. For example, Content
Analysis can indicate pertinent features such as comprehensiveness
of coverage or the intentions, biases, prejudices, and oversights of
authors, publishers, as well as all other persons responsible for the
content of materials.
Content analysis is a product of the electronic age. Though
content analysis was regularly performed in the 1940s, it has been a
more credible and frequently used research method since the mid-
1950s, as researchers started to focus on concepts rather than simply
words, and on semantic relationships rather than just presence (de
Sola Pool, 1959).
Uses of Content Analysis
Due to the fact that it can be applied to examine any piece of
writing or occurrence of recorded communication, content analysis
is used in a large number of fields, ranging from marketing and media
studies, to literature and rhetoric, ethnography and cultural studies,
gender and age issues, sociology and political science, psychology
and cognitive science, as well as other fields of inquiry. Additionally,
content analysis reflects a close relationship with socio- and
psycholinguistics, and is playing an integral role in the development
of artificial intelligence. The following list (adapted from Berelson,
1952) offers more possibilities for the uses of content analysis:
1. Reveal international differences in communication content
2. Detect the existence of propaganda
3. Identify the intentions, focus or communication trends of an
individual, group or institution
4. Describe attitudinal and behavioral responses to communications
5. Determine psychological or emotional state of persons or groups
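The coding step described above, breaking a text into categories and counting them, can be sketched for the simplest case, word-level conceptual analysis. The sample text and concept list are hypothetical:

```python
# A minimal sketch of word-level conceptual content analysis: coding
# a text by counting occurrences of predefined concept terms. The
# sample text and concept list are hypothetical.
import re
from collections import Counter

def code_text(text, concepts):
    """Count how often each concept term appears in the text."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(words)
    return {concept: counts[concept] for concept in concepts}

sample = "The library expanded its library services; users praised the services."
print(code_text(sample, ["library", "services", "users"]))
# {'library': 2, 'services': 2, 'users': 1}
```

Relational analysis would go a step further and examine how such concepts co-occur within sentences or passages, rather than merely tallying their presence.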

DISCOURSE ANALYSIS
It is difficult to give a single definition of Critical or Discourse
Analysis as a research method. Indeed, rather than providing a particular
method, Discourse Analysis can be characterized as a way of
approaching and thinking about a problem. In this sense, Discourse
Analysis is neither a qualitative nor a quantitative research method,
but a manner of questioning the basic assumptions of quantitative and
qualitative research methods. Discourse Analysis does not provide a
tangible answer to problems based on scientific research, but it enables
access to the ontological and epistemological assumptions behind a
project, a statement, a method of research, or - to provide an example
from the field of Library and Information Science - a system of
classification. In other words, Discourse Analysis enables us to reveal
the hidden motivations behind a text or behind the choice of a particular
method of research to interpret that text. Expressed in today's more
trendy vocabulary, Critical or Discourse Analysis is nothing more
than a deconstructive reading and interpretation of a problem or text
(while keeping in mind that postmodern theories conceive of every
interpretation of reality and, therefore, of reality itself as a text. Every
text is conditioned and inscribes itself within a given discourse, thus
the term Discourse Analysis). Discourse Analysis will, thus, not provide
absolute answers to a specific problem, but enable us to understand
the conditions behind a specific "problem" and make us realize that
the essence of that "problem", and its resolution, lie in its assumptions;
the very assumptions that enable the existence of that "problem". By
enabling us to make these assumptions explicit, Discourse Analysis
aims at allowing us to view the "problem" from a higher stance and to
gain a comprehensive view of the "problem" and ourselves in relation
to that "problem". Discourse Analysis is meant to provide a higher
awareness of the hidden motivations in others and ourselves and,
therefore, enable us to solve concrete problems - not by providing
unequivocal answers, but by making us ask ontological and
epistemological questions.
Though critical thinking about and analysis of situations/texts is
as ancient as mankind or philosophy itself, and no method or theory
as such, Discourse Analysis is generally perceived as the product of
the postmodern period. The reason for this is that while other periods
or philosophies are generally characterized by a belief-system or
meaningful interpretation of the world, postmodern theories do not
provide a particular view of the world, other than that there is no one true
view or interpretation of the world. In other words, the postmodern
period is distinguished from other periods (Renaissance, Enlightenment,
Modernism, etc.) in the belief that there is no meaning, that the world
is inherently fragmented and heterogeneous, and that any sense making
system or belief is mere subjective interpretation - and an interpretation
that is conditioned by its social surrounding and the dominant discourse
of its time. Postmodern theories, therefore, offer numerous readings
aiming at "deconstructing" concepts, belief-systems, or generally held
social values and assumptions. Some of the most commonly used
theories are those of Jacques Derrida (who coined the term
"deconstruction"), Michel Foucault, Julia Kristeva, Jean-Francois
Lyotard, and Fredric Jameson (this extremely brief listing of a few
critical thinkers is neither comprehensive nor reflecting a value
judgment; these are merely some of the most common names
encountered when studying postmodern theories).
Critical thinking, however, is older than postmodern thought, as
the following quote by John Dewey illustrates. Dewey defined the
nature of reflective thought as "active, persistent, and careful
consideration of any belief or supposed form of knowledge in the
light of the grounds that support it and the further conclusion to
which it tends" (Dewey, J. Experience and Education. New York:
Macmillan, 1933. Page 9). When critically evaluating a research project
or text, one should, therefore, not limit oneself to postmodern theories.
Structural Analysis
Structuralism, from which Structural Analysis derives, is the
methodological principle that human culture is made up of systems in
which a change in any element produces changes in the others. Four
basic types of theoretical or critical activities have been regarded as
structuralist: the use of language as a structural model, the search for
universal functions or actions in texts, the explanation of how meaning
is possible, and the post-structuralist denial of objective meaning. In
the field of literature, in which Structuralism and Post-Structuralism
have gained particular importance, Structuralism seeks to explain the
structures underlying literary texts either in terms of a grammar
modeled on that of language or in terms of Ferdinand de Saussure's
principle that the meaning of each word depends on its place in the
total system of language.
Though limited to literature, this definition from the Dictionary of
Concepts in Literary Criticism and Theory provides an understanding
of what Structuralism or Structural Analysis is about. The French
theorist Roland Barthes expands this definition by characterizing
Structuralism in terms of its reconstitutive activity:
"The goal of all structuralist activity, whether reflexive or poetic,
is to reconstruct an 'object,' in such a way as to manifest thereby the
rules of functioning (the 'functions') of this object. The structure is
therefore actually a simulacrum of the object, but it is a directed,
interested simulacrum, since the imitated object makes something
appear which remained invisible or, if one prefers, unintelligible in the
natural object" (Barthes, 1963).
For Jean-Marie Benoist,
"An analysis is structural if, and only if, it displays the content as
a model, i.e., if it can isolate a formal set of elements and relations in
terms of which it is possible to argue without entering upon the
significance of the given content" (Benoist, 8).
In other words, Structuralism is not concerned with the content
of a text or any other kind of system; rather, it analyzes and explores
the structures underlying the text or system, which make the content
possible. One of the leading principles of Structuralism is that the
form defines the content ("form is content"). That is, that the
underlying structure of a text or system, which presents and organizes
the content, determines the nature of that content as well as its message
or communicated information. Thus Structuralism analyzes how
meaning is possible and how it is transmitted - regardless of the actual
meaning.
According to Claude Lévi-Strauss, as well as other Structuralist
thinkers in linguistics, anthropology, psychology, biology, and other
disciplines, the human mind is structured to operate in certain ways,
and this structure determines the way we think and operate, regardless
of the discipline we work in, the culture we live in, or the language
we speak. The view that there is in man an innate, genetically
transmitted and determined mechanism that acts as a structuring force
is one of the underlying premises of Structuralism and, though this
view is far from reaching consensus among Structuralist thinkers, it
has led to the belief that there are permanent structures in
our minds that determine who we are and what we can be. In this
sense, this view of Structuralism is simply based on the application of
structuralist principles to the human mind.
Whether these principles can be applied only to texts, science,
research methods, systems, etc., or be expanded to the human mind
remains to be seen. However, this debate illustrates the basic premises
of Structuralism and their universal application. Like Discourse or
Critical Analysis, Structural Analysis (which can be considered part
of Discourse Analysis) may be applied to any discipline. What distinguishes
Structuralism from Discourse Analysis is its scientific claim or, rather,
its focus on underlying structures instead of content. Through this
focus, Structuralism claims to preserve a certain level of objectivity
in its analysis. Structuralism has turned into Post-Structuralism and
many of the thinkers who were previously considered Structuralists
are now labeled Post-Structuralists. This is the case with Michel Foucault,
Derrida, Barthes, and Lacan, among others. Again, this illustrates
the close kinship between Structuralism and Discourse Analysis and
that theories and philosophies are not easily classified and distinguished
from each other. Suffice it to note here, that Discourse Analysis is a
broader concept than Structuralism and that current theories of
Discourse Analysis rely upon the premises established by Structuralism.
It should also be noted that Structural Analysis plays an important
role in the fields of Engineering and Chemistry and other "hard"
sciences. While the principles are basically the same, structural analysis
in these fields is probably not surrounded by the same controversy
and the term "Structuralism" probably does not apply in the same
manner as in the Humanities and Social Sciences.
Advantages and Disadvantages
Structural Analysis can be used to study any kind of system,
text, or material. It applies equally to the Humanities and Social Sciences
as well as to the "hard" Sciences, though with different connotations.
The methods of Structural Analysis might be different in each discipline.
For example, Structural Analysis in Linguistics or Psychology might
differ from Structural Analysis in Literature or the study of information
retrieval and organization. The basic premises, however, are the same.
As with all other methods of research, the validity of the conclusions
obtained through structural analysis depends on the quality and rigor
of the study. In the Social Sciences, the validity of Structural Analysis
may rest on quantifiable and verifiable research; though this may also
be the case in the Humanities, the construction of the argument might
have more importance. The major advantage of Structural Analysis is
that it enables an awareness of underlying structures and reveals their
limiting and conditioning nature. However, it does not enable analysis
of the content. Another disadvantage is that the search for ultimate
and final structures (especially in Psychology and Anthropology) may
stifle innovation and enhancement (not to mention its limiting character
with regard to human psychology and interaction).