
Allama Iqbal Open University Islamabad

Department of English Language and Applied Linguistics


Submitted by: ISHRAT NAZ
Roll No: CA565977, Semester: Spring 2020

Course: Research Methodology (5669)

Address: Dist. and Tehsil Swabi, Moh. Karam Khail, Post Office Kala Khuro

Phone Number: 0314-9763829

Submitted to: Hafiz Ghulam Mustafa Javeed

Phone Number: 0343-4431166

ASSIGNMENT No. 1
Q.1 Define research (what research is and what research is not) and describe the basic
process/mechanics of research.

Ans. Research: Research is a systematic inquiry to describe, explain, predict, and control the
observed phenomenon. It involves both inductive and deductive methods.

What research is and what research is not: The popular understanding of the term "research" is incorrect
and somewhat misleading. It is quite common to assume that the word simply means gathering information,
such as browsing the Internet or books in search of information about a topic. Some even use
"research" as a marketing ploy to back up claims of product or service superiority. But research is much
more than copying and pasting information: it involves the human mind and requires much thought,
organization, and method. Research is the deliberate and planned quest for meaning and truth.

According to the authors, the beginning of research is a problem or a question. The existence of this
question or problem should lead the researcher to formulate a clearly stated and defined goal for the
research. This goal, in turn, should guide the formation of a plan to reach a solution to the problem or an
answer to the question. In order to achieve the goal, the researcher should break the main question or
problem into subunits, making the whole process more manageable. The hypotheses, based on the
question or problem, should guide the whole research process and work in concert with the critical
assumptions of such research. Throughout the research process, data must be gathered and interpreted in
order to try to solve the problem or answer the question. The whole process is repeated as many times as
necessary until the answer to the problem or question is found.

The Basic Process of Research:

Step 1: Identify the Problem

The first step in the process is to identify a problem or develop a research question. The research problem
may be something the agency identifies as a problem, some knowledge or information that is needed by
the agency, or the desire to identify a recreation trend nationally.

Step 2: Review the Literature

Now that the problem has been identified, the researcher must learn more about the topic under
investigation. To do this, the researcher must review the literature related to the research problem. This
step provides foundational knowledge about the problem area. The review of literature also educates the
researcher about what studies have been conducted in the past, how these studies were conducted, and the
conclusions in the problem area.

Step 3: Clarify the Problem

Many times the initial problem identified in the first step of the process is too large or broad in scope. In
step 3 of the process, the researcher clarifies the problem and narrows the scope of the study. This can
only be done after the literature has been reviewed. The knowledge gained through the review of
literature guides the researcher in clarifying and narrowing the research project.

Step 4: Clearly Define Terms and Concepts

Terms and concepts are words or phrases used in the purpose statement of the study or the description of
the study. These items need to be specifically defined as they apply to the study. Terms or concepts often
have different definitions depending on who is reading the study. To minimize confusion about what the
terms and phrases mean, the researcher must specifically define them for the study.

Step 5: Define the Population

Research projects can focus on a specific group of people, facilities, park development, employee
evaluations, programs, financial status, marketing efforts, or the integration of technology into the
operations. For example, if a researcher wants to examine a specific group of people in the community,
the study could examine a specific age group, males or females, people living in a specific geographic
area, or a specific ethnic group. Literally thousands of options are available to the researcher to
specifically identify the group to study. The research problem and the purpose of the study assist the
researcher in identifying the group to involve in the study. In research terms, the group to involve in the
study is always called the population. Defining the population assists the researcher in several ways. First,
it narrows the scope of the study from a very large population to one that is manageable. Second, the
population identifies the group that the researcher's efforts will be focused on within the study. This helps
ensure that the researcher stays on the right path during the study. Finally, by defining the population, the
researcher identifies the group that the results will apply to at the conclusion of the study.

Step 6: Develop the Instrumentation Plan

The plan for the study is referred to as the instrumentation plan. The instrumentation plan serves as the
road map for the entire study, specifying who will participate in the study; how, when, and where data
will be collected; and the content of the program.

Step 7: Collect Data

Once the instrumentation plan is completed, the actual study begins with the collection of data. The
collection of data is a critical step in providing the information needed to answer the research question.
Every study includes the collection of some type of data—whether it is from the literature or from
subjects—to answer the research question. Data can be collected in the form of words on a survey, with a
questionnaire, through observations, or from the literature.

Step 8: Analyze the Data

All the time, effort, and resources dedicated to steps 1 through 7 of the research process culminate in this
final step. The researcher finally has data to analyze so that the research question can be answered. In the
instrumentation plan, the researcher specified how the data will be analyzed. The researcher now analyzes
the data according to the plan. The results of this analysis are then reviewed and summarized in a manner
directly related to the research questions.

Q.2 Differentiate between a research question and a hypothesis. Distinguish between researchable and
non-researchable questions, with two examples of each.

Ans. The difference between a research question and a hypothesis:


A research question is quite simply a question that your research intends to address. The research does
not necessarily need to answer the question in black or white, but it should explore the question,
providing detailed and analytical justifications of how and why it is or isn’t answered.

Good research questions must be:

• Clear and easy to understand
• Specific, with a definite focus
• Answerable – it must be possible to collect the necessary data
• Substantively relevant to your area of study

Unlike a research question, a hypothesis is a statement. A hypothesis is essentially a proposition
(a suggestion) about how something might work or behave. Researchers can develop their own
hypothesis on the grounds of informal observation or their own experience if they wish to do so. They
may also develop it from an examination of the existing literature.

The intention of the research is to prove or disprove the hypothesis. As with research based on a
research question, this does not necessarily require a black-and-white answer, but you must ensure
that you have covered the issue at length and provided critical analysis of the outcomes.

Just like a research question, good hypotheses must be:

• Clear and easy to understand
• Specific, with a definite focus
• Answerable – it must be possible to collect the necessary data
• Substantively relevant to your area of study

The difference is quite clear. One is a question that you, as a researcher, intend to answer. The
other is a statement that you will either prove or disprove.

Researchable and non-researchable questions: A researchable question is one that can
generate a hypothesis that can be tested through a structured and rigorous process of data collection,
analysis, and testing, whether quantitatively, qualitatively, or through a hybrid of methods. A non-researchable
question is therefore one that is not formulated to enable a testable hypothesis to be generated.
This does not mean that the topic is not capable of sustaining research. Often it is a matter of recasting the
question so that specific testable hypotheses can be formulated. Non-researchable questions could be too
broad or vague, or they could be questions for which answers are easily obtainable.

1. Researchable problems imply the possibility of empirical investigation:
   a. What are the achievement and social skill differences between children attending an
      academically or socially oriented pre-school program?
   b. What is the relationship between teachers' knowledge of assessment methods and their
      use of them?
2. Non-researchable problems include explanations of how to do something, vague propositions,
   and value-based concerns:
   a. Is democracy a good form of government?
   b. Should values clarification be taught in public schools?
   c. Can crime be prevented?
   d. Should physical education classes be dropped from the high school curriculum?

Q.3 What sources of information (primary, secondary, and others) are available to assist in the
literature review? Discuss their importance and how they link up with other parts of the research.

Ans. Sources of information in the literature review:

The Literature refers to the collection of scholarly writings on a topic. This includes peer-reviewed
articles, books, dissertations and conference papers.

• When reviewing the literature, be sure to include major works as well as studies that respond to
  major works. You will want to focus on primary sources, though secondary sources can be
  valuable as well.

The term primary source is used broadly to embody all sources that are original. Primary sources provide
first-hand information that is closest to the object of study. Primary sources vary by discipline.

• In the natural and social sciences, original reports of research found in academic journals,
  detailing the methodology used in the research, in-depth descriptions, and discussions of the
  findings, are considered primary sources of information.
• Other common examples of primary sources include speeches, letters, diaries, autobiographies,
  interviews, official reports, court records, artifacts, photographs, and drawings.

A secondary source is a source that provides non-original or secondhand data or information. 

• Secondary sources are written about primary sources.
• Research summaries reported in textbooks, magazines, and newspapers are considered secondary
  sources. They typically provide global descriptions of results with few details on the
  methodology. Other examples of secondary sources include biographies and critical studies of an
  author's work.

Secondary sources of information are those which are either compiled from or refer to primary
sources of information. The original information has been modified, selected, or reorganized
so as to serve a definite purpose for a group of users. Such sources contain information
arranged and organized on the basis of some definite plan. They contain organized, repackaged
knowledge rather than new knowledge. Information given in primary sources is made available
in a more convenient form. Due to their very nature, secondary sources are more easily and
widely available than primary sources. They not only provide digested information but also
serve as a bibliographical key to primary sources of information. The primary sources are the
first to appear; these are followed by secondary sources. It is difficult to find information
from primary sources directly. Therefore, one should consult the secondary sources in the
first instance, which will lead one to specific primary sources.

The above categorization is based on the characteristics of the documents. Primary sources
are more current and accurate than secondary and tertiary ones. In searching for information, a
researcher usually starts with secondary and tertiary sources and ends the search with
primary sources. Secondary and tertiary sources contain information in organized form and
serve as guides or indicators to the detailed contents of the primary literature. With the
increasing amount of literature being produced, it is becoming almost impossible to use primary
sources directly when searching for information. A scholar would also not be able to keep
himself up to date and well informed in his field of specialization without the aid of
secondary and tertiary sources. This shows the importance of these sources of information.

Q.4 Discuss briefly data collection strategies for action research, ethnographic research and case
study with suitable examples.

Ans. Data collection strategies for action research.

Case Studies
A case study is usually an in-depth description of a process, experience, or structure at a single institution.
In order to answer a combination of ‘what’ and ‘why’ questions, case studies generally involve a mix of
quantitative (i.e., surveys, usage statistics, etc.) and qualitative (i.e., interviews, focus groups, extant
document analysis, etc.) data collection techniques.  Most often, the researcher will analyze quantitative
data first and then use qualitative strategies to look deeper into the meaning of the trends identified in the
numerical data.
Checklists
Checklists structure a person’s observation or evaluation of a performance or artifact. They can be simple
lists of criteria that can be marked as present or absent, or can provide space for observer comments.
These tools can provide consistency over time or between observers. Checklists can be used for

evaluating databases, virtual IM service, the use of library space, or for structuring peer observations of
instruction sessions.
Interviews
In-Depth Interviews include both individual interviews (e.g., one-on-one) as well as “group” interviews
(including focus groups). The data can be recorded in a wide variety of ways including stenography,
audio recording, video recording or written notes. In depth interviews differ from direct observation
primarily in the nature of the interaction. In interviews it is assumed that there is a questioner and one or
more interviewees. The purpose of the interview is to probe the ideas of the interviewees about the
phenomenon of interest.
Observation
Sometimes, the best way to collect data is through observation. This can be done directly or indirectly,
with the subject knowing or unaware that you are observing them. You may choose to collect data through
continuous observation or via set time periods, depending on your project. You may interpret the data you
gather using the following mechanisms:
1. Descriptive observations: you simply write down what you observe.
2. Inferential observations: you write down an observation that is inferred from the
subject's body language and behavior.
3. Evaluative observations: you make an inference and therefore a judgment from the behavior.
Make sure you can replicate these findings.
Surveys/Questionnaires
Surveys or questionnaires are instruments used for collecting data in survey research.  They usually
include a set of standardized questions that explore a specific topic and collect information about
demographics, opinions, attitudes, or behaviors.
Data collection for ethnographic research:
The quality of a process evaluation, as in any research, is dependent on the quality and validity of the data
collected. Data validity is defined here as the closeness of the relationship between the data collected and
reported and the phenomenon being studied. Data may also be collected about retrospective experiences,
often through interviews, over the course of an intervention for example, and participants may not
remember their experiences or behaviours accurately. Furthermore, only selected data may be reported to
the interviewer by trial participants, depending on how a participant views circumstances; this filtering is
inevitable and can be interesting in itself but necessarily limits the data that the researcher has access to
and its validity.

Ethnography studies social and behavioural phenomena in naturalistic settings through participant
observation, where the researcher is embedded in a social world and, thus, uniquely observes behaviours
as they occur in situ. Observation as a key method of ethnography has several benefits. First, data
collection is direct rather than being reported at a later time point in interviews or focus groups and is
unmediated through participant interpretation or the passage of time. This partially overcomes the
problem where practitioners and participants may not remember or report their behaviours in an unbiased
way for various reasons, such as practitioners presenting a professional image to researchers or
participants constructing their own narratives retrospectively (they may, of course, adjust their behaviours
in response to ethnographic observation; this will be discussed below). Second, social groups are
observed directly in ordinary, everyday settings of participants; this method is useful for understanding
how people delivering or receiving an intervention behave in real life, both in settings where interventions
are received and in family or social settings where health behaviours occur or where new behavioural
skills are enacted. This can be valuable for hard-to-reach groups, such as substance abusers, or situations

such as youth drinking in town centres, as some health behaviours only occur in specific settings. Third,
the connections between different data on behaviours, events, contexts, and so on can be observed, rather
than being collected atomistically as separate, unrelated items. Ethnographic studies describe, for example,
how cigarette smoking behaviour in young people was related to different types of interaction within social
groups, which served to initiate and reinforce social bonds. That study was not a trial, but this type of
information could be used for a process evaluation; for the example just outlined, the researcher could
incorporate questions into the interview about social bonds and how these are affected by quit attempts, or
further observation on how quit attempts interact with the management of social bonds in peer groups.
This type of information would enhance the ability of a process evaluation to explain how a smoking
intervention operates in conjunction with the social practices of smoking, and any effect on trial outcomes.

Ethnography, because it uses observation as a central method, has an advantage in overcoming problems
such as self-report that exist in other qualitative studies which only employ interview and/or focus group
methods. Nonetheless, disadvantages such as bias exist in all methodologies, including ethnography, and
researchers commonly take measures to minimise them. However, an additional benefit of ethnography is
that it usually employs multiple methods, and this approach tends to balance out the strengths and
weaknesses of each method. Ethnography does this not just by using more than one method but by
integrating them in the analysis; this is not always the case in other types of ‘mixed methods’ studies,
including trials that incorporate qualitative studies.

The ethnographer collects naturalistic data through ‘participant observation’, which means that the
researcher must acquire the status of an insider and become part of a social group to some degree to
observe and experience life as an insider would. This makes the method distinct from just ‘observation’.
In order to collect data through participant observation, the researcher must first gain entry into a social
world and also gain acceptance there.

Data collection strategies for case study:

Case study research typically includes multiple data collection techniques and data are collected from
multiple sources. Data collection techniques include interviews, observations (direct and participant),
questionnaires, and relevant documents. The use of multiple data collection techniques and sources
strengthens the credibility of outcomes and enables different interpretations and meanings to be included
in data analysis. This is known as triangulation.
In case study research, the data collected are usually qualitative (words, meanings, views) but can also be
quantitative (descriptive numbers, tables). Qualitative data analysis may be used in theory building and
theory testing. Theory building may use the grounded theory approach. Theory testing typically involves
pattern matching. This is based on the comparison of predicted outcomes with observed data. Qualitative
data analysis is usually highly iterative. Visual displays of qualitative data using matrices (classifications
of data using two or more dimensions) may be used to discover connections between the coded
segments. Data analysis may be undertaken within a case and also between cases in multiple case study
research. Quantitative data is typically presented in descriptive, tabular form and used to highlight
characteristics of case study organizations and interviewees.

The case study is a data collection method in which in-depth descriptive information about specific
entities, or cases, is collected, organized, interpreted, and presented in a narrative format. The case
study report is essentially a story. The subject of the case may be an individual, a family, a
neighborhood, a work group, a classroom, a school, an organization, a program, or any other entity. A case
study may also focus on social or natural events such as new supervisors’ first six months on the job,
employees’ reactions to the acquisition of their organization by another company, or community response
to a natural disaster. As a data collection approach, it is widely applied in sociology, anthropology,
psychology, education, and medicine and offers much potential value to performance technology. Case
studies offer rich perspectives and insights that can lead to in-depth understanding of variables, issues,
and problems. For example, the renowned Swiss developmental psychologist Jean Piaget based his theories of
childhood intellectual development on the study of two cases, his own children (Liebert, Poulos, &
Strauss, 1974).

Implementation of a case study approach involves a unique degree of interaction for participants, the
researcher, and the research audience. The researcher collaborates closely with the participant to collect
the data, then selects and structures the ideas to include in the report, developing themes, highlighting
some ideas, subordinating or eliminating others, and finally connecting the ideas and embedding them in a
narrative context. In this process, the researcher is sharing the personal meanings of events and
relationships, both as voiced by the participant and by the researcher. As the audience reads the case
study, they in turn, based on their prior experience and personal knowledge, mentally add and subtract
information from the study, shaping what they read (Stake, 2005).
Q.5 Why is experimental research more effective than non-experimental research when a researcher is
interested in studying cause-and-effect relationships?

Ans. Experimental research is more effective than non-experimental research:

A predictor variable is the portion of the experiment that is being manipulated to see if it has an effect
on the dependent variable. For example, do people eat more Gouda or cheddar cheese? The predictor
variable here is the type of cheese. By "subjects" we simply mean the people in the experiment, or the
people being studied.
Experimental research is when a researcher is able to manipulate the predictor variable and subjects to
identify a cause-and-effect relationship. This typically requires the research to be conducted in a lab,
with one group placed in an experimental group (the one being manipulated) while the other is placed in
a placebo group (an inert, non-manipulated condition). A laboratory-based experiment gives a high level
of control and reliability.
Non-experimental research is the label given to a study when a researcher cannot control, manipulate or
alter the predictor variable or subjects, but instead, relies on interpretation, observation or interactions to
come to a conclusion. Typically, this means the non-experimental researcher must rely on correlations,
surveys or case studies, and cannot demonstrate a true cause-and-effect relationship. Non-experimental
research tends to have a high level of external validity, meaning it can be generalized to a larger
population.

Differences
So, now that we have the basics of what they are, we can see some of the differences between them.
Obviously, the first thing is the very basis of what they are looking at: their methodology. Experimental
researchers are capable of performing experiments on people and manipulating the predictor variables.
Non-experimental researchers are forced to observe and interpret what they are looking at. Being able to
manipulate and control something leads to the next big difference.
The ability to find a cause-and-effect relationship is kind of a big deal in the world of science! Being able
to say X causes Y is something that has a lot of power. While non-experimental research can come close,
non-experimental researchers cannot say with absolute certainty that X leads to Y. This is because there
may be something it did not observe, and it must rely on less direct ways to measure.
For example, let's say we're curious about how violent men and women are. We cannot have a true
experimental study because our predictor variable for violence is gender. To have a true experimental
study we would need to be able to manipulate the predictor variable. If we had a way to switch men into
women and women into men, back and forth, so that we could see which gender is more violent, then we
could run a true experimental study. But we can't do that. So, our little experiment becomes a
non-experimental study because we cannot manipulate our predictor variable.
Q.6 Enlist various strategies/methods to analyze data statistically in quantitative research. Discuss
any five in detail.

Ans. Various methods to analyze data statistically in quantitative research:

1. Mean

The arithmetic mean, more commonly known as “the average,” is the sum of a list of numbers divided by
the number of items on the list. The mean is useful in determining the overall trend of a data set or

providing a rapid snapshot of your data. Another advantage of the mean is that it’s very easy and quick to
calculate.

Pitfall:

Taken alone, the mean is a dangerous tool. In some data sets, the mean is also closely related to the mode
and the median (two other measurements near the average). However, in a data set with a high number of
outliers or a skewed distribution, the mean simply doesn’t provide the accuracy you need for a nuanced
decision.
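This sensitivity of the mean to outliers can be sketched in a few lines of Python using the standard library's statistics module (the attendance figures are hypothetical, for illustration only):

```python
from statistics import mean, median

# A small, hypothetical data set of weekly program attendance counts.
attendance = [12, 14, 13, 15, 14]
print(mean(attendance))      # 13.6 - close to the median, so the mean is informative

# One outlier (say, a special event) pulls the mean away from the typical
# value, while the median barely moves.
with_outlier = attendance + [90]
print(mean(with_outlier))    # about 26.33 - no longer describes a typical week
print(median(with_outlier))  # 14.0 - still describes a typical week
```

This is why, for skewed data or data with outliers, the mean should be reported alongside the median rather than alone.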

2. Standard Deviation

The standard deviation, often represented by the Greek letter sigma, is a measure of the spread of data
around the mean. A high standard deviation signifies that data are spread more widely from the mean,
while a low standard deviation signals that more data align closely with the mean. In a portfolio of data
analysis methods, the standard deviation is useful for quickly determining the dispersion of data points.

Pitfall:

Just like the mean, the standard deviation is deceptive if taken alone. For example, if the data follow a
non-normal distribution or contain a large number of outliers, then the standard deviation
won’t give you all the information you need.
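A small Python sketch (with made-up numbers) illustrates the idea: two data sets can share the same mean while their standard deviations differ widely, which is exactly the information the mean alone hides:

```python
import statistics

tight = [9, 10, 10, 11]   # values hug the mean of 10
spread = [1, 5, 15, 19]   # same mean of 10, but widely dispersed

print(statistics.pstdev(tight))   # about 0.71 (low spread)
print(statistics.pstdev(spread))  # about 7.28 (high spread)
```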

3. Regression

Regression models the relationships between dependent and explanatory variables, which are usually
charted on a scatter plot. The regression line also designates whether those relationships are strong or
weak. Regression is commonly taught in high school or college statistics courses with applications for
science or business in determining trends over time.

Pitfall:

Regression is not very nuanced. Sometimes, the outliers on a scatterplot (and the reasons for them) matter
significantly. For example, an outlying data point may represent the input from your most critical supplier
or your highest-selling product. The nature of a regression line, however, tempts you to ignore these
outliers. A classic illustration is a collection of data sets that share the exact same regression line
but contain widely different data points.
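To make the mechanics concrete, here is a minimal ordinary least-squares fit written in plain Python; the yearly sales figures are hypothetical, and a real analysis would also examine residuals and outliers:

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = slope * x + intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    den = sum((x - mean_x) ** 2 for x in xs)
    slope = num / den
    return slope, mean_y - slope * mean_x

years = [1, 2, 3, 4, 5]
sales = [10, 12, 14, 16, 18]   # a perfectly linear upward trend
slope, intercept = fit_line(years, sales)
print(slope, intercept)        # 2.0 8.0
```

In practice a library such as NumPy or statsmodels would also report goodness-of-fit measures, which is one way to catch the influential outliers this sketch ignores.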

4. Sample Size Determination

When measuring a large data set or population, like a workforce, you don’t always need to collect
information from every member of that population – a sample does the job just as well. The trick is to
determine the right sample size for the results to be accurate. Using proportion and standard deviation
methods, you can determine the sample size you need for your data collection to support statistically
sound conclusions.

Pitfall:

When studying a new, untested variable in a population, your proportion equations might need to rely on
certain assumptions. However, these assumptions might be completely inaccurate. This error is then
passed along to your sample size determination and on to the rest of your statistical data analysis.
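The proportion-based calculation mentioned above can be sketched with Cochran's formula; the 95% confidence level (z = 1.96), worst-case proportion of 0.5, and 5% margin of error below are illustrative choices, not fixed requirements:

```python
import math

def sample_size(z, p, margin):
    """Cochran's formula for estimating a proportion: n = z^2 * p(1-p) / e^2."""
    return math.ceil(z ** 2 * p * (1 - p) / margin ** 2)

# 95% confidence (z = 1.96), worst-case p = 0.5, 5% margin of error
n = sample_size(1.96, 0.5, 0.05)
print(n)  # 385
```

Using p = 0.5 is the conservative choice when nothing is known about the variable; if the assumed proportion is wrong, the computed sample size inherits that error, which is exactly the pitfall described above.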

5. Hypothesis Testing

Commonly carried out with t-tests, hypothesis testing assesses whether a certain premise is actually true for your
data set or population. In data analysis and statistics, you consider the result of a hypothesis
test statistically significant if the results are unlikely to have occurred by random chance alone. Hypothesis tests are
used in everything from science and research to business and economics.

Pitfall:

To be rigorous, hypothesis tests need to watch out for common errors. For example, the placebo effect
occurs when participants falsely expect a certain result and then perceive (or actually attain) that result.
Another common error is the Hawthorne effect (or observer effect), which happens when participants
skew results because they know they are being studied.
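As an illustrative sketch, a one-sample t statistic can be computed by hand in Python; the test scores and the null-hypothesis mean of 70 are hypothetical:

```python
import math
import statistics

def t_statistic(sample, mu0):
    """One-sample t statistic: (sample mean - mu0) / standard error."""
    n = len(sample)
    se = statistics.stdev(sample) / math.sqrt(n)
    return (statistics.mean(sample) - mu0) / se

# Hypothetical test scores; null hypothesis: the true mean is 70
scores = [72, 75, 68, 74, 71, 73, 70, 76]
t = t_statistic(scores, 70)
print(round(t, 2))  # about 2.52
```

The t value would then be compared against a t distribution with n - 1 degrees of freedom to decide whether to reject the null hypothesis; a full analysis would use a statistics library to obtain the p-value.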

Overall, these methods of data analysis add a lot of insight to your decision making portfolio, particularly
if you’ve never analyzed a process or data set with statistics before. However, avoiding the common
pitfalls associated with each method is just as important. Once you master these fundamental techniques
for statistical data analysis, then you’re ready to advance to more powerful data analysis tools.

Q.7 How would you define a variable? Discuss different kinds (at least five) of variables with
examples.

Ans. Variable

Variables represent the measurable traits that can change over the course of a scientific experiment. In
all, there are six basic variable types: dependent, independent, intervening, moderator, controlled, and
extraneous variables.

Different kinds of variables

Independent and Dependent Variables

In general, experiments purposefully change one variable, which is the independent variable. A
variable that changes in direct response to the independent variable is the dependent variable. Say
there’s an experiment to test whether changing the position of an ice cube affects its ability to melt. The
change in an ice cube's position represents the independent variable. The result of whether the ice cube
melts or not is the dependent variable.

Intervening and Moderator Variables

Intervening variables link the independent and dependent variables, but as abstract processes, they are
not directly observable during the experiment. For example, in a study of the effectiveness of a specific
teaching technique, the technique represents the independent variable, the completion of the technique's
objectives by the study participants represents the dependent variable, and the actual processes the
students use internally to learn the subject matter represent the intervening variables.

Moderator variables influence the relationship between the independent and dependent variables by
modifying the effect of the intervening variables (the unseen processes). Researchers measure
moderator variables and take them into consideration during the experiment.

Constant or Controlled Variables

Sometimes certain characteristics of the objects under scrutiny are deliberately left unchanged. These
are known as constant or controlled variables. In the ice cube experiment, one controlled
variable could be the size and shape of the cube. By keeping the ice cubes' sizes and shapes the same,
it's easier to measure the differences between the cubes as they melt after shifting their positions, since
they all started out the same size.

Extraneous Variables

A well-designed experiment eliminates as many unmeasured extraneous variables as possible. This
makes it easier to observe the relationship between the independent and dependent variables. These
extraneous variables, also known as unforeseen factors, can affect the interpretation of experimental
results. Lurking variables, a subset of extraneous variables, represent the unforeseen factors in the
experiment.

Confounding variables

Another type of lurking variable includes the confounding variable, which can render the results of the
experiment useless or invalid. Sometimes a confounding variable could be a variable not previously
considered. Not being aware of the confounding variable’s influence skews the experimental results.
For example, say the surface chosen to conduct the ice-cube experiment was on a salted road, but the
experimenters did not realize the salt was there and sprinkled unevenly, causing some ice cubes to melt
faster. Because the salt affected the experiment's results, it's both a lurking variable and a confounding
variable.

Q.8 What is the difference between descriptive statistics and inferential statistics?

Ans. Descriptive statistics vs. inferential statistics

Accounting students and professionals alike need to have a strong understanding of a variety of financial,
statistical, and computational concepts. Analyzing financial data and deriving actionable insights are
especially important skills. Students seeking to earn an accounting degree online should have a strong
understanding of the concepts that drive different types of statistics. Consider how descriptive statistics
and inferential statistics both apply to the many roles tied to accounting, as well as the important
differences between them.

What is descriptive statistics?

Descriptive statistics refers to the use of representative or sample sets of data to derive a conclusion or
finding. In descriptive statistics, the determinations reached are only applied to the population or data set
being studied.

Examples of descriptive statistics can be found in many industries and situations. In sports, descriptive
statistics could be used to gather information about a basketball player’s performance based on the
individual numbers tallied, such as points scored, blocks, and rebounds, during a single game or series of
games. With analysis, this data can be used to compare the player to others who also had the same
performance numbers collected. However, the statistics gathered and information generated about these
players should not be applied to all professional basketball players, or all players in general. Descriptive
statistics are limited to the population defined through the data initially gathered. They don’t generally
aim to reach a conclusion or provide proof of a larger point based on the result.

Descriptive statistics could also be used in the academic world in primary, secondary, and higher
education to analyze student grades by teachers and professors. An educator may track performance and
present anonymous group results to an entire class, helping them understand how they performed not only
on a letter grade or points basis but also compared to fellow students. The educator may share the mean,
median, and mode grade to help students contextualize individual performance, as well as measure the
range or standard deviation of the scores. This information may prove helpful to educators, helping them
benchmark performance and compare it to other classes or students.
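The classroom example above can be sketched in a few lines of Python; the grades are invented for illustration:

```python
import statistics

grades = [65, 70, 70, 75, 80, 85, 90]   # hypothetical scores for one class

summary = {
    "mean":   statistics.mean(grades),
    "median": statistics.median(grades),
    "mode":   statistics.mode(grades),
    "range":  max(grades) - min(grades),
    "stdev":  round(statistics.pstdev(grades), 2),
}
print(summary)
```

Crucially, every number in `summary` describes only this class; nothing here licenses a claim about other classes or students, which is the defining limit of descriptive statistics.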

In terms of finance and accounting, descriptive statistics can be useful to better understand return on
investment, Investopedia explains. Although the results of this type of statistical analysis can’t be counted
upon to predict the future, they describe core tendencies that occur over time.

What is inferential statistics?

One simple definition of inferential statistics is taking a sample that is representative of a population and
then drawing a conclusion for that larger group. This requires careful calculations to analyze the statistics
correctly and ensure the connection between population and sample size is accurately represented, often
through additional tests and mathematical work. This form of statistics is frequently used in the social
sciences and draws on a number of complicated techniques such as linear regression analysis and
structural equation modelling, Thought Co. explains. Because the analysis relies on a sample as opposed
to the entire population, a degree of confidence in the result of the statistical analysis is expressed by the
professionals who engaged in the process.
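One common way to express that degree of confidence is a confidence interval. The sketch below computes an approximate 95% interval for a sample mean, using the normal-approximation z value of 1.96; the spending figures are hypothetical, and a small sample like this would more properly use a t value:

```python
import math
import statistics

def conf_interval_95(sample):
    """Approximate 95% confidence interval for the mean (z = 1.96)."""
    n = len(sample)
    m = statistics.mean(sample)
    se = statistics.stdev(sample) / math.sqrt(n)
    return m - 1.96 * se, m + 1.96 * se

# Hypothetical customer spending amounts from a sample of 10
spend = [20, 22, 19, 25, 21, 23, 20, 24, 22, 21]
low, high = conf_interval_95(spend)
print(round(low, 2), round(high, 2))
```

The resulting interval is the inference: a statement about the larger population's mean, made from the sample alone, with a stated level of confidence.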

Some inferential statistics examples include determinations about widespread economic and health care
considerations for populations across states or the entire country. Political polling, which sets a sample
size and then extrapolates vote predictions for specific candidates in individual elections, is another way
in which this type of statistics is used. Sentiment polling about everything from political affiliation to
eating preferences and frequency of attending a sporting event or visiting a movie theater are other
examples.

In the world of finance and accounting, inferential statistics are valuable for reaching conclusions in
situations where a full analysis of the data is prohibitive or impossible. This type of statistics may be used
to make determinations about customer groups, especially for large businesses. Many potential uses arise
when professionals consider factors outside of the business itself. While many modern, mature companies
have access to strong data sets about internal operations, that’s not true for external needs. An analysis of
a potential investment or purchase of a competing or complementary organization could draw on
inferential statistics to examine a sample of similar situations involving other businesses and derive
specific results that inform future actions.

Q.9 Discuss different sampling techniques.

Ans. Different sampling techniques: Sampling methods fall into two broad categories, probability sampling and non-probability sampling, each with several variants.

Probability sampling methods

Probability sampling means that every member of the population has a chance of being selected. It is
mainly used in quantitative research. If you want to produce results that are representative of the whole
population, you need to use a probability sampling technique.

There are four main types of probability sampling.

1. Simple random sampling

In a simple random sample, every member of the population has an equal chance of being selected. Your
sampling frame should include the whole population.
To conduct this type of sampling, you can use tools like random number generators or other techniques
that are based entirely on chance.
Example

You want to select a simple random sample of 100 employees of Company X. You assign a number to
every employee in the company database from 1 to 1000, and use a random number generator to select
100 numbers.
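The Company X example can be sketched directly with Python's standard library; the employee numbers are the placeholders from the example:

```python
import random

population = list(range(1, 1001))        # employee numbers 1..1000
sample = random.sample(population, 100)  # 100 distinct picks, based purely on chance
print(len(sample))                       # 100
```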

2. Systematic sampling

Systematic sampling is similar to simple random sampling, but it is usually slightly easier to conduct.
Every member of the population is listed with a number, but instead of randomly generating numbers,
individuals are chosen at regular intervals.

Example

All employees of the company are listed in alphabetical order. From the first 10 numbers, you randomly
select a starting point: number 6. From number 6 onwards, every 10th person on the list is selected (6, 16,
26, 36, and so on), and you end up with a sample of 100 people.

If you use this technique, it is important to make sure that there is no hidden pattern in the list that might
skew the sample. For example, if the HR database groups employees by team, and team members are
listed in order of seniority, there is a risk that your interval might skip over people in junior roles,
resulting in a sample that is skewed towards senior employees.
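The interval-based selection above takes one line of Python once the population is listed; the random starting point mirrors the example:

```python
import random

population = list(range(1, 1001))        # 1000 employees in alphabetical order
interval = 10
start = random.randint(0, interval - 1)  # random starting point within the first interval
sample = population[start::interval]     # every 10th person from the start onwards
print(len(sample))                       # 100
```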

3. Stratified sampling

Stratified sampling involves dividing the population into subpopulations that may differ in important
ways. It allows you to draw more precise conclusions by ensuring that every subgroup is properly
represented in the sample.

To use this sampling method, you divide the population into subgroups (called strata) based on the
relevant characteristic (e.g. gender, age range, income bracket, job role).

Based on the overall proportions of the population, you calculate how many people should be sampled
from each subgroup. Then you use random or systematic sampling to select a sample from each subgroup.

Example

The company has 800 female employees and 200 male employees. You want to ensure that the sample
reflects the gender balance of the company, so you sort the population into two strata based on gender.
Then you use random sampling on each group, selecting 80 women and 20 men, which gives you a
representative sample of 100 people.
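A minimal sketch of the gender-stratified example, with placeholder employee labels standing in for the real database records:

```python
import random

women = [f"W{i}" for i in range(800)]    # placeholder labels for 800 female employees
men = [f"M{i}" for i in range(200)]      # placeholder labels for 200 male employees

# Sample each stratum in proportion to its share of the population (80/20)
sample = random.sample(women, 80) + random.sample(men, 20)
print(len(sample))                       # 100
```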

4. Cluster sampling

Cluster sampling also involves dividing the population into subgroups, but each subgroup should have
characteristics similar to those of the whole population. Instead of sampling individuals from each subgroup, you
randomly select entire subgroups.

If it is practically possible, you might include every individual from each sampled cluster. If the clusters
themselves are large, you can also sample individuals from within each cluster using one of the
techniques above. This method is good for dealing with large and dispersed populations, but there is more
risk of error in the sample, as there could be substantial differences between clusters. It’s difficult to
guarantee that the sampled clusters are really representative of the whole population.

Example

The company has offices in 10 cities across the country (all with roughly the same number of employees
in similar roles). You don’t have the capacity to travel to every office to collect your data, so you use
random sampling to select 3 offices – these are your clusters.
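A sketch of the office example, assuming (hypothetically) 50 employees per office; whole offices are selected, then every employee in the chosen offices joins the sample:

```python
import random

# Hypothetical roster: 10 city offices, 50 employees each
offices = {f"City{i}": [f"City{i}-emp{j}" for j in range(50)] for i in range(10)}

chosen = random.sample(sorted(offices), 3)          # pick 3 whole clusters
sample = [emp for city in chosen for emp in offices[city]]
print(len(sample))                                  # 150
```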

Non-probability sampling methods

In a non-probability sample, individuals are selected based on non-random criteria, and not every
individual has a chance of being included.

This type of sample is easier and cheaper to access, but it has a higher risk of sampling bias, and you can’t
use it to make valid statistical inferences about the whole population.

Non-probability sampling techniques are often appropriate for exploratory and qualitative research. In
these types of research, the aim is not to test a hypothesis about a broad population, but to develop an
initial understanding of a small or under-researched population.

1. Convenience sampling

A convenience sample simply includes the individuals who happen to be most accessible to the
researcher. This is an easy and inexpensive way to gather initial data, but there is no way to tell whether the
sample is representative of the population, so it can’t produce generalizable results.

Example

You are researching opinions about student support services in your university, so after each of your
classes, you ask your fellow students to complete a survey on the topic. This is a convenient way to
gather data, but as you only surveyed students taking the same classes as you at the same level, the
sample is not representative of all the students at your university.

2. Voluntary response sampling

Similar to a convenience sample, a voluntary response sample is mainly based on ease of access. Instead
of the researcher choosing participants and directly contacting them, people volunteer themselves (e.g. by
responding to a public online survey). Voluntary response samples are always at least somewhat biased, as
some people will inherently be more likely to volunteer than others.

Example

You send out the survey to all students at your university and a lot of students decide to complete it. This
can certainly give you some insight into the topic, but the people who responded are more likely to be
those who have strong opinions about the student support services, so you can’t be sure that their opinions
are representative of all students.

3. Purposive sampling

This type of sampling involves the researcher using their judgement to select a sample that is most useful
to the purposes of the research.

It is often used in qualitative research, where the researcher wants to gain detailed knowledge about a
specific phenomenon rather than make statistical inferences. An effective purposive sample must have
clear criteria and a clear rationale for inclusion.

Example

You want to know more about the opinions and experiences of disabled students at your university, so
you purposefully select a number of students with different support needs in order to gather a varied range
of data on their experiences with student services.

4. Snowball sampling

If the population is hard to access, snowball sampling can be used to recruit participants via other
participants. The number of people you have access to “snowballs” as you get in contact with more
people.

Example

You are researching experiences of homelessness in your city. Since there is no list of all homeless people
in the city, probability sampling isn’t possible. You meet one person who agrees to participate in the
research, and she puts you in contact with other homeless people that she knows in the area.

Q. 10 Define research ethics and the approaches used for considering ethical issues.

Ans. Research ethics


Research ethics is the application of moral rules and professional codes of conduct to the collection, analysis, reporting,
and publication of information about research subjects, in particular active acceptance of subjects' right to
privacy, confidentiality, and informed consent. Until recently sociologists (and social scientists generally)
often displayed arrogance in their treatment of research subjects, justifying their actions by the search for
truth. This trend is now being redressed, especially in industrial societies, with the adoption of formal
codes of conduct, and greater emphasis on ethical research procedures. Ethical issues are most salient in
relation to case-studies and other research designs which focus on very few cases (with the risk that they
remain identifiable in reports). Public opinion now resists invasions of privacy for genuine research
purposes just as much as for publicity seeking mass media stories, as evidenced by periodic increases in
survey non-response, despite the fact that anonymity is effectively guaranteed in large-scale data
collections.
Approaches to the Study of Ethics Issues

Ethical issues are ones that involve the way things "should be" rather than the way things are. Ethics
involve discussions of moral obligations, but do not necessarily hinge on religious overtones.
The first step in discussing ethical issues is to get all the facts. According to Velasquez et al., "some moral
issues create controversies simply because we do not bother to check the facts."

There are several approaches that are considered in arriving at ethical solutions to dilemmas.
Utilitarian Approach
"ethical actions are those that provide the greatest balance of good over evil"

In order to take the utilitarian approach, the problem must be analyzed from several different
perspectives, and the solutions to each must be contemplated to arrive at the one that favors the greater
good.
The Rights Approach
The rights approach is predicated on the notion that humans have the right to choose paths which affect
their destiny because they are human. Furthermore, humans are justified in their expectation that their
rights should be respected. These rights include the right to the truth, the right of privacy, the right to not
be injured, and the right to fulfillment of promises.
Fairness or Justice Approach
The fairness approach assumes that people should be treated equally regardless of their station in life, that
is, they should not be subject to discrimination.
Common Good Approach
The common good approach suggests that ethical actions are those that benefit all members of the
community.
The Virtue Approach
The virtue approach describes an assumption that there are higher orders of goodness to which man
should aspire, and that only moral actions will help us achieve that higher level.
Ethical problem solving involves accumulating all the facts surrounding an issue and then considering:
what the possible solutions to the problem are, and what benefits and harms result from each and whom
they affect;
what rights each of the parties to the problem has;
which solutions to the problem treat all parties equally;
what course of action promotes the common good;
and what actions develop moral virtues.

There are three key issues. Research subjects' right to refuse to co-operate with a study is clear-cut in
relation to interview surveys, but is not always observed in relation to case-studies, especially
when covert observation is employed. Research subjects' right to have the information they supply to
researchers remain not only anonymous but also confidential in the broader sense is rarely disputed, but again may be
difficult to observe in practice, especially when analyses of study results reveal more than may be
intended. The right to give or withhold informed consent, if necessary after the research has been
completed, ensures that research results are not made public without the subjects' knowing agreement.
