
22 a) Sampling Methods | Types, Techniques & Examples


When you conduct research about a group of people, it’s rarely possible to

collect data from every person in that group. Instead, you select a sample. The

sample is the group of individuals who will actually participate in the research.

To draw valid conclusions from your results, you have to carefully decide how

you will select a sample that is representative of the group as a whole. This is

called a sampling method. There are two primary types of sampling methods

that you can use in your research:

● Probability sampling involves random selection, allowing you to make

strong statistical inferences about the whole group.

● Non-probability sampling involves non-random selection based on

convenience or other criteria, allowing you to easily collect data.

You should clearly explain how you selected your sample in the methodology

section of your paper or thesis, as well as how you approached minimizing

research bias in your work.

Table of contents

1. Population vs. sample

2. Probability sampling methods

3. Non-probability sampling methods

4. Other interesting articles

5. Frequently asked questions about sampling


Population vs. sample

First, you need to understand the difference between a population and a sample,

and identify the target population of your research.

● The population is the entire group that you want to draw conclusions

about.

● The sample is the specific group of individuals that you will collect data

from.

The population can be defined in terms of geographical location, age, income,

or many other characteristics.

It can be very broad or quite narrow: maybe you want to make inferences about

the whole adult population of your country; maybe your research focuses on

customers of a certain company, patients with a specific health condition, or

students in a single school.


It is important to carefully define your target population according to the

purpose and practicalities of your project.

If the population is very large, demographically mixed, and geographically

dispersed, it might be difficult to gain access to a representative sample. A lack

of a representative sample affects the validity of your results, and can lead to

several research biases, particularly sampling bias.

Sampling frame
The sampling frame is the actual list of individuals that the sample will be

drawn from. Ideally, it should include the entire target population (and nobody

who is not part of that population).

Example: Sampling frame
You are doing research on working conditions at a social media marketing company. Your population is all 1000 employees of the company. Your sampling frame is the company’s HR database, which lists the names and contact details of every employee.

Sample size
The number of individuals you should include in your sample depends on

various factors, including the size and variability of the population and your

research design. There are different sample size calculators and formulas

depending on what you want to achieve with statistical analysis.
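As a hedged illustration, one commonly used formula (Cochran’s formula for estimating a proportion) is sketched below in Python; the confidence level, expected proportion, and margin of error are assumed values, not figures taken from this text.

import math

z = 1.96   # Z-score for a 95% confidence level (assumed)
p = 0.5    # expected proportion; 0.5 gives the most conservative (largest) sample
e = 0.05   # desired margin of error (assumed)

n = (z ** 2) * p * (1 - p) / (e ** 2)   # Cochran's formula for a large population
print(math.ceil(n))                     # about 385 respondents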

Probability sampling methods

Probability sampling means that every member of the population has a chance

of being selected. It is mainly used in quantitative research. If you want to


produce results that are representative of the whole population, probability

sampling techniques are the most valid choice.

There are four main types of probability sample.

1. Simple random sampling


In a simple random sample, every member of the population has an equal

chance of being selected. Your sampling frame should include the whole

population.

To conduct this type of sampling, you can use tools like random number

generators or other techniques that are based entirely on chance.

Example: Simple random sampling
You want to select a simple random sample of 100 employees of a social media marketing company. You assign a number to every employee in the company database from 1 to 1000, and use a random number generator to select 100 numbers.
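One way to reproduce such a draw is with a short script. The following is a minimal Python sketch, assuming the employee numbers are simply 1 to 1000 as in the example; the fixed seed is only there so the draw can be repeated.

import random

population = list(range(1, 1001))        # employee numbers 1..1000 from the sampling frame
random.seed(42)                          # fixed seed so the draw can be reproduced (illustrative)
sample = random.sample(population, 100)  # 100 numbers, each employee equally likely
print(sorted(sample)[:10])               # first few selected employee numbers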

2. Systematic sampling
Systematic sampling is similar to simple random sampling, but it is usually

slightly easier to conduct. Every member of the population is listed with a


number, but instead of randomly generating numbers, individuals are chosen at

regular intervals.

Example: Systematic sampling
All employees of the company are listed in alphabetical order. From the first 10 numbers, you randomly select a starting point: number 6. From number 6 onwards, every 10th person on the list is selected (6, 16, 26, 36, and so on), and you end up with a sample of 100 people.
If you use this technique, it is important to make sure that there is no hidden

pattern in the list that might skew the sample. For example, if the HR database

groups employees by team, and team members are listed in order of seniority,

there is a risk that your interval might skip over people in junior roles, resulting

in a sample that is skewed towards senior employees.
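As an illustration of the interval-based selection described above, here is a minimal Python sketch; the placeholder names and the interval of 10 mirror the example and are not real data.

import random

employees = [f"Employee {i}" for i in range(1, 1001)]  # stand-in for the alphabetical list
k = 10                                  # sampling interval (population size / sample size)
start = random.randint(1, k)            # random starting point within the first 10 positions
sample = employees[start - 1::k]        # every 10th person from the starting point onwards
print(len(sample), sample[:3])          # 100 people, e.g. positions 6, 16, 26, ...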

3. Stratified sampling
Stratified sampling involves dividing the population into subpopulations that

may differ in important ways. It allows you to draw more precise conclusions by

ensuring that every subgroup is properly represented in the sample.

To use this sampling method, you divide the population into subgroups (called

strata) based on the relevant characteristic (e.g., gender identity, age range,

income bracket, job role).

Based on the overall proportions of the population, you calculate how many

people should be sampled from each subgroup. Then you use random or

systematic sampling to select a sample from each subgroup.

Example: Stratified sampling
The company has 800 female employees and 200 male employees. You want to ensure that the sample reflects the gender balance of the company, so you sort the population into two strata based on gender. Then you use random sampling on each group, selecting 80 women and 20 men, which gives you a representative sample of 100 people.
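A small Python sketch of this proportional, per-stratum draw follows; the employee identifiers are placeholders invented for illustration.

import random

women = [f"W{i}" for i in range(1, 801)]   # stratum 1: 800 female employees (placeholder IDs)
men = [f"M{i}" for i in range(1, 201)]     # stratum 2: 200 male employees (placeholder IDs)

sample_size = 100
share_women = len(women) / (len(women) + len(men))   # 0.8 of the population
n_women = round(sample_size * share_women)           # 80
n_men = sample_size - n_women                        # 20

sample = random.sample(women, n_women) + random.sample(men, n_men)
print(len(sample))   # 100 people with the same gender proportions as the population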

4. Cluster sampling
Cluster sampling also involves dividing the population into subgroups, but each

subgroup should have similar characteristics to the whole sample. Instead of

sampling individuals from each subgroup, you randomly select entire

subgroups.

If it is practically possible, you might include every individual from each

sampled cluster. If the clusters themselves are large, you can also sample

individuals from within each cluster using one of the techniques above. This is

called multistage sampling.

This method is good for dealing with large and dispersed populations, but there

is more risk of error in the sample, as there could be substantial differences

between clusters. It’s difficult to guarantee that the sampled clusters are really

representative of the whole population.

Example: Cluster sampling
The company has offices in 10 cities across the country (all with roughly the same number of employees in similar roles). You don’t have the capacity to travel to every office to collect your data, so you use random sampling to select 3 offices – these are your clusters.
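A minimal Python sketch of the cluster-selection step is shown below, with placeholder city names standing in for the 10 offices.

import random

offices = ["City A", "City B", "City C", "City D", "City E",
           "City F", "City G", "City H", "City I", "City J"]  # 10 clusters (placeholder names)

clusters = random.sample(offices, 3)   # randomly select 3 whole offices as clusters
print(clusters)                        # data is then collected within these offices only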

Non-probability sampling methods

In a non-probability sample, individuals are selected based on non-random

criteria, and not every individual has a chance of being included.

This type of sample is easier and cheaper to access, but it has a higher risk of

sampling bias. That means the inferences you can make about the population are
weaker than with probability samples, and your conclusions may be more

limited. If you use a non-probability sample, you should still aim to make it as

representative of the population as possible.

Non-probability sampling techniques are often used in exploratory and

qualitative research. In these types of research, the aim is not to test a

hypothesis about a broad population, but to develop an initial understanding of a

small or under-researched population.


22 b Sampling Error in Research and How to Reduce Sampling Error
Sampling error can be defined as a statistical error that occurs when a researcher
fails to select a sample that is representative of the entire population. When
sampling error occurs, the results obtained from the sample are not reflective of
the results that would be obtained from the target population itself. Therefore,
the findings of the study are less generalizable to the target population.

The only way to completely eliminate sampling error from a study is by


observing every element in a population, which is not feasible and is even
impossible in some cases. Therefore, sampling error cannot be completely
avoided as no sample will ever be fully representative of the target population.
However, by having an understanding of sampling error, we can estimate the
size of it and take measures to minimize it, so as to make the findings of our
study as generalizable to the larger population as possible.

Types of Sampling Errors

Sampling errors can arise from a range of different causes. By having an


understanding of what causes sampling error, we can take measures to minimize
it.

The following is a list of the five most common types of sampling errors:

1. Sample Frame Error


Sample frame error occurs when the sample is selected from the wrong
population data. Therefore, in such cases, the sample frame does not represent
the population of interest from which the researcher thinks they are sampling.
This error generally includes targeting the wrong population segments or
completely missing out on certain demographics within the correct segments.

2. Selection Error
This error occurs when participants themselves opt to be a part of the study, and
therefore only those who are interested participate in the survey. If researchers
overlook respondents who didn’t initially respond, the outcome of the study will
not be reflective of the target market. If instead, the researcher decides to follow
up with the respondents that didn’t initially participate in the survey, the
outcome is very likely to change.

3. Population Specification Error


This is a type of sample design issue that is caused when a researcher fails to
clearly outline who they want to survey and therefore does not have a clear idea
of their target population. When you don’t have a clearly defined target
population, you may end up selecting inappropriate elements to be a part of
your sample group. This error is generally the result of a lack of knowledge on
which group(s) would be of most use and relevance to the study.

4. Non-Response Error
Non-response errors occur from the failure to obtain responses from all units in
the selected sample group. The decrease in the sample size and amount of
information collected will result in a larger standard error. Additionally, bias is
introduced if non-respondents differ from the respondents within
the selected sample. Many reasons could cause this; for example, a percentage
of the sample group may not use the channel through which the survey was
conducted. The extent of non-response error can be checked by using follow-up
surveys through additional channels to obtain responses from those respondents
who didn’t initially respond to the survey.
5. Sampling Errors
Sampling errors occur when there is a lack of representativeness of the target
population in the sample group. This is generally the result of poor sample
designing. Therefore, this error can be minimized or eliminated through careful
sample designing and by ensuring the sample size is large enough to reflect the
entire population.

Example of Sampling Error

To gain a deeper understanding of sampling error, let’s take a look at a real-life


example where a study had a large sampling error. We will also take a look at
what caused this sampling error.

In the 1936 presidential election, Alfred Landon, the Republican governor of


Kansas was pitted against the incumbent President, Franklin D. Roosevelt. At
the time, Literary Digest was one of the most respected magazines and had
accurately predicted the winners of multiple presidential elections within the
previous decades. For this election, Literary Digest conducted a poll about the
election, and with the data collected, they predicted that Landon would win the
election with 57% of the votes, while Roosevelt would lose with 43%.

The actual outcome of the election was jarringly different, with 62% of the
votes going to Roosevelt and 38% going to Landon.

In this case, the sampling error was a shocking 19% even though this was one of
the largest and most expensive polls conducted by Literary Digest and had a
sample size of around 2.4 million people.
This large sampling error was caused specifically due to sampling frame error,
as the sample frame was from telephone directories and car registrations.
However, at the time, many Americans did not own cars or phones, and the ones
who did were largely Republicans. For this reason, the results wrongly
predicted a Republican victory.

How to Estimate the Sampling Error?

The margin of error that is seen in survey results is an estimate of sampling


error. The following formula can be used to calculate your sampling error:

Sampling Error = Z × (σ / √n)

where:

Z = Z-score value based on the chosen confidence level (≈ 1.96 for 95% confidence)

σ = Population standard deviation

n = Size of the sample

It is important to note that as this value is simply an estimate, there is a small


chance (5% or less) that the margin of error is more than what is stated in the
report.
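As a worked sketch of the formula, the short Python snippet below plugs in made-up values for the population standard deviation and sample size; only the structure of the calculation comes from the formula above.

import math

z = 1.96       # Z-score for a 95% confidence level
sigma = 30     # assumed population standard deviation (illustrative)
n = 400        # assumed sample size (illustrative)

sampling_error = z * (sigma / math.sqrt(n))   # Z x (sigma / sqrt(n))
print(round(sampling_error, 2))               # 2.94, i.e. a margin of error of about +/-2.94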

Ways to Reduce Sampling Errors

There are many different measures that can be taken to reduce the 5 types of
sampling error.

Let’s explore a few of the most effective ways to do so:

1. Select a Larger Sample Size


When you select a larger sample size, your sample captures more of the actual
population. This makes the sample more representative of your target
population and reduces the margin of error.
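This effect can be illustrated numerically. Keeping the same assumed standard deviation as in the earlier sketch, the margin of error shrinks as the sample size grows (quadrupling the sample roughly halves the error).

import math

z, sigma = 1.96, 30                  # illustrative values, as above
for n in (100, 400, 1600):
    print(n, round(z * sigma / math.sqrt(n), 2))   # 5.88, 2.94, 1.47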

2. Improve Sample Design


You can reduce your sampling error by improving your sample design and
accounting for the different sub-populations within your target population. For
example, if a specific demographic makes up 40% of your target population,
then you should ensure that 40% of your sample group’s population is also
made up of this demographic.

This can be done by using a type of probability sampling known as stratified


random sampling. In this method of sampling, a population is first divided into
homogeneous sub-groups known as strata before simple random sampling is
used to select elements from each stratum. This ensures that the sample group
has a similar composition to that of the target population, and is, therefore, more
representative of it.
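A small sketch of how proportional allocation across strata could be computed; the demographic shares and sample size below are hypothetical.

population_shares = {"Demographic A": 0.40, "Demographic B": 0.35, "Demographic C": 0.25}
sample_size = 500

# number of respondents to draw from each stratum, in proportion to the population
allocation = {group: round(sample_size * share) for group, share in population_shares.items()}
print(allocation)   # {'Demographic A': 200, 'Demographic B': 175, 'Demographic C': 125}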

3. Study your Target Population


Before you select a sample, it is essential that you have a thorough understanding
of your target population and its demographic mix. Study your target population
well so that you can clearly and accurately define who makes it up, and so that the
relevant subpopulations can be targeted effectively.

Conclusion:

Sampling error is the arch-nemesis of research. It undermines the credibility of your
research outcomes and leads to wasted effort. Thankfully, there are many ways
to control and prevent these sampling errors, as discussed in this article.

Stay alert to these types of sampling errors to keep them from sneaking into
your research.
23 a What is Primary Data Collection? Types,
Advantages, and Disadvantages
Primary data collection involves gathering information directly from original sources,
before secondary or tertiary sources are consulted.
This type of data can be collected through a variety of methods, such as
interviews and surveys. Primary sources provide an understanding that experts
rely on, even as they support their conclusions with more extensive studies.
Read all about primary data in this blog post.

What is primary data collection?

Primary data collection is the process of collecting data from a live source, such
as a human being. The goal of primary data collection is to collect data that is as
accurate and complete as possible. This data can be used to improve the quality
of life for people and the environment. There are two types of primary data
collection: online and offline.

What are the types of primary data collection?

Basic types of primary data collection include online, offline, and


self-collection.

OFFLINE PRIMARY DATA COLLECTION

Offline primary data collection includes offline surveys, interviews, offline


quizzes, delphi technique, focus groups and observations.

● The Delphi Technique is a survey method that uses a panel of experts to

make decisions.
● In focus groups, participants discuss an issue or product in a group
setting. Focus groups are qualitative methods that involve interviewing a
group of people about their opinions or experiences.
● Interviews are one-on-one conversations with respondents. Personal
Interview is the most common way to collect data in terms of verbal
responses.
● Quizzes are quantitative methods that involve testing students on specific
information.
● Observation method is used when the study relates to behavioral science.
Direct observations are qualitative methods that involve watching and
recording the behavior of people in natural settings.

ONLINE PRIMARY DATA COLLECTION

Online primary data collection includes web scraping, online quizzes, and

online surveys.

SELF-COLLECTION

Self-collection includes using social media to collect information.

[Figure: Illustration of the most common primary data collection types]


Primary data collection methods

Primary data collection is the process of collecting data from a real-world


source, like a customer or user. This can be done manually or through
automated means.

There are three main types of primary data collection: qualitative, quantitative,
and mixed mode. Each has its own advantages and disadvantages.

Qualitative primary data collection is particularly valuable for research
because it allows you to collect rich information that can be used to improve
your products or services. However, it’s difficult to interpret and use this data in
a way that’s useful for business decisions. Quantitative primary data collection
is good for measuring how people are using your product or service, but it
doesn’t allow you to understand their thoughts and feelings very well. Mixed
mode primary data collection combines elements of both qualitative and
quantitative methods; this makes it easier to get valuable insights without
sacrificing accuracy.

QUALITATIVE PRIMARY DATA COLLECTION

Qualitative primary data collection is a form of research that involves collecting


data in an unstructured manner. This type of data collection can be helpful in
gaining a more in-depth understanding of a topic or phenomenon. It can be
useful when you want to explore an issue from many different perspectives. It
can also be helpful when you want to gather information about people’s
opinions and feelings about a topic or phenomenon. Additionally, qualitative
primary data collection can be helpful when you want to explore how people
think about a particular issue or problem.
Qualitative primary data collection is a type of data collection that uses
interviews, focus groups, and surveys to collect information from people. It’s
often used in research projects to gather feedback on products or services. This
kind of data can be useful for understanding how people use products or how
they think about problems. Qualitative primary data can also help you design
better products or services.

QUANTITATIVE PRIMARY DATA COLLECTION

Quantitative primary data collection is the process of collecting data that can be
measured.

There are two types of quantitative primary data collection: online surveys and
observation studies.

Quantitative primary data collection has several advantages, including the


ability to measure how people behave in natural settings, the ability to track
changes over time, and the potential for large-scale studies.

It has its own set of drawbacks, including low response rates and difficulty
getting accurate results.

Quantitative primary data collection refers to any form of research where you
collect information that can be measured (usually using a numeric scale). There
are two main types, online surveys and observational studies, both with their
own sets of benefits and drawbacks. With online surveys in particular, there is no
need for respondents to leave their homes or face any awkward questions;
they can take part from anywhere with an internet connection. But while
response rates may be high thanks to this accessibility, it is often difficult to get
accurate results, as many people choose not to answer surveys because they find
them intrusive or inconvenient rather than because they are uninterested in the topic at
hand. Lastly, while observational studies have been around for a long time, they
often suffer from low response rates, as people are less likely to volunteer for
them or to feel comfortable about being observed.

Examples of primary data

Primary data is data that is collected directly from the users of a product or
service. It includes information such as how people use the product or service,
what they say about it, and what they buy.

Primary data collection can be used to improve your products and services by
understanding how people use them. This data can help you make changes to
the product or service that will improve the user experience.

Businesses can benefit from primary data collection in a number of ways: by


improving customer satisfaction rates, by identifying new revenue
opportunities, and by making better product decisions.

REAL-LIFE EXAMPLES: PUTTING PRIMARY DATA COLLECTION


INTO PRACTICE

To grasp the real-world significance of primary data collection, let’s delve into a
few illustrative examples. These case studies demonstrate how organizations
and researchers have harnessed the power of primary data to drive impactful
change and innovation.
1. Healthcare and Patient Outcomes:
In the healthcare sector, primary data collection plays a pivotal role in
improving patient outcomes. By directly engaging with patients and
collecting data on their symptoms, treatment experiences, and recovery
progress, healthcare providers can tailor treatment plans, reduce hospital
readmissions, and enhance overall patient care.
2. Market Research and Consumer Insights:

Market research relies heavily on primary data collection to unearth

consumer preferences and market trends. Companies conduct surveys,

focus groups, and interviews to gain insights into consumer behavior.

This data informs product development, marketing strategies, and pricing

decisions.

3. Environmental Conservation and Observation:


Environmental scientists employ primary data collection techniques to
monitor ecological changes. Through field observations, data on species
populations, habitat health, and climate patterns are gathered. This
information is pivotal for conservation efforts and sustainable resource
management.
4. Education and Student Success:
Educational institutions utilize primary data collection to enhance student
success. Surveys and assessments help identify areas where students may
be struggling or excelling. This data guides curriculum adjustments,
support services, and educational policies.
Advantages and disadvantages of primary data collection

WHAT ARE THE ADVANTAGES?

Primary data collection has many advantages over traditional data collection
methods.

● Specific Relevance: Primary data can be designed to provide precisely the


information needed for a specific research question or objective. This
allows for targeted data collection.
● Timeliness: Primary data is typically more up-to-date than secondary data
since it can be collected in real-time. This is especially important in
fast-paced markets or industries.
● Control: During primary data collection, researchers have full control
over the data collection process, including sample selection, question
types, and data collection methods. This enables better quality control and
data validity.
● Adaptability: Researchers can tailor data collection methods to suit their
specific needs, whether through surveys, interviews, observations, or
experiments. This allows flexibility in capturing different types of
information.
● Uniqueness: Since primary data is collected exclusively for the specific
research project, it is typically unique and not available to competitors.
This can provide a competitive advantage.
● In-Depth Insights: Primary data often allows for deeper insights into the
behavior, attitudes, and preferences of the target audience. This can help
make more informed decisions.
● Context Understanding: Primary data collection allows researchers to
better understand the context and circumstances under which the data was
gathered. This is important for interpreting the results correctly.
● Research Continuity: In some cases, primary data collection can serve as
a foundation for future research and enable continuous data collection to
track trends over time.

WHAT ARE THE DISADVANTAGES?

There are a few disadvantages to primary data collection. One disadvantage is


that it can be time-consuming and difficult to collect accurate information.
Another disadvantage is that it can be invasive and disruptive, often requiring
people to take time away from their normal activities. And finally, it may not be
representative of your entire audience, and you may not have access to all the
relevant information.

The advantages and disadvantages of different types of primary data collections


should be considered when designing a study or when selecting a sampling
method for your research project.
23 b Difference between Questionnaire and Schedule

What is a Questionnaire?

A questionnaire is a research instrument used by any researcher as a tool to


collect data or gather information from any source or subject of his or her
interest from the respondents. It has a specific goal to understand topics from
the respondent’s point of view. The questionnaire consists of a set of written or
printed questions with a choice of answers, devised for survey or statistical
studies. It is the most popular type of primary data collection and can be used
to gather both quantitative data (in the form of numerals) and qualitative data (in
the form of words and figures), or mixed data, which is a combination of both
quantitative and qualitative data.

What is a Schedule?

A schedule is a formalized arrangement of inquiries, proclamations, statements,


and spaces for replies given to the enumerators who pose inquiries to the
respondents and note down the responses. The enumerators personally visit the
informants with the schedule and ask them questions from the given set of
questions in the sequence in which the questions are prepared and record their
replies in the provided space. The enumerators play a major role in collecting data
through schedules as they have to explain the aim and proper interpretation of
the questions to the respondents so that they can give accurate and proper
answers. The most common example of using a schedule to collect data is
Population Census.
Difference between Questionnaire and Schedule

● Meaning: A questionnaire is a research instrument used by a researcher as a tool to collect data or gather information from any source or subject of his or her interest from the respondents. A schedule is a formalized arrangement of inquiries, proclamations, statements, and spaces for replies given to the enumerators who pose inquiries to the respondents and note down the responses.

● Filled by: A questionnaire is filled in by the respondents, whereas a schedule is filled in by an enumerator.

● Response rate: The response rate of a questionnaire is low; the response rate of a schedule is high.

● Cost: A questionnaire is economical in terms of time, effort, and money; a schedule is expensive in terms of time, effort, and money.

● Coverage: A large area can be covered through a questionnaire; comparatively small areas can be covered through a schedule.

● Respondent’s identity: With a questionnaire, the identity of the respondent is unknown; with a schedule, as the enumerator visits the informant personally, the respondent’s identity is known.

● Dependency of success: The success of a questionnaire depends upon its quality; the success of a schedule depends upon the honesty and competence of the enumerator.

● Usage: A questionnaire is used only when the people are literate and cooperative; a schedule can be used both when people are literate and when they are illiterate.
24 a Essentials and Pre-Requisites for
Interpretation
To interpret means to explain the meaning of data. Interpretation of data is the part of
research that is concerned with drawing inferences from the collected data. It is a
very useful and important part of research because it makes possible the use of
research data. Figures themselves have no utility; it is the interpretation that makes it
possible to utilize the collected data. If data is properly interpreted, it gives the right
judgments and conclusions. Correct interpretation leads to correct forecasting too.
Right interpretation helps in correct decisions and suitable policy making.
Interpretation of data is an art, and it requires good knowledge of statistics, wisdom,
and vast experience.

For drawing right and correct conclusions from the data, the data must be handled
carefully, and the following precautions must be taken to achieve good interpretation.

The Data Should be Homogeneous

Homogeneous means of the same or similar nature. It means the collection of data
should be made on the same parameters or units, otherwise comparison will not be
possible and drawing conclusions would be difficult. For example, if data regarding
buying capacity is collected, then it must be in terms of rupees only (and not in
dollars). Homogeneity of data should also take into account the similarity of features
in the two areas being compared, for example, data regarding Mumbai and Nagpur. If the
data is not homogeneous, then the conclusions drawn can easily lead to wrong findings.

The Data Should be Adequate


Adequate means sufficient. Conclusions based on inadequate data cannot be right and
proper; therefore, the data must be full and complete. Incomplete data also makes it
difficult to analyze the data properly. For example, if data is collected only about
consumers’ expenses without knowing their incomes, it may lead to inappropriate
conclusions.

The Data Should be Suitable


Suitability of data is very important, right from drafting the questions through the
collection of answers and interpretation. For example, in order to study the buying behavior of car
users, rich consumers should be consulted. Data regarding poor consumers is not
suitable for this product and may therefore lead to false or wrong conclusions.

The Data Should be Scientifically Analyzed


For drawing proper conclusions, the most appropriate and advanced statistical
technique should be used to analyze the data. It will lead to correct and fast
conclusions.

Data Analysis Involves Critical Thinking


Data analysis involves critical thinking. It is done only after collecting all the data
and is always focused on the research problem. The data are described and interpreted in
detail, leading to the ultimate conclusion. Tables, graphs, and illustrations are used to
present the data more clearly and economically.
In brief, analysis involves examination and evaluation of some phenomenon by
dividing it into some constituent parts and identifying the relationships among the
parts in the context of the whole. You then interpret the relationships to explain or
make some intended generalization governing the behavior of the phenomenon.
The researcher summarizes the main findings of his study and the implications.
Conclusions summarize the main results of the research and describe what they mean
for the general field. Briefly describe what you did, consider suggesting future
research to follow up where your research ended.
24 b Methods of Testing Hypothesis
Hypothesis testing is a formal procedure for investigating our ideas about the
world using statistics. It is most often used by scientists to test specific
predictions, called hypotheses, that arise from theories.

There are 5 main steps in hypothesis testing:

1. State your research hypothesis as a null hypothesis (H0) and an alternate hypothesis (Ha or H1).

2. Collect data in a way designed to test the hypothesis.

3. Perform an appropriate statistical test.

4. Decide whether to reject or fail to reject your null hypothesis.

5. Present the findings in your results and discussion section.

Though the specific details might vary, the procedure you will use when testing
a hypothesis will always follow some version of these steps.

Step 1: State your null and alternate hypothesis

After developing your initial research hypothesis (the prediction that you want
to investigate), it is important to restate it as a null (Ho) and alternate (Ha)
hypothesis so that you can test it mathematically.
The alternate hypothesis is usually your initial hypothesis that predicts a
relationship between variables. The null hypothesis is a prediction of no
relationship between the variables you are interested in.

Hypothesis testing example

You want to test whether there is a relationship between gender and height.
Based on your knowledge of human physiology, you formulate a hypothesis that
men are, on average, taller than women. To test this hypothesis, you restate it as:

H0: Men are, on average, not taller than women.

Ha: Men are, on average, taller than women.
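As a hedged illustration of steps 2 to 4, the sketch below runs a one-sided two-sample t-test on fabricated height values; the numbers are invented and the scipy library is assumed to be available, so this is not data from the example above.

from scipy import stats

# Fabricated height samples in cm (illustrative only, not real data)
men = [178, 181, 175, 183, 179, 176, 182, 180]
women = [165, 168, 162, 170, 166, 164, 169, 167]

# One-sided Welch's t-test: Ha says men are taller on average
t_stat, p_value = stats.ttest_ind(men, women, equal_var=False, alternative="greater")

alpha = 0.05
print(round(t_stat, 2), round(p_value, 4))
if p_value < alpha:
    print("Reject H0: the data support that men are, on average, taller.")
else:
    print("Fail to reject H0.")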


25 a Different Types of Research Report
Report-writing types in research methodology can be classified by the direction in
which they travel within an organization. Vertical reports move higher up or lower
down in the hierarchy, while horizontal reports support organizational cooperation. A
lateral report is a communication channel between divisions within the same
organizational level, such as the production and finance departments.

SUBMISSION/PROPOSAL REPORTS

The proposal reports mainly focus on problem-solving. A proposal report in research
methodology is an outline of how one organization may satisfy the demands of
another. The majority of federal organizations use “requests for proposals,” or RFPs,
to publicize their demands. Potential vendors create proposal reports outlining how
they can fulfill the needs stated in the RFP.

REGULAR REPORTS

Periodic reports are sent out at regular intervals and mainly serve management
control. They are usually preprinted forms or computer-generated forms, and
standardized data make periodic reports more consistent.

INTERNAL AND EXTERNAL REPORTS

Internal reports are distributed within the organization. External reports are
circulated outside the corporation, such as the annual reports of businesses.

FUNCTIONAL REPORTS
This category includes reports that are categorized according to their intended
use. Accounting reports, marketing reports, financial reports, and several other
reports are included in functional reports. Almost all reports fall into one of
these categories or another.

The most typical report-writing types are discussed above.
25 b Content of Research Reports
Research Report:

A research report is a document prepared by a researcher after analyzing the
information gathered, typically based on surveys or qualitative methods. The report is
formatted in a clear and structured way for the effective relaying of information. The
information in a research report should be accurate, as it is a first-hand document of
the research process.

Purpose of the study: This includes the purpose for which the study was undertaken.
It must give the background of the problem. It also covers the importance of the
research topic and the purpose for which the researcher conducted the research.

Statement of the problem: The research report is different from other reports,
and it should clearly state the nature of the problem and how the researchers
involved can solve it effectively.

Review of literature: The literature review involves the earlier research material
by which the researcher takes information for a report. It consists of the author's
name, book name, year of publication, and publisher of the report.

Methodology: It covers the various aspects of the research problem and explains the
different concepts used in the study. It draws on two sources of data, namely primary
data and secondary data. Primary data is collected through questionnaires or
interviews, while secondary data is collected from existing reports.
Interpretation: In the interpretation of data, all the sources of primary data
should be interpreted systematically. It also involves the tabulation and diagrams of the
study. The data is selected from the questionnaire and has to be reported as
per the study’s objectives.

Conclusion: It is based on the data collected in the study. It includes suggestions
for the report, with proper logical and statistical reasoning. It should give a precise
analysis of the objectives of the study.

Bibliography: It includes the list of references for the study. Authors/researchers
should list the reference books, articles, and projects in a consistent manner. It should
be convenient and satisfactory for the reader of the report.
