
Module 1: Introduction to Business Analytics

Three developments spurred the recent explosive growth in the use of analytical methods in business applications:

First development:
Technological advances (scanner technology, data collection through e-commerce, Internet social networks, and data generated from personal electronic devices) produce incredible amounts of data for businesses. Businesses want to use these data to improve the efficiency and profitability of their operations, better understand their customers, price their products more effectively, and gain a competitive advantage.

Second development:
Ongoing research has resulted in numerous methodological developments, including:
- Advances in computational approaches to effectively handle and explore massive amounts of data
- Faster algorithms for optimization and simulation
- More effective approaches for visualizing data

Third development:
The methodological developments were paired with an explosion in computing power and storage capability. Better computing hardware, parallel computing, and cloud computing have enabled businesses to solve big problems faster and more accurately than ever before.

Every manager has the responsibility to plan, coordinate, and lead their organization to better performance. It is the managers' responsibility to make strategic, tactical, or operational decisions.

Strategic decisions:
- Involve higher-level issues concerned with the overall direction of the organization
- Define the organization's overall goals and aspirations for the future

Tactical decisions:
- Concern how the organization should achieve the goals and objectives set by its strategy
- Are usually the responsibility of midlevel management

Operational decisions:
- Affect how the firm is run from day to day
- Are the domain of operations managers, who are the closest to the customers

Regardless of the level within the firm, decision making can be defined as the following process:
1. Identify and define the problem
2. Determine the criteria that will be used to evaluate alternative solutions
3. Determine the set of alternative solutions
4. Evaluate the alternatives
5. Choose an alternative

Step 1, identifying and defining the problem, is the most critical: only if the problem is well defined, with clear metrics of success or failure (step 2), can a proper approach for solving the problem (steps 3 and 4) be devised. Decision making concludes with the choice of one of the alternatives (step 5).

Common approaches to making decisions:
- Tradition ("We've always done it this way")
- Intuition ("gut feeling")
- Rules of thumb ("As the restaurant owner, I schedule twice the number of waiters and cooks on holidays")
- Using the relevant data available

WHAT IS BUSINESS ANALYTICS?


What makes decision making difficult and challenging? Uncertainty is probably the number one challenge: if we knew what the demand for our product would be, we could do a much better job of planning and scheduling production. We also often face such an enormous number of alternatives that we cannot evaluate them all. What is the best combination of stocks to help me meet my financial objectives? How should an airline price its tickets so as to maximize revenue?
Business analytics:
- Is the scientific process of transforming data into insight for making better decisions
- Is used for data-driven or fact-based decision making, which is often seen as more objective than other approaches to decision making
- Is an entire discipline that encompasses technology, work processes, and human and organizational factors, and it is evolutionary
- Is not a product or tool, a project, or a collection of reports, dashboards, or visualizations
- Is the utilization of organizational (and sometimes external) data to provide timely, accurate, high-value, actionable DECISIONS (ERLJalao, 2019)

Business Analytics History

Business analytics uses data as the basis for decision making, which is often seen as more objective. Data are processed scientifically to convert them into insight that can be used for better planning, quantifying risk, and choosing the best course of action from the developed alternatives.

Advantages of Business Analytics:
1. Eliminate guesswork
2. Get faster answers to your questions
3. Get insight into customer behaviour
4. Get key business metrics reports when and where you need them

Tools of business analytics can aid decision making by:
- Creating insights from data
- Improving our ability to forecast more accurately for planning
- Helping us quantify risk
- Yielding better alternatives through analysis and optimization

But for us accountants: "Why is it important?" Accountants use data analytics to help businesses uncover valuable insights within their financials, identify process improvements that can increase efficiency, and better manage risk. "Accountants will be increasingly expected to add value to the business decision making within their organizations and for their clients."

Auditors, both those working internally and externally, can shift from a sample-based model to continuous monitoring, where much larger data sets are analyzed and verified. The result: less margin of error, resulting in more precise recommendations.

Tax accountants use data science to quickly analyze complex taxation questions related to investment scenarios. In turn, investment decisions can be expedited, which allows companies to respond faster to opportunities and beat their competition, and the market, to the punch.

Accountants who assist, or act as, investment advisors use big data to find behavioral patterns in consumers and the market. These patterns can help businesses build analytic models that, in turn, help them identify investment opportunities and generate higher profit margins.

Module 2 & 4 (same content)

Business analytics is an iterative process of solving a business problem: finding and analyzing the data required to obtain a solution, and interpreting the results to provide recommendations for decision making. There are four types of business analytics, ranging from simple reports to the most advanced optimization techniques. They are usually implemented in stages that are interrelated with each other and offer various insights.

The four types below were discussed by the Analytics Association of the Philippines (the accompanying diagram is not reproduced in these notes):
Descriptive analytics
Encompasses the set of techniques that describe what has happened in the past. Examples:
- Data queries
- Reports
- Descriptive statistics
- Data visualization (including data dashboards)
- Data-mining techniques
- Basic what-if spreadsheet models

Data query
A request for information with certain characteristics from a database. For example, a query to a logistics company's (e.g., LBC's) database might be for all records of shipments to a particular distribution center during the month of May. This query provides descriptive information about these parcels, such as the number of parcels, how much was included in each parcel, the date of each parcel, and so on. A report summarizing relevant information for management might be conveyed by the use of descriptive statistics (means, measures of variation, etc.) and data-visualization tools (tables, charts, and maps); such simple descriptive statistics and data-visualization techniques can also be used to find patterns or relationships in a large database.
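To make the data query idea concrete, here is a minimal sketch in Python using the pandas library. The table, its column names, and the numbers are all hypothetical, invented for illustration; the notes do not specify an actual dataset.

    import pandas as pd

    # Hypothetical shipment records; in practice these would come from the
    # logistics company's database (e.g., a CSV export or a SQL query).
    shipments = pd.DataFrame({
        "distribution_center": ["Manila", "Cebu", "Manila", "Davao", "Manila"],
        "month": ["May", "May", "April", "May", "May"],
        "parcel_count": [120, 85, 60, 40, 95],
    })

    # The "query": all records of shipments to a particular distribution
    # center during the month of May.
    may_manila = shipments[(shipments["distribution_center"] == "Manila") &
                           (shipments["month"] == "May")]

    # Descriptive statistics summarizing the query result for a report.
    print(may_manila["parcel_count"].describe())  # count, mean, std, min, max, ...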
Data dashboards
Collections of tables, charts, maps, and summary statistics that are updated as new data become available.
Uses of dashboards:
- To help management monitor specific aspects of the company's performance related to their decision-making responsibilities
- For corporate-level managers, daily data dashboards might summarize sales by region, current inventory levels, and other company-wide metrics
- Front-line managers may view dashboards that contain metrics related to staffing levels, local inventory levels, and short-term sales forecasts

Data mining
The use of analytical techniques for better understanding patterns and relationships that exist in large data sets. For example, by analyzing text on social network platforms such as Facebook and Twitter, data-mining techniques are used by companies to better understand their customers. By categorizing certain words as positive or negative and keeping track of how often those words appear in tweets, a company like Samsung can better understand how its customers are feeling about a product like the Galaxy Z Flip 5G.
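The word-counting idea just described can be sketched in a few lines of Python. The word lists and sample posts below are made up for illustration; a real text-mining pipeline would use much larger sentiment dictionaries and corpora.

    from collections import Counter

    # Hypothetical sentiment dictionaries and social media posts.
    POSITIVE = {"love", "great", "amazing"}
    NEGATIVE = {"hate", "broken", "slow"}

    posts = [
        "I love the new phone, the camera is amazing",
        "battery life is slow to charge and the hinge feels broken",
    ]

    counts = Counter()
    for post in posts:
        for word in post.lower().split():
            if word in POSITIVE:
                counts["positive"] += 1
            elif word in NEGATIVE:
                counts["negative"] += 1

    print(counts)  # Counter({'positive': 2, 'negative': 2})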
Diagnostic analytics
Used to determine why something happened in the past. At times, businesses are required to think critically about the nature of the data and understand the descriptive analysis in depth. In order to find issues in the data, we need to find anomalous patterns that might contribute to the poor performance of our model. With diagnostic analysis, you are able to diagnose various problems that are exhibited through your data. Businesses use this technique to reduce their losses and optimize their performance. Some examples where businesses use diagnostic analysis:
- Businesses implement diagnostic analysis to reduce latency in logistics and optimize their production processes.
- With the help of diagnostic analysis in the sales domain, one can update marketing strategies that would otherwise diminish total revenue.
In time-series sales data, diagnostic analytics would help you understand why sales decreased or increased in a specific year. However, this type of analytics has a limited ability to give actionable insights; it provides an understanding of causal relationships and sequences while looking backward.

Predictive analytics
Consists of techniques that use models constructed from past data to predict the future or ascertain the impact of one variable on another. With the help of predictive analysis, we determine the future outcome: based on the analysis of historical data, we are able to forecast the future. It makes use of descriptive analysis to generate predictions about the future. With the help of technological advancements and machine learning, we are able to obtain predictive insights about the future.

Techniques used in Predictive Analytics:
- Linear regression: a regression analysis in which the relationship between the independent variables and the dependent variable is approximated by a straight line.
- Time series analysis: works on a set of observations on a variable measured at successive points in time or over successive periods of time.
- Data mining: used to find patterns or relationships among elements of the data in a large database; often used in predictive analytics.
- Simulation: involves the use of probability and statistics to construct a computer model to study the impact of uncertainty on a decision.
For example, survey data and past purchase behavior may be used to help predict the market share of a new product.
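As a small illustration of the linear regression and time series ideas above, the sketch below fits a straight line to a hypothetical monthly sales series with numpy and uses it to forecast the next month. The numbers are invented for illustration.

    import numpy as np

    # Hypothetical monthly sales (a time series: one observation per month).
    months = np.arange(1, 9)                      # months 1..8
    sales = np.array([10, 12, 13, 15, 16, 18, 19, 21], dtype=float)

    # Linear regression: approximate the relationship between month
    # (independent variable) and sales (dependent variable) with a line.
    slope, intercept = np.polyfit(months, sales, deg=1)

    # Predictive analytics: use the model built from past data to forecast.
    next_month = 9
    forecast = slope * next_month + intercept
    print(f"sales ~ {slope:.2f} * month + {intercept:.2f}")
    print(f"forecast for month {next_month}: {forecast:.1f}")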
Prescriptive analytics
Combines insights from all of the above analytical techniques. It indicates a course of action to take: the output of a prescriptive model is a decision. Prescriptive analytics allows companies to make decisions based on that output. It makes heavy use of Artificial Intelligence to help companies make careful business decisions. It is also referred to as the final frontier of data analytics.

Tools used in Prescriptive Analytics:
Rule-based models are types of prescriptive models that rely on a rule or set of rules. For example, we may develop a model to predict the probability that a person will default on a loan, and create a rule that says that if the estimated probability of default is more than 0.6, we should not award the loan. The predictive model combined with the rule becomes a prescriptive model.
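The loan example translates directly into code. This is a minimal sketch: predict_default_probability stands in for whatever predictive model the analyst has actually built (its formula here is made up), while the 0.6 cutoff is the rule described in the text.

    def predict_default_probability(applicant: dict) -> float:
        """Stand-in for a predictive model (e.g., logistic regression).
        Uses a made-up score based on the applicant's debt-to-income ratio."""
        return min(1.0, applicant["debt"] / max(applicant["income"], 1))

    def loan_decision(applicant: dict) -> str:
        """Rule-based prescriptive model: prediction + rule = decision."""
        p_default = predict_default_probability(applicant)
        # The rule from the text: if the estimated probability of default
        # is more than 0.6, do not award the loan.
        return "deny" if p_default > 0.6 else "approve"

    print(loan_decision({"income": 50_000, "debt": 40_000}))  # deny (p = 0.80)
    print(loan_decision({"income": 80_000, "debt": 20_000}))  # approve (p = 0.25)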
Optimization models are models that give the best decision subject to the constraints of the situation.
Simulation optimization combines the use of probability and statistics to model uncertainty with optimization techniques to find good decisions in highly complex and highly uncertain settings.
Decision analysis can be used to develop an optimal strategy when a decision maker is faced with several decision alternatives and an uncertain set of future events. It employs utility theory, which assigns values to outcomes based on the decision maker's attitude toward risk, loss, and other factors.
The computations involve optimizing some function related to the desired outcome. For example, when you call for a Grab car online, the application uses GPS to connect you to the right driver from among a number of drivers found nearby; it optimizes the distance for a faster arrival time. Recommendation engines also use prescriptive analytics.
Major industry players like Facebook, Netflix, Amazon, and Google are using prescriptive analytics to make key business decisions. Furthermore, financial institutions are gradually leveraging the power of this technique to increase their revenue.
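To give a flavor of what an optimization model looks like in practice, here is a minimal sketch using scipy.optimize.linprog. The product-mix problem and all of its numbers are hypothetical; a real prescriptive model would encode the actual constraints of the business.

    from scipy.optimize import linprog

    # Hypothetical product-mix problem: maximize profit 40*x1 + 30*x2
    # subject to limited machine hours and labor hours.
    # linprog minimizes, so the profit coefficients are negated.
    c = [-40, -30]
    A_ub = [[1, 2],   # machine hours used per unit of x1, x2
            [3, 1]]   # labor hours used per unit of x1, x2
    b_ub = [100, 90]  # available machine hours, labor hours

    result = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
    x1, x2 = result.x
    print(f"best decision: produce {x1:.1f} of product 1 and {x2:.1f} of product 2")
    print(f"maximum profit: {-result.fun:.0f}")

The "best decision subject to constraints" here is the output itself: the solver returns the production quantities (16 and 42 under these made-up numbers) rather than a mere description or prediction.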
Module 3: Big Data

Big data analytics has transformed the way industries perceive data. Big data is any set of data that is too large or too complex to be handled by standard data-processing techniques and typical desktop software.

Traditionally, companies made use of statistical tools and surveying to gather data and perform analysis on a limited amount of information. Most of the time, the deductions and inferences produced from that information were not adequate and did not lead to positive results, and companies incurred losses because of it.

However, with advancements in technology and a massive increase in computational capabilities contributed by High-Performance Computing, industries are able to expand their domain of knowledge. What comprised a few gigabytes in the past is now on the scale of quintillions, driven by the massive spread of mobile phones, IoT devices and other internet services. To make sense of this, industries have resorted to Big Data Analytics.

PART I: BRIEF HISTORY OF BIG DATA IN ANALYTICS

The advent of big data analytics was a response to the rise of big data, which began in the 1990s. Long before the term "big data" was coined, the concept was applied at the dawn of the computer age, when businesses used large spreadsheets to analyze numbers and look for trends.

The sheer amount of data generated in the late 1990s and early 2000s was fueled by new sources of data. The popularity of search engines and mobile devices created more data than any company knew what to do with. Speed was another factor: the faster data was created, the more that had to be handled. A study by International Data Corporation (IDC) projected that data creation would grow tenfold globally by 2020. Whoever could tame the massive amounts of raw, unstructured information would open a treasure chest of insights about consumer behavior, business operations, natural phenomena and population changes never seen before.

Traditional data warehouses and relational databases could not handle the task; innovation was needed. In 2006, Hadoop was created by engineers at Yahoo and launched as an Apache open source project. The distributed processing framework made it possible to run big data applications on a clustered platform. This is the main difference between traditional and big data analytics.

At first, only large companies like Google and Facebook took advantage of big data analysis. By the 2010s, retailers, banks, manufacturers and healthcare companies began to see the value of also being big data analytics companies.

Large organizations with on-premises data systems were initially best suited for collecting and analyzing massive data sets. But Amazon Web Services (AWS) and other cloud platform vendors made it easier for any business to use a big data analytics platform. The ability to set up Hadoop clusters in the cloud gave a company of any size the freedom to spin up and run only what it needs, on demand.

A big data analytics ecosystem is a key component of agility, which is essential for today's companies to find success. Insights can be discovered faster and more efficiently, which translates into immediate business decisions that can determine a win.

PART II: THE FOUR (4) V'S OF BIG DATA

IBM data scientists break big data into four dimensions:

1. VOLUME refers to the quantity of data. In the internet era, data are generated by machines and by human interaction on social sites and other platforms, so the volume of data generated every day is humongous. IBM estimates that 2.5 quintillion bytes of data are created each day.
2. VARIETY refers to collecting data from various sources (human and machine), including social media, credit card usage, website visits, retail shops, hospitals, mobiles, sensors, log files, security cameras, and more. Because data are captured from a variety of sources and in multiple types (structured, semi-structured and unstructured, from internal and external systems), it becomes very important to integrate these multiple data types.
3. VELOCITY deals with the speed at which data flow from various sources, such as social media and internal business processes. In the internet era, the flow of data from social media is massive and continuous, so handling the velocity of such an amount of data and extracting meaningful information from it helps the organization make key business decisions.
4. VERACITY refers to the trustworthiness (and possible abnormality) of data: how much of the data can be trusted as-is when decisions have to be made. This dimension focuses on how to integrate data from different sources into consistently high-quality data that can be helpful in making meaningful business decisions.

PART III: BIG DATA ANALYTICS TOOLS

Big data analytics requires a software framework for distributed storage and processing of big data. The following tools are considered big data analytics software solutions:
- Hadoop: An open-source programming environment that supports big data processing through distributed storage and processing over multiple computers
- MapReduce: A programming model used within Hadoop that performs two major steps, the map step and the reduce step, for processing massive amounts of unstructured data in parallel across a distributed cluster (see the sketch after this list)
- Apache Kafka: A scalable messaging system that lets users publish and consume large numbers of messages in real time by subscription
- HBase: A column-oriented key/value data store that runs on the Hadoop Distributed File System
- Hive: An open-source data warehouse system for analyzing data sets in Hadoop files
- Pig: An open-source technology for parallel programming of MapReduce jobs on Hadoop clusters
- Spark: An open-source parallel processing framework for running large-scale data analytics applications across clustered systems
- YARN: Cluster management technology in second-generation Hadoop
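To illustrate the two MapReduce steps named above, here is a toy word-count sketch in plain Python. This is only a single-machine imitation of the programming model; real MapReduce jobs are written against Hadoop's APIs and run in parallel across a cluster, and the input documents here are invented.

    from itertools import groupby
    from operator import itemgetter

    documents = ["big data big insights", "big cluster"]  # toy input

    # Map step: emit (key, value) pairs from each input record.
    mapped = [(word, 1) for doc in documents for word in doc.split()]

    # Shuffle: group pairs by key (Hadoop does this between the two steps).
    mapped.sort(key=itemgetter(0))

    # Reduce step: aggregate the values for each key.
    counts = {key: sum(v for _, v in group)
              for key, group in groupby(mapped, key=itemgetter(0))}

    print(counts)  # {'big': 3, 'cluster': 1, 'data': 1, 'insights': 1}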
be that people with a higher income are more likely to buy and stay in
Data security is the protection of stored data from destructive forces or unauthorized users, and is of critical importance to companies. For example, credit card transactions are potentially very useful for understanding consumer behavior, but compromise of these data could lead to unauthorized use of the credit card or identity theft.

Module 5: Research Process
The research process involves identifying, locating, assessing, and analyzing the information you need to support your research question, and then developing and expressing your ideas. These are the same skills you need any time you write a report or proposal, or put together a presentation.
The research process can be broken down into seven steps, making it more manageable and easier to understand. This module will give you an idea of what is involved at each step, in order to give you a better overall picture of where you will be going and what to expect at each step.

Overview of Research Process
The research process is broadly summarized in Figure 1.1 (the diagram is not reproduced in these notes).

Initial Observation
The first step in Figure 1.1 is to come up with a question that needs an answer. A lot of scientific endeavor starts by observing something in the world and wondering why it happens. Having made a casual observation about the world, the observer needs to collect some data to see whether the observation is true and not just biased. To do this, the person/observer needs to define one or more variables that would likely be measured from the observation.

Example:
Scenario: Whenever I buy a coffee from Starbucks, there are a lot of people staying there for an hour, two hours, or even longer. One question I am constantly perplexed by is: why do people stay there for an hour or even more?

Variable/s: Occupation (1), Income (2), Gender (3), or Personality/Motive of the buyer (4).

Generating theories and testing them
The next logical thing is to explain these data. One explanation could be that people with a higher income are more likely to buy and stay in Starbucks. This is a theory. Another possibility is occupation: if they are employees, it may be that they have a meeting; if a student, they may simply want to study. This is also a theory. We can verify our original observation by collecting data, and we can collect more data to test our theories. We can make some predictions from these two theories. The first is that people who have a higher income are likely to stay longer in Starbucks than those who are low income earners. Another prediction is that those who are studious are likely to stay longer in Starbucks. A prediction from a theory, like this one, is known as a hypothesis. We could test this hypothesis by conducting a survey. Imagine that we collected the data presented in Table 1.1.
Table 1.1 (not reproduced in these notes) tabulates the number of people staying in Starbucks according to their Income and Reason for Staying.
In total, 100 Starbucks consumers were asked about two variables, Income and Reason for Staying. Our first hypothesis is that those with higher income are more likely to stay longer in Starbucks. We can use summary measures (mean, median, and mode) to describe the table.
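Since Table 1.1 itself is not reproduced here, the sketch below shows, on invented counts, how data like it could be summarized and the income hypothesis checked in Python. scipy's chi-square test of independence is one standard way to test whether two categorical variables, such as income and staying longer, are associated; it is used here as an illustration rather than as the method the notes prescribe.

    import numpy as np
    from scipy.stats import chi2_contingency

    # Hypothetical contingency table for 100 Starbucks customers:
    # rows = income level, columns = stayed longer than an hour (yes/no).
    table = np.array([[35, 15],   # high income: 35 stayed, 15 did not
                      [20, 30]])  # low income:  20 stayed, 30 did not

    chi2, p_value, dof, expected = chi2_contingency(table)
    print(f"chi-square = {chi2:.2f}, p-value = {p_value:.4f}")
    # A small p-value (e.g., < 0.05) would support the hypothesis that
    # income and staying longer are associated.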

The data support our hypothesis. Therefore, my initial observation, that income and occupation are factors in staying longer in Starbucks, was verified by the data; my theory was then tested using specific hypotheses that were also verified using data. Data are very important. If particular data gathered seem to contradict the theory, this is called falsification: the act of disproving a hypothesis or theory.

The qualitative method permits a flexible and iterative approach. The value of qualitative research can best be understood by examining its characteristics. One of the primary advantages of qualitative research is that it is more open to the adjusting and refining of research ideas as an inquiry proceeds.
The researcher should not attempt to manipulate the research setting, but rather seeks to understand naturally occurring phenomena in their naturally occurring states. Inductive reasoning, as opposed to deductive reasoning, is common in qualitative research, along with content or holistic analysis in place of statistical analysis (Meyer et al., 1995).
You can utilize interview questionnaires, such as those commonly used in conducting an audit, in the data gathering process to obtain qualitative data.

In determining the research question, some of the points you need to consider are:
1. Be clear. It must be understandable to you and to others.
2. Be researchable. It should be capable of developing into a manageable research design, so data may be collected in relation to it.
3. Connect with established theory and research. There should be a literature on which you can draw to illuminate how your research question(s) should be approached.
4. Be neither too broad nor too narrow. It must contain a brief explanation of the narrowing process and how the research question, purpose statement, and hypothesis are interconnected.

Upon selecting a research problem, select a general topic that is interesting to you. Once identified, you will need to narrow it. Doing some exploratory reading is a big advantage, because it will help you to support the research you want to pursue and to gain additional insights in order to develop a research question, purpose statement and hypothesis, if applicable.

Data collection is defined as the procedure of collecting, measuring and analyzing accurate insights for research using standard validated techniques. A researcher can evaluate their hypothesis on the basis of collected data. In most cases, data collection is the primary and most important step for research, irrespective of the field of research. The approach to data collection differs between fields of study, depending on the required information. The most critical objective of data collection is ensuring that information-rich and reliable data are collected for statistical analysis so that data-driven decisions can be made for the research.
Module 6
We have identified in the previous unit that data collection is vital for testing theories. When we collect data, we need to decide on two things: (1) what to measure, and (2) how to measure it.
In research, the descriptive method is used to gather information about an existing condition. The purpose of employing this method is to describe the nature of a situation as it exists at the time of the study, and to explore the cause(s) of particular phenomena. A researcher opts to use this kind of research out of a desire to obtain first-hand data from the respondents, so as to formulate rational and sound conclusions and recommendations for the study.

Independent and Dependent Variables
To test hypotheses, we need to measure variables. Variables are just things that can change or vary, whether between people, locations, or even over time. Most hypotheses can be expressed in terms of two (2) variables: a proposed cause and a proposed outcome.
For example, if we take the scientific statement 'Coca-Cola is an effective spermicide', the proposed cause is 'Coca-Cola' and the proposed effect is dead sperm. Both the cause and the outcome are variables: for the cause, we could vary the type of drink, and for the outcome, the drinks will kill different amounts of sperm. The key to testing such statements is how to measure these variables.

Levels of Measurement
The relationship between what is being measured and the numbers that represent what is being measured is known as the level of measurement. Broadly speaking, variables can be categorical or continuous, and can have different levels of measurement.

Measurement Error
We have seen that to test hypotheses we need to measure variables, and it is important to measure them accurately. Ideally we want our measures to be calibrated so that values have the same meaning over time and across situations. Some variables can be measured directly (such as profit, weight or height); others are measured indirectly (such as self-report measures, questionnaires and computerized tasks).
There will often be a discrepancy between the numbers we use to represent the thing we are measuring and the actual value of the thing we are measuring (i.e., the value we would get if we could measure it directly). This discrepancy is known as measurement error.
For example, imagine that you know as an absolute truth that you weigh 83 kg. Then one day, at a physical check-up, the scale reads 80 kg: a difference of 3 kg introduced by the measurement tool. Measurement tools such as weighing scales produce a small amount of measurement error. Self-report measures also produce measurement error, because factors other than the one you are trying to measure will influence how people respond. For example, if someone asks whether you have stolen a ballpen from a classmate, would you tell the truth or lie?
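A quick simulation makes the weighing-scale example concrete. Assuming, purely for illustration, a true weight of 83 kg and a scale whose random error has a standard deviation of 2 kg:

    import numpy as np

    rng = np.random.default_rng(seed=1)

    true_weight = 83.0    # the value a direct, perfect measurement would give
    scale_error_sd = 2.0  # hypothetical spread of the scale's random error

    # Ten weigh-ins: each reading = true value + measurement error.
    readings = true_weight + rng.normal(0, scale_error_sd, size=10)

    print(np.round(readings, 1))  # individual readings vary around 83
    print(f"mean of readings: {readings.mean():.1f} kg")
    # Averaging repeated measurements reduces the impact of random error.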
Validity and Reliability
One way to try to ensure that measurement error is kept to a minimum is to determine the properties of the measure that give us confidence that it is doing its job properly.
Validity is the first property: whether an instrument actually measures what it sets out to measure.
- Criterion validity: whether the instrument is measuring what it claims to measure. In an ideal world, you could assess this by relating scores on your measure to real-world observations. For example, we could take an objective measure of how helpful lecturers were and compare these observations to students' ratings.
- Content validity: with self-report measures/questionnaires, we can also assess the degree to which individual items represent the construct being measured and cover the full range of the construct.
Reliability is the second property: whether an instrument can be interpreted consistently across different situations.
- Test-retest reliability: validity is a necessary but not sufficient condition of a measure. A second consideration is reliability, the ability of the measure to produce the same results under the same conditions. To be valid, the instrument must first be reliable. The easiest way to assess reliability is to test the same group of people twice: a reliable instrument will produce similar scores at both points in time. Sometimes, however, you will want to measure something that varies over time (e.g. exam scores, productivity rate). Statistical methods can also be used to determine reliability.
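As a sketch of one such statistical method, test-retest reliability is often quantified by correlating scores from two administrations of the same instrument. The scores below are hypothetical.

    import numpy as np

    # Hypothetical scores for the same 8 people tested twice, a week apart.
    time1 = np.array([12, 19, 15, 22, 9, 17, 14, 20], dtype=float)
    time2 = np.array([13, 18, 16, 21, 10, 18, 13, 19], dtype=float)

    # Pearson correlation between the two administrations; values close
    # to 1 indicate the instrument produces similar scores both times.
    r = np.corrcoef(time1, time2)[0, 1]
    print(f"test-retest reliability: r = {r:.2f}")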
Data collection procedures
Data collection is a methodical process of gathering and analyzing specific information to proffer solutions to relevant questions and evaluate the results. It focuses on finding out all there is to a particular subject matter. Data are collected to be further subjected to hypothesis testing, which seeks to explain a phenomenon. Hypothesis testing eliminates assumptions while making a proposition on the basis of reason.
For collectors of data, there is a range of outcomes for which the data are collected. But the key purpose for which data are collected is to put a researcher in a vantage position to make predictions about future probabilities and trends.
The core forms in which data can be collected are primary and secondary data. While the former is collected by a researcher through first-hand sources, the latter is collected by an individual other than the user.

Primary Data Collection
Primary data collection, by definition, is the gathering of raw data collected at the source. It is the process of collecting original data gathered by a researcher for a specific research purpose. It can be further analyzed in two segments: qualitative research and quantitative data collection methods.

Qualitative Research Method
The qualitative research method of data collection does not involve collecting data in numbers or data that need to be deduced through mathematical calculation; rather, it is based on non-quantifiable elements like the feelings or emotions of the researcher. An example of such a method is an open-ended questionnaire.
Quantitative Method
Quantitative methods are presented in numbers and require mathematical calculation to deduce. An example would be the use of a questionnaire with close-ended questions to arrive at figures to be calculated mathematically. Methods of correlation and regression, and the mean, mode and median, also fall here.

Secondary Data Collection
Secondary data collection, on the other hand, refers to the gathering of second-hand data collected by an individual who is not the original user. It is the process of collecting data that already exist, be it in already published books, journals and/or online portals. In terms of ease, it is much less expensive and easier to collect.
Your choice between primary and secondary data collection depends on the nature, scope and area of your research, as well as its aims and objectives.

Data Collection Methods

Interview
An interview is a face-to-face conversation between two individuals with the sole purpose of collecting relevant information to satisfy a research purpose.
1. Structured interviews - a verbally administered questionnaire. In terms of depth, it is surface level and is usually completed within a short period. For speed and efficiency, it is highly recommendable, but it lacks depth.
2. Semi-structured interviews - several key questions subsist which cover the scope of the areas to be explored. It allows a little more leeway for the researcher to explore the subject matter.
3. Unstructured interviews - an in-depth interview that allows the researcher to collect a wide range of information with a purpose. An advantage of this method is the freedom it gives the researcher to combine structure with flexibility, even though it is more time-consuming.

Questionnaires
This is the process of collecting data through an instrument consisting of a series of questions and prompts to receive a response from the individuals it is administered to. Questionnaires are designed to collect data from a group.
For clarity, it is important to note that a questionnaire isn't a survey; rather, it forms part of one. A survey is a process of data gathering involving a variety of data collection methods, including a questionnaire.
On a questionnaire, there are three kinds of questions used: fixed-alternative, scale, and open-ended, with each of the questions tailored to the nature and scope of the research.

Reporting
By definition, data reporting is the process of gathering and submitting data to be further subjected to analysis. The key aspect of data reporting is reporting accurate data, because inaccurate data reporting leads to uninformed decision making.

Existing Data
This is the introduction of new investigative questions in addition to, or other than, the ones originally used when the data was initially gathered. It involves adding measurement to a study or research. An example would be sourcing data from an archive.

Observation
This is a data collection method by which information on a phenomenon is gathered through observation. The observation could be accomplished as a complete observer, an observer as a participant, a participant as an observer, or a complete participant. This method is a key basis for formulating a hypothesis.

Focus Groups
The opposite of quantitative research, which involves numerical data, this data collection method focuses more on qualitative research. It falls under the primary category, for data based on the feelings and opinions of the respondents. This research involves asking open-ended questions to a group of individuals, usually ranging from 6 to 10 people, to provide feedback.

Combination Research
This method of data collection encompasses the use of innovative methods to enhance participation by both individuals and groups. Also under the primary category, it is a combination of interviews and focus groups while collecting qualitative data. This method is key when addressing sensitive subjects.

PPT PRESENTATION REFERENCE:

Conceptual Mapping (diagram not reproduced in these notes)

Data Collection
1. What to measure?
2. How to measure?

What to measure? (Initial Observation)
The researcher should not attempt to manipulate the research setting but rather seeks to understand naturally occurring phenomena in their naturally occurring states (Meyer et al., 1995).

Data Collection Procedures
1. Primary Data Collection
   a. Qualitative Research
   b. Quantitative Research
2. Secondary Data Collection

Data Collection Method
1. Interview
   a. Structured Interviews
   b. Semi-structured Interviews
   c. Unstructured Interviews
2. Questionnaires
   a. Fixed-Alternative (Multiple Choices)
   b. Scale
   c. Open-ended (respondents express their opinion)
