
WOLKITE UNIVERSITY

COLLEGE OF COMPUTING AND INFORMATICS

DEPARTMENT OF INFORMATION SYSTEMS

TITLE: Research Methods in Information Systems

Individual Assignment

NAME: BAYISA ABABA
ID NO: CIR/088/10

Submission Date: 2/3/2021

Submitted To: Mr. Musa A

1. Background History of Computing Research up to Now

Technology has been growing exponentially in recent years, and there has been a steadily increasing demand for bright graduates to come in and help transform areas ranging from data infrastructure to cyber security.

This history reaches back to Charles Babbage, who designed the Analytical Engine; it is this design on which the basic framework of today's computers is based.

Let us look at these changes one by one:

1983: The CD-ROM hit the market, able to hold 550 megabytes of pre-recorded data. That same
year, many computer companies worked to set a standard for these disks, making them able to be
used freely to access a wide variety of information.

1993: In an attempt to enter the handheld computer market, Apple releases the Newton. Called a "Personal Digital Assistant," it never performed the way Apple President John Sculley had hoped, and it was discontinued in 1998.

2011: Google releases the Chromebook, a laptop that runs on Google Chrome OS.

Also in 2011, the Nest Learning Thermostat emerges as one of the first Internet of Things devices, allowing remote access to a user's home thermostat from a smartphone or tablet. It also sends monthly power-consumption reports to help customers save on energy bills.

2012: On October 4, Facebook reaches 1 billion users. The same year, it acquires the image-sharing social networking application Instagram.

Also in 2012, the Raspberry Pi, a credit-card-sized single-board computer, is released, weighing only 45 grams.

2014: The University of Michigan Micro Mote (M3), the smallest computer in the world, is created. Three types were made available: two measured either temperature or pressure, and one could take images. Additionally, the Apple Pay mobile payment system is introduced.

2015: Apple releases the Apple Watch, which incorporated Apple’s iOS operating system and
sensors for environmental and health monitoring. Almost a million units were sold on the day of
its release.

This release was followed closely by Microsoft announcing Windows 10.


2016: The first reprogrammable quantum computer is created.

2019: Apple announces iPadOS, the iPad's very own operating system, to better support the
device as it becomes more like a computer and less like a mobile device.

So, what’s next?

I don't have the answer to what awaits us in regard to computers. One thing is for sure: to keep up with the world of tech, the growing need for cyber security, and our constant hunger for the next big thing, computers aren't going anywhere.

2. Framework for statistical and operational design

Statistical framework design

General concept:

 Description of a particular field of statistics in terms of its content and scope, actors and
units, concepts and definitions, classifications, relationships between elements and links
to other statistical frameworks.
 It may also include indicators, sources, methods and/or model surveys.

Operational design framework

 An operational design framework is a guide to a company's policies, goals, standards, procedures and training.
 The framework sets out the way the company does business and promotes a corporate
culture and identity.
 Each operational framework contains different elements.

3. Hypothesis testing
What Is Hypothesis Testing?

Hypothesis testing is an act in statistics whereby an analyst tests an assumption regarding a population parameter.

The purpose of hypothesis testing is to determine whether there is enough statistical evidence in favor of
a certain belief, or hypothesis, about a parameter.

Hypothesis testing is used to assess the plausibility of a hypothesis by using sample data.
Such data may come from a larger population, or from a data-generating process.

What does hypothesis mean?


 A hypothesis is a suggested solution for an unexplained occurrence that does not fit into currently accepted scientific theory.
 The basic idea of a hypothesis is that there is no pre-determined outcome.

The hypothesis test is used to evaluate the results from a research study in the following steps:

 A sample is selected from the population.
 The treatment is administered to the sample.
 After treatment, the individuals in the sample are measured.

The purpose of the hypothesis test is to decide between two explanations:

 The difference between the sample and the population can be explained by sampling error (there does not appear to be a treatment effect).
 The difference between the sample and the population is too large to be explained
by sampling error (there does appear to be a treatment effect).

What are the steps of hypothesis testing?

Step 1: Specify the Null Hypothesis.
Step 2: Specify the Alternative Hypothesis.
Step 3: Set the Significance Level (α).
Step 4: Calculate the Test Statistic and Corresponding P-Value.
Step 5: Draw a Conclusion.
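As an illustration, here is a minimal sketch of these steps in Python, assuming a one-sample t-test on made-up data (the sample values and the hypothesized mean of 50 are hypothetical, not from the text):

from scipy import stats

# Hypothetical sample measurements.
sample = [51.2, 49.8, 52.4, 50.9, 53.1, 48.7, 52.0, 51.5]

# Step 1: null hypothesis H0: the population mean equals 50.
# Step 2: alternative hypothesis H1: the population mean differs from 50.
# Step 3: set the significance level.
alpha = 0.05

# Step 4: calculate the test statistic and the corresponding p-value.
t_stat, p_value = stats.ttest_1samp(sample, popmean=50)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")

# Step 5: draw a conclusion.
if p_value < alpha:
    print("Reject H0: the sample mean differs significantly from 50.")
else:
    print("Fail to reject H0: it remains plausible given the data.")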

How Hypothesis Testing Works


 Statistical analysts test a hypothesis by measuring and examining a random sample of
the population being analyzed.
 In hypothesis testing, an analyst tests a statistical sample, with the goal of providing
evidence on the plausibility of the null hypothesis.
 All analysts use a random population sample to test two different hypotheses: the null
hypothesis and the alternative hypothesis.
 Because the two hypotheses are mutually exclusive, only one of them can be true.

Steps of Hypothesis Testing: A Four-Step Process


1. The first step is for the analyst to state the two hypotheses so that only one can be right.
2. The next step is to formulate an analysis plan, which outlines how the data will be
evaluated.
3. The third step is to carry out the plan and physically analyze the sample data.
4. The fourth and final step is to analyze the results and either reject the null hypothesis, or
state that the null hypothesis is plausible, given the data.

4. Measures of association

The measures of association refer to a wide variety of coefficients (including bivariate correlation and regression coefficients) that measure the strength and direction of the relationship between variables; these measures of strength, or association, can be described in several ways, depending on the analysis.

There are certain points that a researcher should know in order to better understand the measures of
statistical association.
 First, the researcher should know that measures of association are not the same as measures of statistical significance.
 It is possible for a weak association to be statistically significant; it is also possible for a strong association to not be statistically significant.

 For measures of association, a value of zero signifies that no relationship exists.
 In a correlation analysis, if the coefficient (r) has a value of one, it signifies a perfect relationship between the variables of interest. In regression analyses, if the standardized beta weight (β) has a value of one, it also signifies a perfect relationship between the variables of interest. (This is illustrated in the sketch below.)
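For instance, a minimal sketch computing the correlation coefficient r with scipy, using made-up data (the variable names are hypothetical):

from scipy import stats

hours_studied = [1, 2, 3, 4, 5, 6, 7, 8]
exam_score = [52, 55, 61, 64, 70, 71, 78, 83]

# r near 1 indicates a strong positive association;
# a value of 0 would signify that no linear relationship exists.
r, p_value = stats.pearsonr(hours_studied, exam_score)
print(f"r = {r:.3f}, p = {p_value:.4f}")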

Measures of Association

A measure of association quantifies the relationship between exposure and disease among the two groups. Exposure is used loosely here to mean exposure to foods, mosquitoes, a partner with a sexually transmissible disease, a toxic waste dump, and the like.

The measures of association described in the following section compare disease occurrence among one group with disease occurrence in another group. Examples of measures of association include the risk ratio (relative risk), rate ratio, odds ratio, and proportionate mortality ratio. The most commonly used measure of association between dietary intake and disease risk is the relative risk.
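To make these concrete, here is a small sketch computing the risk ratio and odds ratio from a hypothetical 2x2 table (all counts are made up for illustration):

# Hypothetical 2x2 table:
#                 diseased   not diseased
# exposed           a = 30       b = 70
# unexposed         c = 10       d = 90
a, b = 30, 70  # exposed group
c, d = 10, 90  # unexposed group

risk_exposed = a / (a + b)      # 30/100 = 0.30
risk_unexposed = c / (c + d)    # 10/100 = 0.10

risk_ratio = risk_exposed / risk_unexposed  # 3.00: exposed risk is 3x higher
odds_ratio = (a * d) / (b * c)              # about 3.86

print(f"risk ratio = {risk_ratio:.2f}")
print(f"odds ratio = {odds_ratio:.2f}")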

Measures of association provide a means of summarizing the size of the association between two
variables.

 Most measures of association are scaled so that they reach a maximum numerical value
of 1 when the two variables have a perfect relationship with each other.
 The researcher calculates the observed value of the measure of association, and if the
measure is different enough from 0, the test shows that there is a significant relationship
between the two variables.
Measures of association based on the chi-square statistic

 Some measures of association are based on the chi-square statistic. That statistic is a function not only of the size of the relationship between the two variables, but also of the sample size and the number of rows and columns in the table. It can therefore be adjusted in various ways in order to produce a measure of association (see the sketch after this list).
 Measure of association, in statistics, any of various factors or coefficients used to
quantify a relationship between two or more variables. Measures of association are used
in various fields of research but are especially common in the areas of epidemiology and
psychology, where they frequently are used to quantify relationships between exposures
and diseases or behaviors.
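One common chi-square-based adjustment is Cramér's V, sketched below on a made-up contingency table (the counts are hypothetical):

import numpy as np
from scipy import stats

# Hypothetical 2x2 contingency table of observed counts.
table = np.array([[20, 30],
                  [40, 10]])

chi2, p, dof, expected = stats.chi2_contingency(table)
n = table.sum()                      # total sample size
k = min(table.shape) - 1             # the smaller of (rows - 1, columns - 1)
cramers_v = np.sqrt(chi2 / (n * k))  # scaled to lie between 0 and 1
print(f"chi2 = {chi2:.2f}, Cramer's V = {cramers_v:.3f}")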
5. Exploring, displaying, and examining data

Definition: Data exploration

 Data exploration refers to the initial step in data analysis in which data analysts use data
visualization and statistical techniques to describe dataset characterizations, such as size,
quantity, and accuracy, in order to better understand the nature of the data.
 Data exploration techniques include both manual analysis and automated data exploration
software solutions that visually explore and identify relationships between different data
variables, the structure of the dataset, the presence of outliers, and the distribution of data
values in order to reveal patterns and points of interest, enabling data analysts to gain
greater insight into the raw data.

What Can I Use Data Exploration For?

 In any situation where you have a massive set of information, data exploration can help cut it
down to a manageable size and focus efforts to optimize your analysis.
 Most data analytics software includes visualization tools and charting features that make
exploration at the outset significantly easier, helping reduce data by rooting out information that
isn’t required, or which can distort results in the long run.

Data Exploration Tools

Manual data exploration methods entail either writing scripts to analyze raw data or manually
filtering data into spreadsheets. Graphical displays of data, such as bar charts and scatter plots,
are valuable tools in visual data exploration.

A popular tool for manual data exploration is the Microsoft Excel spreadsheet, which can be used to create basic charts for data exploration, to view raw data, and to identify correlations between variables.
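The same kind of exploration can also be scripted. Here is a minimal sketch with pandas and matplotlib (the file name and column names are hypothetical):

import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("survey_data.csv")  # hypothetical dataset

print(df.shape)         # size of the dataset
print(df.describe())    # summary statistics for numeric columns
print(df.isna().sum())  # missing values per column

# Visual exploration: a scatter plot to eyeball the relationship
# between two variables, and a histogram to see a distribution.
df.plot.scatter(x="age", y="income")
df["income"].plot.hist(bins=20)
plt.show()

# Correlation between numeric variables, much as one would in Excel.
print(df.corr(numeric_only=True))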

Why Data Exploration Is Important

Humans process visual data better than numerical data; therefore, it is extremely challenging for data scientists and data analysts to assign meaning to thousands of rows and columns of data points and to communicate that meaning without any visual components.

Data visualization in data exploration leverages familiar visual cues such as shapes, dimensions,
colors, lines, points, and angles so that data analysts can effectively visualize and define the
metadata, and then perform data cleansing.

Data Examination vs Data Exploration

 Data examination and data exploration are effectively the same process. Data
examination assesses the internal consistency of the data as a whole for the purpose of
confirming the quality of the data for subsequent analysis.
 Internal consistency reliability is an assessment based on the correlations between
different items on the same test.
 This assessment gauges the reliability of a test or survey that is designed to measure the same construct across different items (one common such measure is sketched below).
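A minimal sketch of one widely used internal-consistency measure, Cronbach's alpha (alpha is not named in the text above and is offered here as an illustration; the response data are made up):

import numpy as np

# Hypothetical responses: rows are respondents, columns are items
# on the same test, all intended to measure the same construct.
scores = np.array([[4, 5, 4, 5],
                   [3, 3, 4, 3],
                   [5, 5, 5, 4],
                   [2, 3, 2, 3],
                   [4, 4, 5, 5]])

k = scores.shape[1]                           # number of items
item_vars = scores.var(axis=0, ddof=1).sum()  # sum of per-item variances
total_var = scores.sum(axis=1).var(ddof=1)    # variance of the total scores

alpha = (k / (k - 1)) * (1 - item_vars / total_var)
print(f"Cronbach's alpha = {alpha:.3f}")  # values near 1 suggest high consistency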

Displaying data

Displaying data in research is the last step of the research process.

 It is important to display data accurately because it helps in presenting the findings of the
research effectively to the reader.
 The purpose of displaying data in research is to make the findings more visible and make
comparisons easy.
 When the researcher presents the research in front of the research committee, the committee will easily understand the findings of the research from the displayed data.
 The readers of the research will also be able to understand it better. Without displayed data, the findings look too scattered and the reader cannot make inferences.

There are basically two ways to display data: tables and graphs.

 The tabulated data and the graphical representation should both be used to give a more accurate picture of the research.
 In quantitative research it is very necessary to display data; in qualitative research, on the other hand, the researcher decides whether there is a need to display data or not.
 The researcher can use appropriate software to help tabulate and display the data in the form of graphs.
 Microsoft Excel is one such example; it is a user-friendly program that you can use to help display the data (a small scripted alternative is sketched below).
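To illustrate the two forms, here is a short sketch that prints a table and draws a bar chart of the same made-up findings (all names and numbers are hypothetical):

import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical summary of findings.
results = pd.DataFrame({
    "group": ["Control", "Treatment"],
    "mean_score": [62.4, 71.8],
})

# Tabulated data.
print(results.to_string(index=False))

# Graphical representation of the same findings.
results.plot.bar(x="group", y="mean_score", legend=False)
plt.ylabel("Mean score")
plt.title("Mean score by group")
plt.show()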

Four phases are outlined:

1. The first step is to clarify the metadata associated with the information you are about to
examine.
 This means that you need to know some basic things about the way the data were
collected.
 Use this information to help write the “methodology” section of your report (of course
you will need to draw upon additional information to provide details about the
methodology when you are actually writing this section).
 However, this form will provide a good place to start. Some of this information may
already be recorded in the codebook.

2. The next step is to identify which variables you are interested in examining.

 This is a critical step because the temptation is to look at the relationship between
everything. Statistically this is considered “fishing” for good results.
 Avoid fishing at all costs. It wastes time and dramatically increases the chance of spurious findings (Type I errors).

3. The third step is to select the appropriate statistics for the variables you want to examine.

 This procedure is dependent on a number of characteristics of the data.

4. The final step is to create tables of your findings.

 These tables along with appropriate graphs or charts will then be inserted into your report
along with your interpretations of the results.

Examining data in research

To improve your data analysis skills and simplify your decisions, execute these five steps in your
data analysis process:

Step 1: Define Your Questions

Step 2: Set Clear Measurement Priorities

Step 3: Collect Data

Step 4: Analyze Data

Step 5: Interpret Results
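As a rough illustration, these five steps might map onto a short script like the following (the file name, column names, and question are all hypothetical):

import pandas as pd

# Step 1: define the question, e.g. "Do treated participants score higher?"
# Step 2: set measurement priorities: the 'score' column, split by 'group'.

# Step 3: collect the data (here, loading an assumed CSV file).
df = pd.read_csv("study_results.csv")

# Step 4: analyze the data.
summary = df.groupby("group")["score"].agg(["mean", "std", "count"])
print(summary)

# Step 5: interpret the results: compare the group means and decide whether
# a formal hypothesis test (as in section 3 above) is warranted.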
