
CH1: Introduction

http://www.chris-kimble.com/Courses/World_Med_MBA/Types-of-Information-System.html

Models of DSS
A decision support system consists of three main components: a database, a software
system, and a user interface. The software system consists of various mathematical and analytical
models that are used to analyze complex data, thereby producing the required information. A
model predicts the output on the basis of different inputs or different conditions, or finds out the
combination of conditions and inputs required to produce the desired output. A decision
support system may comprise different models, where each model performs a specific
function. The selection of models to include in a decision support system
depends on user requirements and the purposes of the DSS. Some of the commonly used
mathematical and statistical models are as follows:
● Statistical Models: They contain a wide range of statistical functions, such as mean,
median, mode, and deviation. These models are used to establish relationships between
the occurrence of an event and various factors related to that event. They can, for example,
relate the sale of a product to differences in area, income, season, or other factors. In addition
to statistical functions, they contain software that can analyze a series of data to project
future outcomes.
● Sensitivity Analysis Models: These are used to provide answers to what-if situations
that occur frequently in an organization. During the analysis, the value of one variable is
changed repeatedly and the resulting changes in other variables are observed. The sale of a
product, for example, is affected by different factors such as price, advertising expenses,
number of sales staff, and production levels. Using a sensitivity model, the price of
the product can be changed (increased or decreased) repeatedly to ascertain the sensitivity
of the different factors and their effect on sales volume. Excel spreadsheets and Lotus 1-2-3
are often used for such analyses.
● Optimization Analysis Models: They are used to find the optimum value for a target variable
under given circumstances. They are widely used for making decisions related to the
optimum utilization of resources in an organization. During optimization analysis, the
values of one or more variables are changed repeatedly, subject to the specified
constraints, until the best value for the target variable is found. They can, for example,
determine the highest level of production that can be achieved by varying job
assignments among workers, given the constraint that some workers are skilled and their job
assignments cannot be changed. Linear programming techniques and the Solver tool in
Microsoft Excel are often used for such analyses.
● Forecasting Models: They use various forecasting tools and techniques, including
regression models, time series analysis, and market research methods, to make
statements about the future or to predict something in advance. They provide information
that helps in analyzing business conditions and making future plans. These systems
are widely used for forecasting sales.
● Backward Sensitivity Analysis Models: Also known as goal seeking analysis, the
technique followed in these models is the opposite of the technique applied in sensitivity
analysis models. Instead of changing the value of a variable repeatedly to see how it
affects other variables, goal seeking analysis sets a target value for a variable and then
repeatedly changes other variables until the target value is achieved. To increase the
production level by 40 percent using backward sensitivity analysis, for example, the
target value for the production level is set first, and the required changes are then made
to other factors, such as the amount of raw material, machinery and tools, number of
production staff, etc., to achieve the target production level.
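Two of the model types above, sensitivity analysis and goal seeking, can be illustrated with a short sketch. All figures and the linear demand/production relationships below are hypothetical, chosen only to show the mechanics:

```python
# Sensitivity analysis: change one input (price) repeatedly and observe
# the effect on another variable (sales volume).
def sales_volume(price):
    return max(1000 - 40 * price, 0)   # each unit of price costs 40 sales

for price in (5, 10, 15, 20):
    print(f"price={price:>2}  units={sales_volume(price):>4}  "
          f"revenue={price * sales_volume(price)}")

# Goal seeking (backward sensitivity analysis): fix a target output first,
# then adjust an input until the target is reached.
def production(workers):
    return workers * 50                # hypothetical output per worker

workers = 100
target = production(workers) * 1.40    # raise current output by 40%
while production(workers) < target:
    workers += 1                       # repeatedly change the other variable
print(workers, production(workers))    # 140 workers meet the 7000-unit target
```

The same loops are what a spreadsheet's what-if table and Goal Seek feature automate.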

CH2: IT Infrastructure

Meaning & Types of Modulation

Modulation is a process through which audio, video, image or text information is added to an
electrical or optical carrier signal to be transmitted over a telecommunication or electronic
medium. Modulation enables the transfer of information on an electrical signal to a receiving
device that demodulates the signal to extract the blended information.
Modulation is primarily used in telecommunication technologies that require the transmission of
data via electrical signals. It is considered the backbone of data communication because it
enables the use of electrical and optical signals as information carriers. Modulation is achieved
by altering the periodic waveform of the carrier, that is, by varying its amplitude, frequency
or phase. Modulation has three different types:
1. Amplitude Modulation (AM): The amplitude of the carrier wave is modulated.
Amplitude modulation is a form of modulation used for radio transmissions, both for broadcasting
and for two-way radio communication applications. Although it is one of the earliest forms of
modulation, it is still used today, mainly for long, medium and short wave broadcasting and for
some aeronautical point-to-point communications.
2. Frequency Modulation (FM): The frequency of the carrier wave is modulated.
While changing the amplitude of a radio signal is the most obvious method of modulating it, it is
by no means the only way. It is also possible to change the frequency of a signal to give
frequency modulation, or FM. Frequency modulation is widely used on frequencies above 30
MHz, and it is particularly well known for its use in VHF FM broadcasting.
Although it may not be quite as straightforward as amplitude modulation, frequency
modulation, FM, offers some distinct advantages. It is able to provide near interference-free
reception, and it was for this reason that it was adopted for VHF sound broadcasts. These
transmissions could offer high-fidelity audio, and for this reason frequency modulation is far
more popular than the older transmissions on the long, medium and short wave bands.
In addition to its widespread use for high-quality audio broadcasts, FM is also used for a variety
of two-way radio communication systems. Whether for fixed or mobile radio communication
systems, or for use in portable applications, FM is widely used at VHF and above.

3. Phase Modulation (PM): The phase of the carrier is modulated.


As the name implies, phase modulation, PM, uses variations in phase to carry the modulation.
As phase and frequency are interrelated, this relationship carries forward into phase modulation,
which has many commonalities with frequency modulation. As a result, the term angle
modulation is often used to describe both.
Phase modulation, PM, is sometimes used for analogue transmission, but it has become the basis
for modulation schemes used for carrying data. Phase shift keying, PSK, is widely used for data
communication.
Phase modulation is also the basis of a form of modulation known as quadrature amplitude
modulation, where both phase and amplitude are varied to provide additional capabilities.
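For a single-tone message signal, the three schemes can be written out numerically. The sketch below uses illustrative parameter values (a 1 kHz carrier, a 100 Hz message, modulation index 0.5) chosen for the example, not values from the text:

```python
import math

fc, fm = 1000.0, 100.0   # carrier and message frequencies in Hz (illustrative)
m = 0.5                  # modulation index

def am(t):   # AM: the carrier's amplitude follows the message
    return (1 + m * math.cos(2 * math.pi * fm * t)) * math.cos(2 * math.pi * fc * t)

def fmod(t): # FM: the carrier's phase accumulates the integrated message tone
    return math.cos(2 * math.pi * fc * t + m * math.sin(2 * math.pi * fm * t))

def pm(t):   # PM: the carrier's phase follows the message directly
    return math.cos(2 * math.pi * fc * t + m * math.cos(2 * math.pi * fm * t))

# At t = 0 the AM envelope is at its maximum, 1 + m
print(am(0.0), fmod(0.0), pm(0.0))
```

Note how FM and PM differ only in whether the message or its integral sits inside the carrier's phase term, which is why both are grouped under angle modulation.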

A modem is a common example/implementation of a modulation technique, in which the data is
modulated onto electrical signals and transmitted over telephone lines. It is later demodulated to
receive the data.
Cloud Services
1. The term cloud services is a broad category that encompasses the myriad IT resources
provided over the internet.
2. The first sense of cloud services covers a wide range of resources that a service provider
delivers to customers via the internet, which, in this context, has broadly become known
as the cloud.
3. Characteristics of cloud services include
a. self-provisioning and elasticity; that is, customers can provision services on an
on-demand basis and shut them down when no longer necessary.
b. In addition, customers typically subscribe to cloud services, under a monthly billing
arrangement, for example, rather than pay for software licenses and supporting server and
network infrastructure upfront.
c. In many transactions, this approach makes a cloud-based technology an operational
expense, rather than a capital expense.
d. From a management standpoint, cloud-based technology lets organizations access
software, storage, compute and other IT infrastructure elements without the burden of
maintaining and upgrading them.

The usage of cloud services has become closely associated with common cloud offerings, such
as software as a service (SaaS), platform as a service (PaaS) and infrastructure as a service
(IaaS).
SaaS is a software distribution model in which applications are hosted by a vendor or service
provider and made available to customers over a network, typically the internet. Examples
include G Suite -- formerly Google Apps -- Microsoft Office 365, Salesforce and Workday.
PaaS refers to the delivery of operating systems and associated services over the internet without
downloads or installation. The approach lets customers create and deploy applications without
having to invest in the underlying infrastructure. Examples include Amazon Web Services'
Elastic Beanstalk, Microsoft Azure -- which refers to its PaaS offering as Cloud Services -- and
Salesforce's App Cloud.
IaaS involves outsourcing the equipment used to support operations, including storage, hardware,
servers and networking components, all of which are made accessible over a network. Examples
include Amazon Web Services, IBM Bluemix and Microsoft Azure. SaaS, PaaS and IaaS are
sometimes referred to collectively as the SPI model.

Server Infrastructure
Every website, web application, and mobile application is fueled by a stack of back-end
technologies—a combination of hardware and software that makes a site available to end users.
A traditional back-end stack has a database, a server, an operating system, and applications
written in server-side scripting languages.
Modern server architecture isn’t limited to physical machines in data centers—widespread,
affordable access to cloud computing takes aspects of back-end infrastructure (or, entire
applications) and hosts them off-site in the cloud. The most popular server architecture approach
is a hybrid approach—with part of the server workload hosted on-site, and other parts hosted in
the cloud.
1. The server acts as the lifeblood of the network.
2. These high-powered computers provide shared resources that networks need to run, including
file storage, security and encryption, databases, email, and web services.
3. A server can house your website files, your database, and your server-side software.
4. It also communicates with the network, processing requests from browsers, executing files,
processing data, and sending all of the requested information back to a browser.
5. The server provides both web and application services, and shared network resources.
6. Server architecture often involves a layered approach, leveraging dedicated servers, virtualized
servers, application servers, and the cloud.
7. Every server has an operating system and server software that gives it all the utilities it needs to
run.

1. File Server
In the client/server model, a file server is a computer responsible for the central storage and
management of data files so that other computers on the same network can access the files.
A file server allows users to share information over a network without having to physically
transfer files by floppy diskette or some other external storage device. Any computer can be
configured to be a host and act as a file server.
In its simplest form, a file server may be an ordinary PC that handles requests for files and sends
them over the network.
In a more sophisticated network, a file server might be a dedicated network-attached storage
(NAS) device that also serves as a remote hard disk drive for other computers, allowing anyone
on the network to store files on it as if it were their own hard drive.

2. Exchange Server
It is Microsoft's email, calendaring, contact, scheduling and collaboration platform. In the 1990s,
email evolved into a business-critical application, leading to the development of user-friendly
enterprise solutions with improved features and connectivity. Microsoft Exchange Server 2013
allows a user to deliver email, contacts and calendars to a PC, mobile device or browser. Exchange
Server features include:
1. Outlook Web App: Helps users access voicemail, email, SMS texts, instant messaging (IM)
and more via standard browsers
2. Exchange ActiveSync: Allows mobile users to access a universal inbox with voicemail, email,
IM and smartphone messages
3. Retention, Discovery and Email Archiving: Help reduce expenditures and simplify the
maintenance of business communication processes
4. Backup and Disaster Recovery: Features a unified solution for disaster recovery and backup by
offering automatic, quick, database-level recovery from server, database and network failures
5. Deployment Flexibility: Can be deployed in the cloud, on premises or both
6. Sensitive Content Monitoring: Can be used to track sensitive email content and prevent illegal
content distribution
7. Voicemail: Provides users with single-inbox access to email and voicemail, both of which can
be managed from a single platform
8. Advanced Protection: Employs several integrated encryption and anti-spam technologies and
sophisticated anti-virus solutions
9. Always-On: Facilitates quicker failover times and multiple-volume support, as well as a
monitoring system that enables automated failure recovery
10. Exchange Administration Center: Allows administrators to delegate server permissions and
access based on job function without granting total access to the management interfaces or the
larger enterprise

3. Application Server
A server program in a computer in a distributed network that provides the business logic for an
application program. It is frequently viewed as part of a three-tier application, consisting of a
graphical user interface (GUI) server, an application (business logic) server, and a database and
transaction server. It divides an application into:
•A first tier: the front-end, Web browser-based graphical user interface, usually at a personal
computer or workstation
•A middle tier: the business logic application or set of applications, possibly on a local area
network or intranet server
•A third tier: the back-end database and transaction server, sometimes on a mainframe or large
server
The application server is the middleman between browser-based front-ends and back-end
databases and legacy systems.

4. Database Server
It is similar to a data warehouse in that it is where a website stores or maintains its data and
information. A database server is a computer in a LAN that is dedicated to database storage and
retrieval. The database server holds the Database Management System (DBMS) and the
databases. A database server can be defined as a server dedicated to providing database services;
such a server runs the database software. A database server is useful for organizations that have
a lot of data to deal with on a regular basis. If you have a client-server architecture where the
clients need to process data frequently, it is better to work with a database server. Some
organizations use a file server to store and process data, but a database server is much more
efficient than a file server.

5. Proxy Server
•The word proxy means "to act on behalf of another," and a proxy server acts on behalf of the
user. All requests from clients to the Internet go to the proxy server first.
•A proxy server is a computer that functions as an intermediary between a web browser (such as
Internet Explorer) and the Internet. Proxy servers help improve web performance by storing a
copy of frequently used web pages.
The proxy evaluates the request, and if allowed, re-establishes it on the outbound side to the
Internet.
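The caching behaviour described above can be sketched as a dictionary keyed by URL. The fetch function and URL below are hypothetical stand-ins for real network requests:

```python
# Toy illustration of proxy caching: client requests go to the proxy first;
# pages already in the cache are served without going out to the Internet.
cache = {}
fetch_count = 0

def origin_fetch(url):
    """Stand-in for an outbound request to the real web server."""
    global fetch_count
    fetch_count += 1
    return f"<html>content of {url}</html>"

def proxy_get(url):
    if url not in cache:             # cache miss: go out to the Internet
        cache[url] = origin_fetch(url)
    return cache[url]                # cache hit: serve the stored copy

proxy_get("http://example.com/")     # first request: fetched from origin
proxy_get("http://example.com/")     # repeat request: served from cache
print(fetch_count)                   # only one outbound fetch was made
```

A real proxy also evaluates each request against access rules and expires cached copies, but the miss-then-store pattern is the core of the performance benefit.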

Advantages of a Proxy Server

A proxy server assigns a temporary address to all the data that passes through it during internet
access, so it does not reveal the true identity of the computer. When you access the internet,
many cookies, scripts and other programs are designed to track your IP address; by using a
proxy server you can hide your address from such elements.
Network Topologies
Network topology refers to the arrangement of the computers on the network, or the shape of the
network. Topology describes where the cables are run and where the workstations, nodes,
routers, and gateways are located.

1. Star Network
A star network is a local area network (LAN) in which all nodes (workstations or other devices)
are directly connected to a common central computer. Every workstation is indirectly connected
to every other through the central computer. In some star networks, the central computer can also
operate as a workstation.

•Advantages:
1. Minimal line cost, as only n-1 lines are required for connecting n nodes.
2. If any of the local computers fails, the remaining portion of the network is unaffected.
3. Transmission delay between two nodes does not increase when new nodes are added to the
network.

•Disadvantages:
1. The system depends crucially on the central node. If the host fails, the entire network fails.
2. If the traffic between workstations is high, an undue burden is placed on the central node.

2. Ring Network
A ring network is a local area network (LAN) in which the nodes (workstations or other devices)
are connected in a closed loop configuration. Adjacent pairs of nodes are directly connected.
Other pairs of nodes are indirectly connected, the data passing through one or more intermediate
nodes.
•Advantages:
1. It works well where there is no central-site computer system.
2. It is more reliable than a star network because communication is not dependent on a single
host computer.
•Disadvantages:
1. Communication delay is directly proportional to the number of nodes in the network.
2. Breakdown of any one station on the ring can disable the entire network.
3. A ring network requires more complicated software than a star network.

3. Bus Network
Bus topology is a network type in which every computer and network device is connected to a
single cable. When it has exactly two endpoints, it is called a linear bus topology.

Features of Bus Topology
1. It transmits data in only one direction.
2. Every device is connected to a single cable.
Advantages of Bus Topology
1. It is cost effective.
2. It requires less cable than other network topologies.
3. It is used in small networks.
4. It is easy to understand.
5. It is easy to expand by joining two cables together.
Disadvantages of Bus Topology
1. If the cable fails, the whole network fails.
2. If network traffic is heavy or there are many nodes, the performance of the network decreases.
3. The cable has a limited length.
4. It is slower than the ring topology.

4. Mesh
It is a type of physical network design where all the devices in a network are connected to each
other with many redundant connections.

Each node has a point-to-point connection to the other nodes or devices; all the network nodes
are connected to each other. A mesh has n(n-1)/2 physical channels to link n devices.
There are two techniques to transmit data over a mesh topology:
1. Routing
2. Flooding
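The link counts quoted for the different topologies (n-1 lines for a star, n(n-1)/2 channels for a full mesh) are easy to check with a few small functions:

```python
# Number of physical links needed to connect n devices under each topology.
def star_links(n):
    return n - 1                 # every node wired to the central hub

def ring_links(n):
    return n                     # nodes form a closed loop

def mesh_links(n):
    return n * (n - 1) // 2      # every pair of nodes directly connected

for n in (4, 5, 10):
    print(n, star_links(n), ring_links(n), mesh_links(n))
```

The quadratic growth of mesh_links is why full meshes are reserved for small, reliability-critical networks, while star and ring scale linearly in cabling.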
5. Hybrid
A hybrid topology is a mixture of two or more different topologies. For example,
if one department in an office uses a ring topology and another uses a star topology,
connecting these topologies results in a hybrid topology (ring topology and star topology). A
tree topology, a common hybrid, contains zero or more nodes linked together in a hierarchical
fashion; the topmost node is called the root. Client/server networks may use a combination of
the star, ring, and bus approaches.

A star network is more centralized, while a bus network is more decentralized.

Stages of ERP evolution


Enterprise resource planning (ERP) has evolved as a strategic tool, the outcome of over four
decades of continuous improvement to the techniques available for managing business more
efficiently, together with developments and inventions in the field of information technology.

Material Requirement Planning (MRP)


MRP was the fundamental concept of production management and control in the mid-1970s and
is considered the first stage in the evolution of ERP. Assembly operations involving thousands of
parts, such as automobile manufacturing, led to large inventories. The need to bring down the
large inventory levels associated with these industries led to the early MRP systems that planned
order releases. Such planned order releases ensured proper time phasing and accurate planning
of the sub-assembly items, taking into account the complex sub-assembly to assembly
relationships characterized by the Bill of Materials.
Manufacturing Resources Planning II (MRP- II)

A natural evolution from the first-generation MRP systems was the manufacturing planning
system MRP II, which addressed the entire manufacturing function and not just a single task
within it. MRP II went beyond computation of the materials requirement to include loading and
scheduling. MRP II systems could determine whether a given schedule of production was
feasible, not merely from the point of view of material availability but also from that of other
resources.

Enterprise Resource Planning (ERP)

The nineties saw unprecedented global competition, customer focus and shortened product life
cycles. To respond to these demands corporations had to move towards agile (quick moving)
manufacturing of products, continuous improvements of process and business process
reengineering. This called for integration of manufacturing with other functional areas including
accounting, marketing, finance and human resource development.

Activity-based costing would not be possible without the integration of manufacturing and
accounting. Mass customization of manufacturing needed integration of marketing and
manufacturing. Flexible manufacturing with people empowerment necessitated integration of
manufacturing with the HRD function. In a sense, the 1990s truly called for the integration of all
the functions of management. ERP systems are such integrated information systems, built to
meet the information and decision needs of an enterprise spanning all the functions of
management.

Extended ERP (E-ERP)

Further developments in the enterprise resource planning system concept have led to the
evolution of extended ERP (E-ERP), or web-enabled ERP. With globalization on one hand and
massive development in internet technology on the other, the need for web-based IT solutions
was felt. Thus E-ERP is a development in the field of ERP which uses the technology of the
Internet and World Wide Web (WWW) to facilitate the functions of an organization over the web.

Enterprise Resource Planning II (ERP- II)

ERP II is the next step beyond E-ERP. It is a software package that strengthens the
original ERP package by including capabilities like customer relationship management,
knowledge management, workflow management and human resource management. It is a
web-friendly application and thus addresses the issue of multiple office locations.

The most recent evolution in ERP is cloud hosted solutions, where clients can get up and running
on cloud hosted ERP solutions within a short period of time and at a much lower or affordable
cost.

CHAPTER 3- DECISION MAKING

Intelligence Phase
This is the first step of the decision-making process. In this step the decision-maker
identifies/detects the problem or opportunity. A problem in the managerial context is anything
that is not according to plan, rule or standard. An example of a problem is an HR manager
detecting unusually high attrition among workers for the current month.
Opportunity seeking, on the other hand, is the identification of a promising circumstance that
might lead to better results. An example of identifying an opportunity is a marketing manager
learning that two of his competitors will shut down operations for some reason in the next three
months; demand being constant, this means he will be able to sell more in the market.
Thus, we see that either in the case of a problem or for the purpose of opportunity seeking the
decision-making process is initiated and the first stage is the clear understanding of the stimulus
that triggers this process. So, if a problem/opportunity triggers this process then the first stage
deals with the complete understanding of the problem/opportunity. Intelligence phase of
decision-making process involves:
Problem Searching: To search for a problem, actual conditions are compared to some standard.
The differences are measured and evaluated to determine whether a problem exists.
Problem Formulation: When the problem is identified, there is always a risk of solving the
wrong problem. In problem formulation, establishing relations with some problem solved earlier,
or using an analogy, proves quite useful.
Design Phase
Design is the process of designing solution outlines for the problem. Alternative solutions are
designed to solve the same problem, and each alternative is evaluated after gathering data
about it. The evaluation is done on the basis of criteria that identify the positive and
negative aspects of each solution. Quantitative tools and models are used to arrive at these
solutions. At this stage the solutions are only outlines of actual solutions, meant for
analysis of their suitability alone. A lot of creativity and innovation is required to design
solutions.
Choice Phase
It is the stage in which the possible solutions are compared against one another to find out the
most suitable solution. The 'best' solution may be identified using quantitative tools like decision
tree analysis or qualitative tools like the six thinking hats technique, force field analysis, etc.
This is not as easy as it sounds, because each solution presents a scenario and the problem itself
may have multiple objectives, making the choice a very difficult one. Uncertainty about the
outcomes and scenarios also makes the choice of a single solution difficult.
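One quantitative tool for the choice phase is an expected-value comparison: each alternative's possible outcomes are weighted by their probabilities, and the alternative with the best expected payoff is chosen. The alternatives, payoffs and probabilities below are hypothetical:

```python
# Each alternative maps its possible payoffs to their probabilities.
alternatives = {
    "expand plant": {200_000: 0.4, -50_000: 0.6},
    "outsource":    {80_000: 0.7, 20_000: 0.3},
    "do nothing":   {10_000: 1.0},
}

def expected_value(outcomes):
    # Weight each payoff by its probability and sum
    return sum(payoff * p for payoff, p in outcomes.items())

best = max(alternatives, key=lambda a: expected_value(alternatives[a]))
print(best, expected_value(alternatives[best]))
```

This is the arithmetic behind a decision tree: each branch's payoff times its probability, summed at the decision node.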

Conditions/Environments of Uncertainty:

Managers make problem‐solving decisions under three different conditions: certainty, risk, and
uncertainty. All managers make decisions under each condition, but risk and uncertainty are
common to the more complex and unstructured problems faced by top managers.

Decisions are made under the condition of certainty when the manager has perfect knowledge of
all the information needed to make a decision. This condition is ideal for problem solving. The
challenge is simply to study the alternatives and choose the best solution.

When problems tend to arise on a regular basis, a manager may address them through standard or
prepared responses called programmed decisions. These solutions are already available from
past experiences and are appropriate for the problem at hand. A good example is the decision to
reorder inventory automatically when stock falls below a determined level. Today, an increasing
number of programmed decisions are being assisted or handled by computers using decision‐
support software.
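The automatic-reorder example can be sketched as a simple programmed decision rule. The items, stock levels and reorder points below are hypothetical:

```python
# A programmed decision: reorder stock automatically whenever it falls
# below a predetermined reorder level.
inventory     = {"paper": 40, "toner": 8, "staples": 200}   # current stock
reorder_level = {"paper": 50, "toner": 10, "staples": 100}  # trigger points
reorder_qty   = {"paper": 100, "toner": 20, "staples": 500} # order sizes

# The "decision" is a fixed rule applied to the data, with no human judgment
orders = {item: reorder_qty[item]
          for item, stock in inventory.items()
          if stock < reorder_level[item]}
print(orders)
```

Because the rule is fully specified in advance, a computer can execute it unattended, which is exactly what makes the decision "programmed".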

Structured problems are familiar, straightforward, and clear with respect to the information
needed to resolve them. A manager can often anticipate these problems and plan to prevent or
solve them. For example, personnel problems are common in regard to pay raises, promotions,
vacation requests, and committee assignments. Proactive managers can plan processes for
handling these complaints effectively before they even occur.

Risk
In a risk environment, the manager lacks complete information. This condition is more
difficult. A manager may understand the problem and the alternatives, but has no guarantee how
each solution will work. Risk is a fairly common decision condition for managers.

When new and unfamiliar problems arise, nonprogrammed decisions are specifically tailored
to the situations at hand. The information requirements for defining and resolving nonroutine
problems are typically high. Although computer support may assist in information processing,
the decision will most likely involve human judgment. Most problems faced by higher‐level
managers demand nonprogrammed decisions. This fact explains why the demands on a
manager's conceptual skills increase as he or she moves into higher levels of managerial
responsibility.

A crisis problem is an unexpected problem that can lead to disaster if it's not resolved
quickly and appropriately. No organization can avoid crises, and the public is well aware of
the immensity of corporate crises in the modern world. The Chernobyl nuclear plant explosion in
the former Soviet Union and the Exxon Valdez spill of years past are a couple of sensational
examples. Managers in more progressive organizations now anticipate that crises, unfortunately,
will occur. These managers are installing early‐warning crisis information systems and
developing crisis management plans to deal with these situations in the best possible ways.

Uncertainty
When information is so poor that managers can't even assign probabilities to the likely outcomes
of alternatives, they are making decisions in an uncertain environment. This condition is
the most difficult for a manager. Decision making under conditions of uncertainty is like being a
pioneer entering unexplored territory. Uncertainty forces managers to rely heavily on creativity
in solving problems: It requires unique and often totally innovative alternatives to existing
processes. Groups are frequently used for problem solving in such situations. In all cases, the
responses to uncertainty depend greatly on intuition, educated guesses, and hunches — all of
which leave considerable room for error.
These unstructured problems involve ambiguities and information deficiencies and often occur
as new or unexpected situations. These problems are most often unanticipated and are addressed
reactively as they occur. Unstructured problems require novel solutions. Proactive managers are
sometimes able to get a jump on unstructured problems by realizing that a situation is susceptible
to problems and then making contingency plans. For example, at the Vanguard Group,
executives are tireless in their preparations for a variety of events that could disrupt their mutual
fund business. Their biggest fear is an investor panic that overloads their customer service
system during a major plunge in the bond or stock markets. In anticipation of this occurrence, the
firm has trained accountants, lawyers, and fund managers to staff the telephones if needed.

CH4: Data Warehousing

Steps in Datamining

There are various steps involved in mining data:

1. Data Integration: First of all, the data are collected and integrated from all the different
sources.
2. Data Selection: We may not need all the data we collected in the first step, so in this
step we select only those data which we think are useful for data mining.
3. Data Cleaning: The data we have collected are not clean and may contain errors,
missing values, or noisy or inconsistent data, so we need to apply different techniques to get
rid of such anomalies.
4. Data Transformation: Even after cleaning, the data are not ready for mining: we need
to transform them into forms appropriate for mining. The techniques used to accomplish
this include smoothing, aggregation and normalization.
5. Data Mining: Now we are ready to apply data mining techniques to the data to discover
interesting patterns. Clustering and association analysis are among the many
techniques used for data mining.
6. Pattern Evaluation and Knowledge Presentation: This step involves visualizing and
transforming the generated patterns and removing redundant ones.
7. Decisions / Use of Discovered Knowledge: This step helps the user make use of the
acquired knowledge to take better decisions.
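The cleaning, transformation and mining steps above can be sketched on a few raw records. The data, the min-max normalization, and the trivial per-region aggregation standing in for a real mining technique are all illustrative:

```python
from collections import defaultdict

# Raw records after integration: (region, sales); one value is missing.
raw = [("north", 120.0), ("south", None), ("north", 80.0), ("south", 100.0)]

# Data cleaning: drop records with missing values
clean = [(region, sales) for region, sales in raw if sales is not None]

# Data transformation: min-max normalization of the sales figure
lo = min(s for _, s in clean)
hi = max(s for _, s in clean)
normalized = [(region, (s - lo) / (hi - lo)) for region, s in clean]

# Data mining (a trivial aggregation as a stand-in): average per region
totals, counts = defaultdict(float), defaultdict(int)
for region, s in normalized:
    totals[region] += s
    counts[region] += 1
pattern = {r: totals[r] / counts[r] for r in totals}
print(pattern)
```

A real pipeline would replace the final aggregation with clustering or association-rule mining, but the shape of the preceding steps is the same.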

ETL
ETL extracts data from a source, transforms the data while in transit, and then loads it into a
target store of choice. These movements of data can be scheduled on a regular basis or
triggered to occur. The types of projects ETL tools are used for vary greatly, as these tools were
built to be very flexible. Common projects include rolling up transaction data for business
people to work with (commonly called data marts or data warehouses), migrating application
data from old systems to new ones, integrating data after corporate mergers and acquisitions,
and integrating data from external suppliers or partners (third-party data). The purchase and use
of an ETL tool is a strategic move, even if it is intended for a small tactical project to begin
with.

Q7. Explain Data warehousing & the process of ETL (Talk about how ETL helps Data
Warehousing, functions & importance of ETL)

The term "Data Warehouse" was first coined by Bill Inmon in 1990. A data warehouse is a
subject-oriented, integrated, time-variant, and non-volatile collection of data that helps analysts
make informed decisions in an organization. It is created to hold data drawn from several data
sources, maintained by different units, together with historical and summary transactions.
Mainframe computers are often used, and because data warehouses grow continually, scalability
is an issue. A data warehouse is a large repository database, requiring large storage capacity,
that supports management decision making:
–Typically relational
–Data is collected from transactional databases
You need to load your data warehouse regularly so that it can serve its purpose of facilitating
business analysis. To do this, data from one or more operational systems needs to be extracted
and copied into the data warehouse. The challenge in data warehouse environments is to
integrate, rearrange and consolidate large volumes of data over many systems, thereby providing
a new unified information base for business intelligence.
The process of extracting data from source systems and bringing it into the data warehouse is
commonly called ETL, which stands for extraction, transformation, and loading. ETL represents
the three phases in transferring data from a transactional database to a data warehouse.
Extraction of Data
During extraction, the desired data is identified and extracted from many different sources,
including database systems and applications. Very often it is not possible to identify the specific
subset of interest, so more data than necessary has to be extracted and the relevant data is
identified at a later point in time. Depending on the source system's capabilities (for example,
operating system resources), some transformations may take place during this extraction
process. The size of the extracted data varies from hundreds of kilobytes up to gigabytes,
depending on the source system and the business situation. The same is true for the time delta
between two (logically) identical extractions: the time span may vary from days or hours down
to minutes or near real-time. Web server log files, for example, can easily grow to hundreds of
megabytes in a very short period of time.
Extraction is the process of copying relevant data from a variety of transactional databases for
inclusion in a DW. It may occur at regular intervals (e.g., weekly, monthly) to add new data.
Data from incompatible databases, flat files, text documents, etc. must be filtered through
appropriate APIs (application programming interfaces) as needed.
Transformation of Data
Data extracted from transactional databases must be cleaned ("scrubbed") and transformed
before loading into a DW. Format differences across different tables and databases must be
reconciled, and missing or misspelled data values must be resolved. Erroneous data are
identified using application programs and scrutinized/corrected by DW analysts using
system-generated exception reports. Transaction-level data is aggregated by business
dimensions. This phase is a key step in the construction of a data warehouse, as the data
warehouse is very sensitive to data errors.

Loading of Data
This is the final stage in the process of transferring data. Here, the extracted, cleaned, and
transformed data is loaded into the data warehouse at a predetermined data refresh frequency.
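The three ETL phases described above can be sketched end to end. The source table, warehouse schema, and data are hypothetical, with Python's built-in sqlite3 standing in for both the transactional database and the warehouse:

```python
import sqlite3

# Hypothetical transactional source: an in-memory SQLite database.
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE orders (id INTEGER, region TEXT, amount REAL)")
src.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                [(1, "north", 100.0), (2, "north", 50.0),
                 (3, "south", None), (4, "south", 75.0)])

# Extract: pull the relevant rows from the operational system.
rows = src.execute("SELECT region, amount FROM orders").fetchall()

# Transform: scrub missing values and aggregate by business dimension.
totals = {}
for region, amount in rows:
    if amount is None:          # cleaning: skip erroneous/missing values
        continue
    totals[region] = totals.get(region, 0.0) + amount

# Load: write the aggregated facts into the warehouse table.
dw = sqlite3.connect(":memory:")
dw.execute("CREATE TABLE sales_summary (region TEXT, total REAL)")
dw.executemany("INSERT INTO sales_summary VALUES (?, ?)", totals.items())
dw.commit()

print(dw.execute("SELECT * FROM sales_summary ORDER BY region").fetchall())
# -> [('north', 150.0), ('south', 75.0)]
```

A real ETL tool adds scheduling, incremental extraction, and error handling around this same extract-transform-load skeleton.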
Q8. What is data mining and how does it help to analyze data using OLAP

Data mining techniques are used to analyze the data in a data warehouse and identify patterns,
such as classes of profitable customers to target. Data mining is a process of searching for
unknown relationships or information in large databases or data warehouses using intelligent
tools such as neural computing, predictive analytics techniques, or advanced statistical methods.
It is the process of extracting hidden patterns from data; these patterns can be rules, affinities,
correlations, trends, or prediction models. As more data is gathered, with the amount of data
doubling every three years, data mining is becoming an increasingly important tool for
transforming this data into information. It is commonly used in a wide range of profiling
practices, such as marketing, surveillance, fraud detection, and scientific discovery.
•4 Stages of Data
–Data
–Information
–Knowledge
–Intelligence

The two components of data mining are as follows:

•Knowledge Discovery
Concrete information gleaned from known data: data you may not have known, but which is
supported by recorded facts (e.g., the diapers-and-beer example).
•Knowledge Prediction
Uses known data to forecast future trends, events, etc. (e.g., stock market predictions).

How data mining helps analyze data using OLAP

Drill Up
Drill-up (also called roll-up) is an operation that gathers data from the cube either by ascending
a concept hierarchy for a dimension or by dimension reduction, in order to obtain measures at a
less detailed granularity. To see a broader perspective in compliance with the concept hierarchy,
the user groups columns and combines their values. Because there are fewer specifics, one or
more dimensions are removed from the data cube when this OLAP operation is run. In some
sources drill-up and roll-up are treated as synonyms.
Drill down
Drill-down is the operation opposite to drill-up. It is carried out either by descending a concept
hierarchy for a dimension or by adding a new dimension. It lets a user view highly detailed data
within a less detailed cube. Consequently, when the operation is run, one or more dimensions
are appended to the data cube to provide more information elements.
Slice
The next pair of operations to discuss is slice and dice. The Slice operation takes one specific
dimension from a given cube and produces a new sub-cube, which provides information from
another point of view. It creates the sub-cube by fixing a single value along that dimension, at
the specified granularity level of the dimension.
Dice
The Dice operation selects on two or more dimensions of a given cube and produces a new
sub-cube, just as the Slice operation does. It does so by choosing specific values along each of
the selected dimensions.
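The four operations can be illustrated on a small hypothetical cube held as a dictionary keyed by (year, quarter, region); the dimensions and figures are invented:

```python
# Hypothetical sales cube: dimensions (year, quarter, region) -> measure.
cube = {
    (2023, "Q1", "north"): 10, (2023, "Q2", "north"): 15,
    (2023, "Q1", "south"): 20, (2023, "Q2", "south"): 5,
    (2024, "Q1", "north"): 12, (2024, "Q2", "south"): 8,
}

# Drill-up (roll-up): ascend the time hierarchy from quarter to year,
# removing the quarter dimension and summing the measure.
rollup = {}
for (year, quarter, region), v in cube.items():
    rollup[(year, region)] = rollup.get((year, region), 0) + v

# Drill-down is the inverse: the original `cube` is the drilled-down,
# quarter-level view of `rollup`.

# Slice: fix one dimension (year == 2023) to get a sub-cube.
slice_2023 = {(q, r): v for (y, q, r), v in cube.items() if y == 2023}

# Dice: select specific values on two or more dimensions.
dice = {k: v for k, v in cube.items()
        if k[0] == 2023 and k[2] == "north"}

print(rollup)
print(slice_2023)
print(dice)
```

An OLAP engine performs the same selections and aggregations over a pre-built multidimensional index rather than a plain dictionary, but the semantics of the four operations are as shown.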
CH5
Q9. Discuss Information Systems in the airline industry
An operations system supports airlines through all stages of day-to-day operations. It gives you
complete control and perspective of your operation using the best and most accurate data
available. Afterwards, all data is easily made available for both analytical and regulatory
purposes.

Operations Planning
The Operations module ensures that you can access your operational data efficiently and
accurately. This information is immediately created and validated in the powerful Graphics tool,
giving you a complete overview of your operations plan.

Flight Watch
It allows you to track the progress of and monitor all your flight operations, mapping predicted
operations against flight data as it arrives. Users can make real-time changes to the schedule as
delays and changes occur, giving you complete control throughout the operation of flights.
Post Flight
After flights have taken place, it provides extensive reporting and analysis, measuring
predictions versus actuals and covering everything from load factors to airfield parking
information. Executive reports and daily management reports are extensively used in the
boardroom.
Crewing
The Crewing module is a toolkit to manage your crew efficiently and effortlessly. Using a wide
range of features, you can create a roster that meets both FTL requirements and your own
criteria. The roster can be monitored using the Graphics tool and modified live as necessary
should any changes occur.
Flight Time Limitations
Crew regulations are both complex and manifold in the airline business. The software can be
adapted for acclimatisation rules, local times and in-flight relief.
Rostering
A fully automated rostering capability to create work blocks and rosters for crew members. This
can be viewed through the Crewing graphics tool to give a full overview of work patterns and
shifts over weeks and months. From within this tool blocks of work can be moved, reassigned
and managed.
Crew Portal
Allows users to view work rosters when published. The app is accessible on PC and mobile
devices anywhere in the world to access information specific to the user. Other features include
duty validation, leave request and more to enable swift communication between flight crew and
management.
Commercial
Multiple Plans
Compare multiple flying programmes and assess the benefits and drawbacks of each plan. A
myriad of reports can be run on individual plans, letting you draw comparisons on any aspect of
the plan. When used in conjunction with Costing, the financial impact of different plans can
also be assessed.
Commercial Graphics
Flying programmes can be viewed on the Graphics interface, with sectors colour-coded and
separated for each aircraft, with further details listed within the sector.
Payload Options
This can be configured for a variety of payloads ranging from cargo loads to charter seat
allocation. Projections for pax and revenue can be added using the reporting and analysis tools
from the flying programme. Seat Brokering options are also available for buying and selling seat
allocation which can also be analysed against other flying programme versions.
Engineering
Technical Logs
Allows engineering crews at different locations to have instant access to the most up to date
information concerning parts.
Inventory Management
Allows aircrew to report defective parts for maintenance via the post-flight reports. This delivers
immediate feedback to control staff, who can get a swift response from Engineering.
Maintenance Schedules
Schedule maintenance rotas for both inventory and crew. By keeping track of when parts need
maintenance airlines can attend to faulty items quickly as well as make efficient use of
maintenance staff.

Q10. Discuss Information Systems in the banking industry

The customers choose a bank mainly on the following three factors:


i. The ease of doing business.
ii. The quality of personnel and service.
iii. The range of the financial services.

The unique service in banking mostly means solving the customers’ problems in the financial
matters, and the single most widely used measure of quick service is the elapsed time of
transaction execution.

The MIS in banking industry revolves around these aspects.


a. The customer of the bank would like to know the status of the account very fast to make
decisions on withdrawals or payments. He is interested in obtaining the loan assistance
for his particular need with a reasonable rate of interest.
b. Some customers would be interested in tax consulting and tax planning.
c. Another group of customers would be interested in investment guidance for investing in
stocks and securities.
d. To avoid the inconvenience of going to some places for payment of small amounts,
customers need service at the counter to pay electricity bills, telephone bills, taxes and
duties to the local bodies and the Government.

Hence, the MIS is to be designed to identify, decide and develop a service strategy for offering a
distinctive service to the broad range of customers seeking a variety of service demands.

The following points should be taken care of while designing an MIS for a bank:

1. Customer database
The service expectations and perceptions revolve around the following factors:
i. Customer — individuals, company, institutions, etc.
ii. Operator — housewife, employee, the officer of the organization.
iii. The range of service — savings, credit checking and payment, other financial services.
iv. Class of customers — income group, corporate bodies, etc.
v. Working hours — morning, afternoon, evening, etc.

The management of the bank should create a customer database and analyze the needs of the
customers from time to time to create a suitable service package.

2. Service to the account holders


The customers (account holders) need constant advice on the status of their accounts and
operations. Most customers use their account for routine payments affecting the balance. Many
times an account holds a large amount that is not transacted for any purpose.

The MIS should give following reports to the management:


a. Non-moving accounts.
b. Accounts having a balance of more than, say, Rs. 50,000.
c. Accounts going below the minimum balance.
d. Regular payments not made.
e. Routine credits not arrived.
f. Defaults on loan repayment.
g. Delays in crediting cheque amounts.
h. A sudden rise or fall in the account movement.
i. Account holders giving 80% of the business, so that personal care can be taken of their
service expectations and perceptions (the CRM perspective).
Based on these reports, the management of the bank should alert or warn the customer to act on
his account to correct the situation. The personal and individual account holders need such a
service badly as they have to manage their domestic or business activities in a tight money
situation. The MIS built around such demands would help not only the bank manager but also
the account holder.
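A few of these exception reports can be sketched as simple filters over a hypothetical account table (the account records, field names, and thresholds are all invented for illustration):

```python
# Hypothetical account records; field names and limits are illustrative only.
accounts = [
    {"no": "A1", "balance": 75000, "last_txn_days": 5,   "loan_overdue": False},
    {"no": "A2", "balance": 1200,  "last_txn_days": 400, "loan_overdue": False},
    {"no": "A3", "balance": 300,   "last_txn_days": 12,  "loan_overdue": True},
]
MIN_BALANCE = 500        # assumed minimum balance rule
HIGH_BALANCE = 50000     # "more than, say, Rs. 50,000"
NON_MOVING_DAYS = 365    # assumed definition of a non-moving account

# a. Non-moving accounts: no transactions for over a year.
non_moving = [a["no"] for a in accounts if a["last_txn_days"] > NON_MOVING_DAYS]

# b. Accounts with a balance above the high-balance threshold.
high_balance = [a["no"] for a in accounts if a["balance"] > HIGH_BALANCE]

# c. Accounts below the minimum balance.
below_min = [a["no"] for a in accounts if a["balance"] < MIN_BALANCE]

# f. Defaults on loan repayment.
loan_defaults = [a["no"] for a in accounts if a["loan_overdue"]]

print(non_moving, high_balance, below_min, loan_defaults)
# -> ['A2'] ['A1'] ['A3'] ['A3']
```

Each report is just a query over the customer database; the MIS packages such queries, runs them on a schedule, and routes the exceptions to the bank manager and the account holder.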

3. Service for business promotions


The bank's finances can be utilized in several ways to increase banking operations by offering
credit to the right kind of customers. It is, therefore, necessary to study trends in business and
industry and solicit customers from upcoming and growing business sectors. The MIS should
concentrate on data collection from various sources to analyze and formulate the future
corporate strategy. Such information will help the banker approach and talk to customers to
obtain business for the bank. Such support will also reduce the risk of accounts going into the
red and of bad debt.

4. The index monitoring system


One more feature of the MIS is to monitor a variety of indices and ratios related to banking
operations, which are internal to the banking business. Some of these ratios fulfill legal needs,
like the Cash Reserve Ratio (CRR) and Statutory Liquidity Ratio (SLR); some meet policy
needs, like the ratio of priority-sector lending to total advances, and so on. It is necessary to
build MIS applications to support the bank manager in making decisions that keep the different
norms and ratios within acceptable limits. He should also get support through a decision
support system to handle the problem of not meeting these legal standards.

5. Human resource upgrade


There is a significant human aspect to banking operations. With computerisation, the service
may become faster or quicker, but it still requires a human touch and skill. It is, therefore,
necessary to upgrade the expertise and knowledge of bank employees to offer proper service to
the customers. The MIS should identify such needs and help management design training
courses for employees to improve their knowledge of banking and the financial world. In the
banking industry, traditional measures of performance are at odds with good service. It is,
therefore, necessary to set internal standards of accuracy, responsiveness and timeliness. The
systems and resources provided to meet these standards need monitoring, and the MIS will
provide feedback on these standards so they can be regulated and controlled. The MIS measures
these standards and gives feedback on achievement or non-achievement.

CH6
Q11. What is firewall and why is it important for cyber security? What security measures can
you put in the firewall to enhance/upgrade security?
A firewall is a network security device that monitors incoming and outgoing network traffic and
decides whether to allow or block specific traffic based on a defined set of security rules.
Firewalls have been a first line of defense in network security for over 25 years. They establish a
barrier between secured and controlled internal networks that can be trusted and untrusted
outside networks, such as the Internet.
A firewall is a network security system, either hardware- or software-based, that uses rules to
control incoming and outgoing network traffic.
A firewall acts as a barrier between a trusted network and an untrusted network. A firewall
controls access to the resources of a network through a positive control model. This means that
the only traffic allowed onto the network is the traffic defined in the firewall policy; all other
traffic is denied.
When organizations began moving from mainframe computers to the client-server model, the
ability to control access to the server became a priority, and simple filtering based on packet
headers was the first approach. The growth of the Internet and the resulting increased
connectivity of networks meant that this type of filtering was no longer enough to keep out
malicious traffic, as only basic information about network traffic is contained in the packet
headers.
Digital Equipment Corp. shipped the first commercial firewall (DEC SEAL in 1992) and firewall
technology has since evolved to combat the increasing sophistication of cyber attacks.
A firewall involves a combination of hardware and software that acts as a filter or barrier
between a private network and external computers or networks. The network administrator
defines rules for access. The firewall then examines data passing into or out of the private
network and decides whether to allow the transmission based on users' IDs, the transmission's
origin and destination, and the transmission's contents.

The possible actions after examining a packet are as follows:

•Reject the incoming packet
•Send a warning to the network administrator
•Send a message to the packet's sender that the attempt failed
•Allow the packet to enter (or leave) the private network

There are three main types of firewalls:


•Packet-filtering firewalls
•Application-filtering firewalls
•Proxy servers
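The positive control model described earlier (only traffic defined in the policy is allowed; everything else is denied) can be sketched as a tiny packet-filtering check. The rules and packet fields below are hypothetical:

```python
# Hypothetical packet-filter policy: only explicitly allowed traffic passes.
ALLOW_RULES = [
    {"proto": "tcp", "dst_port": 443},                       # HTTPS from any host
    {"proto": "tcp", "dst_port": 22, "src_ip": "10.0.0.5"},  # SSH from admin host
]

def filter_packet(packet: dict) -> str:
    """Return 'allow' if the packet matches an allow rule, else 'deny'."""
    for rule in ALLOW_RULES:
        # A rule matches when every field it specifies equals the packet's value.
        if all(packet.get(field) == value for field, value in rule.items()):
            return "allow"
    return "deny"  # positive control model: default deny

print(filter_packet({"proto": "tcp", "src_ip": "10.0.0.5", "dst_port": 22}))  # allow
print(filter_packet({"proto": "udp", "src_ip": "8.8.8.8", "dst_port": 53}))   # deny
```

A real packet-filtering firewall applies the same rule-matching logic, but to actual header fields (IP addresses, ports, protocols) at wire speed, and can additionally log, warn, or notify the sender as listed above.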

The following security measures can be put in a firewall to enhance security:

Intrusion Detection Systems


•Protect against both external and internal access
•Placed in front of a firewall
•Protect against DoS attacks
•Monitor network traffic
•Follow a "prevent, detect, and react" approach
•Require a lot of processing power and can affect network performance

Q12. Discuss any four important cyber laws under the Information Security Act

Section 72 of IT Act 2000: Breach of confidentiality and privacy.

Save as otherwise provided in this Act or any other law for the time being in force, any person
who, in pursuance of any of the powers conferred under this Act, rules or regulations made
thereunder, has secured access to any electronic record, book, register, correspondence,
information, document or other material without the consent of the person concerned, discloses
such electronic record, book, register, correspondence, information, document or other material
to any other person, shall be punished with imprisonment for a term which may extend to two
years, or with fine which may extend to one lakh rupees, or with both.
•Case 1: https://indiankanoon.org/doc/176435660/
•Case 2: https://indiankanoon.org/doc/32361358/

Section 65B of the Indian Evidence Act: Admissibility of Electronic Records.

(1) Notwithstanding anything contained in this Act, any information contained in an electronic
record which is printed on a paper, stored, recorded or copied in optical or magnetic media
produced by a computer (hereinafter referred to as the computer output) shall be deemed to be
also a document, if the conditions mentioned in this section are satisfied in relation to the
information and computer in question and shall be admissible in any proceedings, without further
proof or production of the original, as evidence of any contents of the original or any fact stated
therein of which direct evidence would be admissible.
(2) The conditions referred to in the Sub-section (1) in respect to the computer output shall be
following, namely:
(a) the computer output containing the information was produced by computer during the period
over which computer was used regularly to store or process information for the purposes of any
activities regularly carried on over that period by the person having lawful control over the use of
computer.
(b) during the said period the information of the kind contained in the electronic record or of the
kind from which the information so contained is derived was regularly fed into the computer in
the ordinary course of the said activities.
(c) throughout the material part of the said period, the computer was operating properly or, if not,
then in respect of any period in which it was not operating properly or was out of operation for
that part of the period, was not such to affect the electronic record or the accuracy of its contents.
(d) The information contained in the electronic record reproduces or is derived from such
information fed into computer in ordinary course of said activities.
(3) Where over any period, the function of storing and processing information for the purposes of
any activities regularly carried on over that period as mentioned in Clause (a) of Sub-section (2)
was regularly performed by the computers, whether-
(a) by a combination of computer operating over that period, or
(b) by different computers operating in succession over that period; or
(c) by different combinations of computers operating in succession over that period of time; or
(d) in any other manner involving successive operation over that period, in whatever order, of
one or more computers and one or more combinations of computers,
all the computers used for that purpose during that period shall be treated for the purpose of this
section as constituting a single computer; and any reference in the section to a computer shall
be construed accordingly.

Section 65 of IT Act 2000: Tampering with computer source documents.

•Whoever knowingly or intentionally conceals, destroys, or alters any computer source code
used for a computer, computer programme, computer system or computer network, when the
computer source code is required to be kept or maintained by law for the time being in force,
shall be punishable with imprisonment up to three years, or with fine which may extend up to
two lakh rupees, or with both.
•Explanation – For the purposes of this section, "computer source code" means the listing of
programmes, computer commands, design and layout and programme analysis of computer
resource in any form.

Section 45 – Residuary Penalty


•Whoever contravenes any rules or regulations made under this Act, for the contravention of
which no penalty has been separately provided, shall be liable to pay a compensation not
exceeding twenty-five thousand rupees to the person affected by such contravention or a penalty
not exceeding twenty-five thousand rupees.

Q. Cybercrime and criminals


Cybercrime is a fast-growing area of crime. More and more criminals are exploiting the speed,
convenience and anonymity of the Internet to commit a diverse range of criminal activities that
know no borders, either physical or virtual, cause serious harm and pose very real threats to
victims worldwide.
Before the Internet, criminals had to dig through people's trash or intercept their mail to steal
their personal information. Now that all of this information is available online, criminals also use
the Internet to steal people's identities, hack into their accounts, trick them into revealing the
information, or infect their devices with malware.
Most cyber crimes are committed by individuals or small groups. However, large organized
crime groups also take advantage of the Internet. These "professional" criminals find new ways
to commit old crimes, treating cyber crime like a business and forming global criminal
communities.
Criminal communities share strategies and tools and can combine forces to launch coordinated
attacks. They even have an underground marketplace where cyber criminals can buy and sell
stolen information and identities.
It's very difficult to crack down on cyber criminals because the Internet makes it easier for
people to do things anonymously and from any location on the globe. Many computers used in
cyber attacks have actually been hacked and are being controlled by someone far away. Crime
laws are different in every country too, which can make things really complicated when a
criminal launches an attack in another country.
A cybercriminal is an individual who commits cyber crimes, where he/she makes use of the
computer either as a tool or as a target or as both.

Cybercriminals use computers in three broad ways:


● Using the computer as their target: These criminals attack other people's computers to
perform malicious activities, such as spreading viruses, data theft, identity theft, etc.
● Using the computer as their weapon: They use the computer to carry out "conventional
crime", such as spam, fraud, illegal gambling, etc.
● Using the computer as their accessory: They use the computer to store stolen or illegal data.
Cybercriminals often work in organized groups. Some cybercriminal roles are:
● Programmers: Write code or programs used by cyber criminal organization
● Distributors: Distribute and sell stolen data and goods from associated cybercriminals
● IT experts: Maintain a cybercriminal organization's IT infrastructure, such as servers,
encryption technologies and databases
● Hackers: Exploit systems, applications and network vulnerabilities
● Fraudsters: Create and deploy schemes like spam and phishing
● System hosts and providers: Host sites and servers that contain illegal content
● Cashiers: Provide account names to cybercriminals and control drop accounts
● Money mules: Manage bank account wire transfers
● Tellers: Transfer and launder illegal money via digital and foreign exchange methods
● Leaders: Often connected to big bosses of large criminal organizations. Assemble and
direct cybercriminal teams, and usually lack technical knowledge.
Clearly, there is much overlap between roles, but as cybercrime becomes a greater issue, more
specialization is being seen as organized crime gets in the picture. For example, hackers were
once more often than not hobbyists who broke into systems for personal gratification. While
white-hat hacking hasn't disappeared, it's much more common now to see hackers as
professionals who sell their services to the highest bidder.

Q. Role of Computer in Crime


At the core of the definition of computer crime is activity specifically related to computer
technologies. Computers serve in several different roles related to criminal activity. The three
generally accepted categories speak in terms of computers as communication tools, as targets,
and as storage devices.

The computer as a communication tool presents the computer as the object used to commit the
crime. This category includes traditional offenses such as fraud committed through the use of a
computer. For example, the purchase of counterfeit artwork at an auction held on the Internet
uses the computer as the tool for committing the crime. While the activity could easily occur
offline at an auction house, the fact that a computer is used for the purchase may delay the
detection of the fraud. The use of the Internet may also make it difficult to find the perpetrator
of the crime.

A computer can also be the target of criminal activity, as seen when hackers obtain unauthorized
access to Department of Defense sites. Theft of information stored on a computer also falls
within this category. The unauthorized procuring of trade secrets for economic gain from a
computer system places the computer in the role of being a target of the criminal activity.

A computer can also be tangential to crime when, for example, it is used as a storage place for
criminal records. For example, a business engaged in illegal activity may be using a computer to
store its records. The seizure of computer hard drives by law enforcement demonstrates the
importance of this function to the evidence gathering process.

In some instances, computers serve in a dual capacity, as both the tool and target of criminal
conduct. For example, a computer is the object or tool of the criminal conduct when an
individual uses it to insert a computer virus into the Internet. In this same scenario, computers
also serve in the role of targets in that the computer virus may be intended to cripple the
computers of businesses throughout the world.

The role of the computer in the crime can also vary depending upon the motive of the individual
using the computer. For example, a juvenile hacker may be attempting to obtain access to a
secured facility for the purpose of demonstrating computer skills. On the other hand, a terrorist
may seek access to this same site for the purpose of compromising material at this location.
Other individuals may be infiltrating the site for the economic purpose of stealing a trade secret.
Finally, unauthorized computer access may be a display of revenge by a terminated or
disgruntled employee.
