
4. MIS, DSS, EIS: definitions, distinctions, examples.

The term decision support system (also referred to as "management decision system" or "strategic planning
system") is a recent addition to the vocabulary of systems developers and users. It has been subject to a wide
range of definitions. At one end is the narrow, specific definition of the DSS as "an interactive, computer-based
system which supports managers in making unstructured decisions" [26]. At the other end is the more global
assessment that a DSS is an aid in decision making and implementation; succinctly put, the DSS serves as an
"executive mind-support system" [15]. It is one of the many tools a manager can choose to aid in daily decision-
making activities. The decision support system is not an "automated manager," but rather a "right-hand
person," standing ready to support, not replace the manager. The hallmark of a good decision support system is
not its sophistication or efficiency, but its ability to increase the effectiveness of the manager who employs it.
A DSS supports a semi-structured decision; it is designed to support a decision maker in any (but not every)
stage of the decision-making process. The decision maker can choose those functions of the DSS that support
some stage while not using other functions. For instance, a decision maker may require the use of data but not
models. The key to successful human-machine cooperation is this principle of specified DSS support.
Early DSS literature defined DSS-appropriate problems as those that are semi-structured. The degree of problem
structure is, indeed, central to DSS. At one extreme, if a decision problem can be completely structured (no
human judgment is required), then a structured decision system (SDS) can replace the human decision maker. At the other extreme, if no
structure can be imposed on the problem, decision support is impossible. It is only between these extremes
that DSS is relevant.
Components of a DSS
Before delineating the typical components of a DSS, we wish to remind you of the fundamental premise of the
systems approach: systems, regardless of their specific contexts, share a common set of elements. A DSS, being
a system, has a specific environment, mainly the users (decision makers) and their decision situations; the system
impacts the users by providing some outputs (functions); the system consumes some resources (inputs); and the system has some internal structure,
comprising components and their interrelationships (linkages, arrangements).
The purpose of a DSS is to support a decision-making process. DSS can support decision makers in many ways,
including the following:
• Retrieving single items of information (i.e., a view)
• Providing a mechanism for ad hoc data analysis (e.g., statistical models)
• Providing prespecified aggregations of data (e.g., accounting models)
• Estimating the consequences of proposed decisions (i.e., sensitivity analysis, causal models), as illustrated in the sketch after this list
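As a purely illustrative example of the last kind of support (estimating the consequences of proposed decisions), the sketch below runs a simple what-if analysis over a profit model. The model, parameter names, and figures are assumptions made for illustration; they do not come from the text or from any particular DSS.

```python
# A minimal what-if (sensitivity analysis) sketch, of the kind a DSS model base might offer.
# The profit model and all figures are illustrative assumptions.

def profit(units_sold: float, unit_price: float, unit_cost: float, fixed_cost: float) -> float:
    """A simple causal model: profit as a function of the decision variables."""
    return units_sold * (unit_price - unit_cost) - fixed_cost

base = {"units_sold": 10_000, "unit_price": 12.0, "unit_cost": 7.5, "fixed_cost": 30_000}

# Estimate the consequences of a proposed decision by varying one input at a time.
for change in (-0.10, 0.0, +0.10):          # -10%, base case, +10%
    scenario = dict(base, unit_price=base["unit_price"] * (1 + change))
    print(f"price change {change:+.0%}: projected profit = {profit(**scenario):,.0f}")
```

The decision maker can rerun such a model with different assumptions, which is exactly the kind of selective, partial support described above.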

MIS

The module of the organizational information system that provides management information for decision makers
will be termed management information system (MIS). It can be defined in terms of its application to the
different classes of decisions.
1. It is an information system that makes structured decisions. For example, a computer program can
process inventory transactions and automatically reorder optimal replenishments (a brief sketch of such a decision rule follows this list).
2. It supports the process of making unstructured or semi-structured decisions by performing some of the
phases of the decision-making process and providing supporting information for other phases. For
example, computer programs can report production cost overruns, thus performing the intelligence phase.
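To make the first example concrete, here is a minimal sketch of a fully structured inventory-reorder decision. The reorder-point rule and all quantities are assumptions for illustration only, not a description of the system the text has in mind.

```python
# A minimal sketch of a fully structured (programmable) decision:
# reorder when stock on hand falls to or below the reorder point.
# The reorder point and order quantity below are assumed for illustration.

REORDER_POINT = 120   # units: demand during lead time plus safety stock (assumed)
ORDER_QUANTITY = 500  # units: the "optimal replenishment" computed elsewhere (assumed)

def process_transaction(stock_on_hand: int, quantity_issued: int) -> tuple[int, int]:
    """Posts an issue transaction and returns (new stock level, quantity to reorder)."""
    stock_on_hand -= quantity_issued
    reorder = ORDER_QUANTITY if stock_on_hand <= REORDER_POINT else 0
    return stock_on_hand, reorder

stock = 150
for issued in (20, 40):
    stock, reorder = process_transaction(stock, issued)
    if reorder:
        print(f"stock at {stock}: automatically placing order for {reorder} units")
```

Because the rule is completely specified in advance, no human judgment is needed once the transaction data are available.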

Two types of logical components of the MIS are distinguished.


1. Structured decision system (SDS), which makes the structured decisions.
2. Decision support system (DSS), which supports unstructured and semi-structured decisions.
CHARACTERISTICS
1. A modern MIS will always rely to some extent on computer technology (although the computer is not a
prerequisite). MIS existed before the invention of the computer, but due to the quantity and complexity of data in
modern organizations, it is inconceivable that, without computers, information for decision makers could be
generated within a reasonable time.
2. A well-designed MIS relies heavily on human decision-making processes. The decision makers in the
organization receive various degrees of support from the MIS, ranging from relatively simple aggregation of data
to utilization of the computer as a problem solver.
3. An MIS is heavily dependent on a database. The concept of a database applies when the computerized files of
an organization are integrated in a way that facilitates easy access to all information items by all users, regardless
of physical or functional residence of both data and users.
4. An MIS relies heavily on a model base. The model base contains the programs with which the data is
organized and processed. The programs generate programmed decisions and programmed models that simulate
or take part in human decision-making processes. While the database and model base may be developed
independently, they are basically complementary: the database provides the model base designers with the input
needed for their models; the model base indicates the type of data that should be incorporated in the database.
The database and model base make the MIS, in a sense, a model of the real organization (a brief sketch of this interplay follows the list).
5. A well-designed MIS relies on a communications network that provides interactive, fast access by managers to
the information stored in the database and to the models in the model base.
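The sketch below illustrates, in schematic form, the complementary roles of the database and the model base described in point 4. The records, field names, and the simple growth model are invented for illustration; they are not taken from the text.

```python
# Schematic sketch of the database / model base interplay in an MIS.
# The records and the forecasting model are illustrative assumptions.

# "Database": integrated, shareable data about the organization.
sales_db = [
    {"month": "Jan", "revenue": 100_000},
    {"month": "Feb", "revenue": 110_000},
    {"month": "Mar", "revenue": 121_000},
]

# "Model base": programs that organize and process the data.
def growth_model(records: list[dict]) -> float:
    """Estimates average month-over-month revenue growth from the database records."""
    rates = [b["revenue"] / a["revenue"] - 1
             for a, b in zip(records, records[1:])]
    return sum(rates) / len(rates)

# The database feeds the model with input; the model, in turn, dictates
# which data items the database must hold (here, monthly revenue).
print(f"average monthly growth: {growth_model(sales_db):.1%}")
```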

EIS
Executive information systems (EIS) provide a variety of internal and external information to top managers in a
highly summarized and convenient form. EISs are becoming an important tool of top-level control in many
organizations. They help an executive spot a problem, an opportunity, or a trend. We have encountered the use of
an EIS in the extensive Case Study in Chapter 2, and you have seen how such a system helps executives to
identify a problem, find its source, and establish a course of action leading to the solution. Some EISs also have
forecasting capabilities that can be used in an "automatic-pilot" fashion: they access the database and adjust their
forecasts to the changing data. With these capabilities, an EIS becomes a strategic planning tool.
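As a rough sketch of the "automatic-pilot" idea, the code below re-adjusts a forecast every time new data arrive, using simple exponential smoothing. The smoothing constant and the observations are assumptions for illustration; the text does not specify any particular forecasting method.

```python
# A rough sketch of "automatic-pilot" forecasting: the forecast is
# re-adjusted every time new data arrive from the database.
# The smoothing constant and the observations are assumed for illustration.

ALPHA = 0.3  # smoothing constant (assumed)

def update_forecast(previous_forecast: float, new_observation: float) -> float:
    """Simple exponential smoothing: blend the new observation into the forecast."""
    return ALPHA * new_observation + (1 - ALPHA) * previous_forecast

forecast = 100.0                         # initial forecast (assumed)
for observation in (104, 98, 110, 115):  # values arriving from the database
    forecast = update_forecast(forecast, observation)
    print(f"new data {observation}: adjusted forecast = {forecast:.1f}")
```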
EISs serve people whose time is at a premium and who are responsible for the long-term vision of the company's
future in the competitive marketplace. These executives develop long-term strategic plans for the company and
exercise strategic control by monitoring the organization's performance. Executive information systems therefore
have these characteristics:
1. EISs provide immediate and easy access to information reflecting the key success factors of the company and
its units.
2. "User-seductive" interfaces, presenting information through color graphics or video, allow an EIS user to grasp
trends at a glance. These systems generally are used directly (without intermediaries) by senior managers who
cannot be expected to deal with complicated interfaces. Simple point-and-click devices and touch screens make
keyboards unnecessary. Little or no training is needed to use the system.
3. EISs provide access to a variety of databases, both internal and external, through a uniform interface. Indeed,
the fact that the system consults multiple databases is often transparent to the users. Along with tabular and
graphical numerical information, textual informal information should be available as well. Such information may
include notes, comments, opinions, and interpretations. External databases are available via the Internet and the
electronic information services discussed in Section 8.4. Using them, EISs can provide news, data from
securities markets, trade and industry data, and other up-to-date data and information. Through the use of
intelligent agents (see Chapter 8) on the Internet's World Wide Web, it is possible to bring up-to-date information
on the competitive environment into EIS in a systematic fashion (King and O'Leary 1996).
4. Both current status and projections should be available from an EIS. Indeed, it is frequently desirable to
investigate different projections for the future. In particular, planned projections may be compared with the
projections derived from actual results (as we saw in the Case Study for Chapter 2).
5. An EIS should allow easy tailoring to the preferences of the particular user or group of users (such as the chief
executive's cabinet or the corporate board).

6. An EIS should offer the capability to "drill down" into the data. In other words, it should be possible to see
increasingly detailed data behind the summaries (a brief illustrative sketch appears after the comparison below).
EISs can be best understood by contrasting them with DSSs, which they complement. While DSSs are primarily
used by middle- and lower-level managers to project the future, EISs primarily serve the control needs of higher-level
management. The relationship between these two types of information systems, EIS and DSS, is shown in
Figure 10.14.
Seen in the light of the structure of a decision-making process, EISs primarily assist top management in
uncovering a problem or an opportunity. Analysts and middle managers can subsequently use a DSS to suggest a
solution to the problem. More recently, EIS-type applications are coming into use by middle managers as well.
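As a small, purely illustrative sketch of drilling down from a summary to the detail behind it, the code below aggregates invented regional sales figures and then expands one region. The data, region names, and product names are assumptions made for illustration.

```python
# A small sketch of "drill down": start from a summary and expand the detail behind it.
# The sales figures, regions, and product names are invented for illustration.

detail = [
    {"region": "North", "product": "A", "sales": 40},
    {"region": "North", "product": "B", "sales": 25},
    {"region": "South", "product": "A", "sales": 70},
    {"region": "South", "product": "B", "sales": 10},
]

# Top-level summary, as an executive would first see it.
summary: dict[str, int] = {}
for row in detail:
    summary[row["region"]] = summary.get(row["region"], 0) + row["sales"]
print("summary by region:", summary)

# Drill down into one region to see the detail behind the summary figure.
print("detail for South:", [r for r in detail if r["region"] == "South"])
```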
5) Business process and BPR.

Reengineering, or BPR as it is commonly known, became a buzzword in the 1990s, spurring a great interest in
process design. The essence of the reengineering philosophy is to achieve drastic improvements by completely
redesigning core business processes; that is, by rethinking the way business is conducted. To relate this to the
distinction between process design and implementation, reengineering advocates radical design changes and fast,
revolutionary implementation to achieve drastic improvements.

Although the frequency of failure is considerable, organizations that approach the design of business processes
with understanding, commitment, and strong executive leadership are likely to reap large benefits. This implies
that companies must be absolutely clear about what they are trying to accomplish when designing effective
business processes and how they will go about doing it. Many experts believe that a considerable number of
failures associated with the reengineering of business processes can be attributed directly to misunderstanding the
underlying philosophy behind BPR.

6. Decision making: phases and the importance of each phase.

THREE PHASES IN DECISION MAKING PROCESS


You can define decision making as the process of choosing between alternatives to achieve a goal. But if you
closely look into this process of selecting among available alternatives, you will be able to identify three
relatively distinct stages. Put into a time framework, you will find:
1 The past, in which problems developed, information accumulated, and the need for a decision was perceived;
2 The present, in which alternatives are found and the choice is made; and
3 The future, in which decisions will be carried out and evaluated.
Herbert Simon, the well-known Nobel laureate decision theorist, described the activities associated with three
major stages in the following way:
1 Intelligence Activity: Borrowing from the military meaning of intelligence, Simon describes this initial phase as
an attempt to recognise and understand the nature of the problem, as well as to search for its possible causes;
2 Design Activity: During the second phase, alternative courses of action are developed and analysed in the light
of known constraints; and
3 Choice Activity: The actual choice among available and assessed alternatives is made at this stage.
If you have followed the nature of activities of these three phases, you should be able to see why the quality of
any decision is largely influenced by the thoroughness of the intelligence and design phases.
Henry Mintzberg and some of his colleagues (1976) have traced the phases of some decisions actually taken in
organisations. They have also come up with a three-phase model as shown in Figure I.

1 The identification phase, during which recognition of a problem or opportunity arises and a diagnosis is made.
It was found that severe, immediate problems did not receive a very systematic, extensive diagnosis, but that milder
problems did.
2 The development phase, during which there may be a search for existing standard procedures or ready-made
solutions, or the design of a new, tailor-made solution. It was found that the design process was a groping, trial-and-error
process in which the decision makers had only a vague idea of the ideal solution.

3 The selection phase, during which the choice of a solution is made. There are three ways of making this
selection: by the judgement of the decision maker, on the basis of experience or intuition rather than logical
analysis; by analysis of the alternatives on a logical, systematic basis; and by bargaining when the selection
involves a group of decision makers. Once the decision is formally accepted, an authorisation is made.
Note that decision making is a dynamic process and there are many feedback loops in each of the phases.
These feedback loops can be caused by problems of timing, politics, disagreement among decision makers,
inability to identify an appropriate alternative or to implement the solution, the sudden appearance of a new
alternative, and so on. So, although on the surface decision making appears to be a fairly simple three-stage process,
it can actually be a highly complex and dynamic process.

8) Decision types.

Decisions may be characterized according to the degree of uncertainty of the problem involved. Three categories
are distinguished in this respect:
1. Deterministic - Decisions that are made under certainty. For example, if we know all the grades that a student
has achieved in his or her studies, we can decide whether the student is entitled to graduate. Deterministic
decisions can often be made by a programmed algorithm: a sequence of operations that, through a clear route
and a definite termination rule, derives the final decision. Hence, such decisions are frequently called algorithmic
decisions and are likely to be structured.
2. Probabilistic - Unstructured decisions that abide by the rules of statistics and probability and hence can become
algorithmic (or programmed). Normally this would be a decision made under risk; there is no certainty, yet the
probabilities of the relevant events are known. For example, an insurance company has to calculate the
premium for a life insurance policy applicant. The premium depends mainly on the applicant's date of death,
which is a random event. However, the company uses a mortality table that is based on the mortality experience
of a large population of past insureds. The insurance company uses the death probability relevant to the
applicant's age to determine the premium. In fact, in many insurance companies, ready-made computer
programs read in the applicant's age and the face value of the policy; locate the death probability in the mortality
table; add factors of interest rate, expenses, and profit; and finally print out a policy with a premium notice. Such
programs illustrate how probabilistic decisions can be transformed into algorithmic decisions (a brief sketch of such a program follows this list).
3. Random - Unstructured decisions that are made under complete uncertainty; probabilities of events are not
known, and sometimes even the events are not well defined. For example, how should a city council react to a
major earthquake hitting the city? Random decisions are heuristic. They are based on experience and common
sense. They cannot be programmed, though they may be supported by computerized systems. In such a situation
the decision maker would be pleased to reach a feasible solution rather than an optimal one.
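The sketch below shows how the probabilistic decision in item 2 can be made algorithmic. The mortality table, loading factors, and premium formula are invented assumptions for illustration only; they are not actuarially accurate and are not taken from the text.

```python
# A sketch of how a probabilistic decision becomes algorithmic (item 2 above):
# a life-insurance premium computed from a mortality table.
# The mortality table, loading factors, and formula are invented for illustration.

MORTALITY_TABLE = {30: 0.0015, 40: 0.0030, 50: 0.0065}  # assumed death probabilities by age
INTEREST_DISCOUNT = 0.95    # assumed one-period discount factor
EXPENSE_AND_PROFIT = 1.20   # assumed loading for expenses and profit

def annual_premium(age: int, face_value: float) -> float:
    """Expected claim cost, discounted and loaded for expenses and profit."""
    death_probability = MORTALITY_TABLE[age]
    expected_claim = death_probability * face_value
    return expected_claim * INTEREST_DISCOUNT * EXPENSE_AND_PROFIT

print(f"premium for a 40-year-old, face value 100,000: {annual_premium(40, 100_000):.2f}")
```

Once the probabilities are tabulated, the decision follows a fixed route, which is exactly the transformation from a probabilistic to an algorithmic decision described above.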

We will now examine information requirements for the different categories of decisions.
Decision Types and Information Systems
It is obvious that information requirements vary according to the type of decisions to be supported by an
information system. Structured decisions require well-defined and clearly designed information, such as
exception reporting and account balances. It is relatively easy to design such systems. In many cases the
computer program actually substitutes for human decisions. We are all familiar with computer warnings for not
paying the electricity bill or credit card account. The decision to send this warning is fully programmed, as the sketch below illustrates.
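Here is a trivial sketch of such a fully programmed decision. The grace period, account records, and field names are assumptions for illustration.

```python
# A trivial sketch of a fully programmed decision: send a warning
# when an account balance is overdue. The threshold and accounts are assumed.

GRACE_PERIOD_DAYS = 30  # assumed grace period before a warning is sent

accounts = [
    {"customer": "C-1001", "balance_due": 85.0, "days_overdue": 12},
    {"customer": "C-1002", "balance_due": 240.0, "days_overdue": 45},
]

for account in accounts:
    # The decision rule is completely specified in advance: no human judgment is needed.
    if account["balance_due"] > 0 and account["days_overdue"] > GRACE_PERIOD_DAYS:
        print(f"send warning to {account['customer']} "
              f"(overdue {account['days_overdue']} days, {account['balance_due']:.2f} due)")
```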
Computer programs cannot substitute for unstructured decisions. They can only support such decisions by
providing the decision maker with more data, by screening alternative scenarios, and by reducing the degree of
uncertainty. Information system failures occur when people misunderstand this fact of life and attempt to
impose highly formal information systems on situations where they are not suitable.

9. E-Government, E-Commerce.
(See page 127 of the book for e-government and page 103 for e-commerce.)
10. Ethics: principal ethical issues and how they can be handled.
The principal ethical issues of concern with regard to information systems have been identified by Richard
Mason (1986) as the issues of privacy, accuracy, property, and access (PAPA, for short). These are the issues that
will be discussed throughout the chapter.
We have shown these principal ethical issues as the four circles in Figure 17.3. As you may see in the figure, we
can trace these issues to their sources: (a) the pervasive role and immense capabilities of systems for collecting,
storing, and accessing information in our information society, (b) the complexity of information technology, and
(c) the intangible nature of information and software. The figure also shows the specific individual rights whose
potential violation brings the issues to a head.

We will now proceed to consider the four ethical issues in the following sections.
17.4 PRIVACY
Privacy is the most important ethical issue raised by information systems. Privacy is the right of
individuals to retain certain information about themselves without disclosure and to have any information
collected about them with their consent protected against unauthorized access. When our privacy is invaded, we
are embarrassed, diminished, and perceive a loss of autonomy: a loss of control over our lives. Invasion of
privacy is a potent threat in an information society. Individuals can be deprived of opportunities to form desired
professional and personal relationships, or can even be politically neutralized through surveillance and gathering
of data from the myriad databases that provide information about them.
Concern about privacy had existed for many years before the computer-based information technology entered
human affairs. But computers and related technologies create possibilities to endanger privacy that had not
existed before. Massive databases containing minute details of our lives can be assembled at a reasonable cost
and can be made accessible anywhere and at any time over telecommunications networks.
Collection, storage, and dissemination of records concerning individuals from computer databases are necessary
to our business and government, and indeed to the very fabric of our lives. Yet the quality of our lives has to be protected
by legislation and by an ethical approach to the acquisition and use of these records. Several laws regulating
record keeping are in force in the United States. The most prominent of these are the Fair Credit Reporting Act of
1970, which limits access to the credit information collected by credit agencies and gives individuals the right to
review them, and the Privacy Act of 1974, which bars federal agencies from allowing the data they collect to be
used for purposes other than those for which they were collected. However, both legislative acts contain
loopholes that defy their already feeble enforcement. Gaps in legislation and enforcement make it difficult to
protect privacy through the legal system and leave much of the privacy issue in the domain of ethics.
The Privacy Act serves as a guideline for a number of ethics codes adopted by various organizations. It is also
being looked upon as a model for protecting the privacy of electronic medical records ("Electronic Threats"
1997). The act specifies the limitations on the data records that can be kept about individuals. The following are
the principal privacy safeguards specified:
• No secret records should be maintained about individuals.
• No use can be made of the records for other than the original purposes without the individual's consent.
• The individual has the right of inspection and correction of records pertaining to
him or her.
• The collecting agency is responsible for the integrity of the record-keeping system.

ACCURACY
Pervasive use of information in our societal affairs means that we have become more vulnerable to
misinformation. Accurate information is error-free, complete, and relevant to the decisions that are to be based
on it. With respect to the last issue, the concern about the accuracy of information about individuals is frequently
related to a concern for privacy. An inaccurate credit report can prevent you from getting a credit card or a job.
When people rely on inaccurate information, other people may be deprived of their right to due process. An
incorrect medical record can threaten your life. A weather report that incorrectly forecast the trajectory of a storm,
because the data from a failed buoy were unavailable to the computerized model, sent a sailor to his death
(Mason 1986). French police officers, in hot pursuit of a car recorded as stolen in their database, opened fire on and
wounded a law-abiding driver. The records of the car's recovery by the legitimate owner and of its subsequent
sale were missing from the database.
Accuracy problems have wider societal implications. A claim has been made that the absence of proper controls
in some computerized election systems may threaten basic constitutional rights. Indeed, several examples of
irregularities have been reported (Neumann 1995). For example, in Ventura County, California, yes and no votes
were reversed by an information system on all the state propositions in the 1992 elections. After a series of
errors, computer-based elections had to be abandoned in Toronto, Canada.

PROPERTY
The right to property is largely secured in the legal domain. However, intangibility of information is at the source
of dilemmas which take clarity away from the laws, moving many problems into the ethical domain. At issue
primarily are the rights to intellectual property: the intangible property that results from an individual's or a
corporation's creative activity. As plummeting costs of computer hardware render much of it a relatively
inexpensive commodity, the value of many information systems resides largely in their software, databases, and
knowledge bases. Yet while all of us would label taking someone's laptop without permission theft, few of us
would view copying the contents of its storage in the same light.
Intellectual property is protected in the United States by three mechanisms: copyright, patent, and trade secret.
These means serve to protect the investment of an innovator and to ensure public good by encouraging disclosure
so that further innovations can be made by others. Indeed, the copyright and patent laws are designed to assist in
public disclosure and thus further technological progress.

Because the legal system trails the pace of technology and because ethical guidance is sought in framing the legal
issues, many controversies spill over into the ethical domain. To "honor property
rights" is one of the eight general moral imperatives of the ACM Code of Ethics. The legal system and the
ethicists are grappling with the following unresolved issues:
• To what extent can information be considered property?
• What makes one software product distinct from another?
• Can the look-and-feel of software be protected as property? The emerging interpretation of the copyright law
protects the way the program looks to its user on the screen and the way it works, rather than the specific code of
the program, but controversy on the subject persists.
• Would computer-generated works have a human author? If not, how would the property rights be protected?
The issues arising with regard to expert systems go beyond those of the rights of the software or knowledge
engineer. How do we account for the intellectual property of the experts, whose knowledge is the fundamental
resource that becomes "disemminded" (as in disembodied) in those systems? Yet another issue is that of property
rights to electronic collections of data, such as directories. At this time, the courts have ruled that the copyright
law protects only the collections that display some creativity in the selection or arrangements of the data and does
not protect such labor-intensive but nonoriginal collections as telephone white pages.
ACCESS
It is a hallmark of an information society such as ours that most of its workforce is employed in the handling of
information, and that information accounts for much of the goods and services available for consumption. Information technology does not
have to be a barrier to opportunities. Quite to the contrary, when deployed purposefully it can provide
opportunities that were not accessible before. For example, electronic mail is being successfully used to teach
functionally illiterate adults to read and write (Chira 1992). Internet access can bring some of the contents of the
world's libraries to a remote location. Thus, computer literacy can become the road to traditional literacy and to
education.
With the phenomenal growth of personal computers, the Internet, and on-line information services, the
accessibility of information technology has grown vastly. But it has not grown equally. Economic inequality is,
of course, a major reason. But it is not the only one. A broad issue of access arises in relation to people with
disabilities. Information technologies can be potent tools in bringing the handicapped into the social and
economic mainstream. They can also be a barrier to employment or enjoyment of equal access to societal
benefits.
