Most of us experience the effect of different systems every day - getting to work
or school using a transport system, for example, or making a call using the telephone
system. Computerised systems have transformed the world in countless
ways, not least in increased connectivity between people, as provided by
the Internet and the World Wide Web, for example.
Figure 1.1: Data, interpreted in a context, yields Information; Information, interpreted in a further context, yields Knowledge.
This suggests that there is a process by which Information and then Knowledge are
produced, somewhat analogous to a manufacturing process in which a raw material, in
this case 'Data', is refined and worked on to produce an end-product. But what are the
components that convert data into information?
Data and information are an important part of a computer system. A computer exists to
convert data into information.
Data becomes information when it is used for a purpose that is of value to the
individual or organization. Take, as an example, the following list into which animals
are divided:
b. Embalmed ones
d. Suckling pigs
e. Mermaids
f. Fabulous ones
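The point can be sketched in code (all figures hypothetical): the same raw numbers only become information once a business context gives them a purpose.

```python
# Raw data: daily unit sales, meaningless on their own.
daily_sales = [120, 95, 143, 88, 160, 152, 110]

# Context turns the data into information: comparing each day
# against a reorder threshold tells a manager when to restock.
REORDER_THRESHOLD = 100  # hypothetical business rule

information = [
    (day, units, "reorder" if units < REORDER_THRESHOLD else "ok")
    for day, units in enumerate(daily_sales, start=1)
]

for day, units, action in information:
    print(f"Day {day}: {units} units -> {action}")
```

The numbers are unchanged; it is the threshold, a purpose supplied by the business, that makes them informative.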
Information Systems
Whatever the name, the flavor is the same. Please note that in our definition
information systems are not platform specific (hardware, software, language, etc.).
They might also be called domain systems, abstract systems, general systems and so on.
The interesting fact is that these systems may also be seen as "knowledge objects", one of
the two views related to knowledge management. This view opens the way to
"conceptual object reuse".
The latest laptop computers can hold and process vast amounts of data, but the capacity
of our 'neck-top' computer has not altered over the last few hundred years!
Exposure to new ideas and concepts from outside the business can stimulate managers to
adopt and adapt these concepts in their own business environment. There is a danger for
managers, who stay in one job with one company for a long time, that they become insular in
their approach and resistant to change. The working environment, as we move into the 21st
Century, requires a workforce that is multi-skilled and able to adapt and keep up to date with
change. There is much that organizations themselves can do to ensure that their employees have
access to new information and learning opportunities. It is also the case that employees will
have to take greater charge of their own personal development in the new organization
structures being created in the Information Age.
If managers do not have the right information about how the business is
performing, then they are unlikely to be able to make the right decisions (or are more
likely to make the wrong ones).
Managers need:
• Information when they want it (at a time that suits their workflow)
• Real-time information (so decisions are timely and relate directly to what is
happening)
Applications support organizational goals and such goals are held by those for
whom the success of the firm is critical; for this discussion, they are called stakeholders.
This group includes boards of directors, shareholders, unions, communities, and
suppliers. The goals represent desirable conclusions for clients. Clients may be inside or
outside the organization. The classes of individuals that have direct interaction with the
application are called direct users. Assisting these role players are developers, who may
work for an organization, a client or a user, and they may be direct employees or
contracted specialists.
Introduction
• Select and use terminals and printers from multiple leading manufacturers, all
in one system.
• Combine 5250 and 3270 intelligent terminal emulation with data collection
functionality using a GUI development environment that involves no host code
rewrites.
• Run the same transactions on your radio-frequency devices that you run on
fixed devices and PCs, including intelligent emulation.
• Select from a variety of automatic, dynamic servers and data backup schemes,
which are available to ensure minimum downtime and maximum productivity.
Users with the proper database permissions may download data from the database
to another location and upload changes to these on-line files as soon as changes are
required. When the transaction files are no longer being "actively" used, they are not
deleted. Instead they are archived in a special data storage facility called a data
warehouse.
It uses only data that is internal to the company. There are some possible
exceptions, however: inter-branch banking uses TPS, as does the direct deposit of
employees' pay from various companies and government institutions.
The outputs of TPS provide operational details, summary reports, and exception
reports, which help supervise and control routine operations. Exception reporting
provides a feedback loop to the operational manager about any unusual or unexpected
activity that may require attention. Transaction processing systems may be divided into
major functional areas. Accounting provides systems to handle general ledger, payroll,
and accounts. Marketing has TP systems to handle sales, promotion and advertising
activity. Manufacturing requires transaction processing systems and operational
feedback from production processes. Manufacturing reporting systems must often be
designed as real-time systems to allow operational managers and supervisors to closely
monitor on-going operations. Human resources needs systems to support its daily
operations in recruiting, maintaining employee records, providing benefits, and
monitoring occupational conditions. Finance requires maintaining information on cash
reserves and investment holdings, and the monitoring of changes to tax and fiscal
regulations.
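As a minimal sketch of exception reporting (the record layout and control threshold are assumptions, not from the text), a TPS can filter its routine transaction log for unusual activity:

```python
# Hypothetical transaction records from a TPS.
transactions = [
    {"id": 1, "account": "A-101", "amount": 250.00},
    {"id": 2, "account": "A-102", "amount": 9800.00},
    {"id": 3, "account": "A-101", "amount": 75.50},
    {"id": 4, "account": "A-103", "amount": 15200.00},
]

EXCEPTION_LIMIT = 5000.00  # assumed control threshold

def exception_report(records, limit=EXCEPTION_LIMIT):
    """Return the transactions that exceed the control limit."""
    return [r for r in records if r["amount"] > limit]

for r in exception_report(transactions):
    print(f"Transaction {r['id']} on {r['account']}: {r['amount']:.2f}")
```

Everything below the limit flows through routine processing; only the exceptions are fed back to the operational manager.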
Many systems, such as order processing, may involve processes that cross
functional boundaries in the enterprise. For example, an order may concern sales as
well as several other functional areas.
When transaction data grows old, it can still be of value to the organization.
However, it is not used frequently and takes up valuable disk space, so such data is
often moved to other lower cost storage areas. In the past, archived data was difficult to
access because the data archives were not well organized. Today, such archives are
placed in data warehouses, which are data storage facilities that are managed by special
database programs designed for that purpose. Analysts can then make use of on-line
analytical processing (OLAP) tools to investigate either the current transaction data or
the data that has been retired to the data warehouse facility. Using the company's data
archives to determine trends and to extract other useful information is often referred to
as data mining.
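A toy stand-in for this kind of analysis (the data and period labels are hypothetical) might aggregate archived transactions by month to surface a trend:

```python
from collections import defaultdict

# Hypothetical archived transactions: (month, amount).
archive = [
    ("2023-01", 400), ("2023-01", 350),
    ("2023-02", 500), ("2023-02", 620),
    ("2023-03", 710), ("2023-03", 680),
]

# Roll the detail up into monthly totals.
monthly = defaultdict(float)
for month, amount in archive:
    monthly[month] += amount

# A rising sequence of monthly totals is the kind of trend
# an analyst might extract from the warehouse.
for month in sorted(monthly):
    print(month, monthly[month])
```

Real OLAP tools do this across many dimensions at once (product, region, time), but the underlying operation is the same aggregation.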
For less structured problems, and most often at higher management levels,
decision support systems play a role in modeling from a wide variety of sources, both
internal and external. Such problems are often called unstructured problems although
that term can be misleading. The structuredness of a problem must be viewed as a
continuum running from the fairly unstructured to the highly structured types of
problems. No problem that can be analyzed by any technique can be considered as
totally unstructured.
The MIS field deals with all the information and problem solving activity of a
modern, successful organization. The MIS discipline brings together the various
business areas, computer science, and quantitative analysis techniques. This program
provides the theory and methodology to analyze, design, implement, and manage an
organization's information technology and systems.
assets. Without proper information channels, a business would lose its advantage over
other companies.
In the past, many people were turned off by the idea of using a personal computer
in the everyday workplace. Many more people are starting to get comfortable around
computers. New programs are always emerging that combine powerful applications
with ease of use. For instance, Microsoft, easily the biggest computer software
provider in the world, has numerous software programs that even a beginner can tackle
without much difficulty. MIS specialists must apply their knowledge of both
computers and people to the common business. They need to design programs that
appeal to and motivate employees. MIS specialists must have an overview of the
organizational, strategic, and technical issues surrounding the management of
information in businesses today. By building a solid base at businesses everywhere,
MIS specialists help prepare future managers to manage information as a resource and
to identify opportunities for using information as a competitive advantage.
• Payroll
• Personnel
• Accounting
• Inventory
It is an information reporting system, which receives data from all sections of the
business. The system is connected to a more complex network than a Transaction
Processing System.
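A minimal sketch of such a reporting system, assuming hypothetical feeds from the areas listed above, might combine them into one management summary:

```python
# Hypothetical feeds from the functional areas listed above.
payroll = {"headcount_paid": 42, "total_pay": 126000}
personnel = {"headcount": 45, "vacancies": 3}
inventory = {"stock_value": 88000, "reorder_items": 7}

# The MIS layer combines the feeds into one management summary:
# each figure draws on data captured by a different department.
report = {
    "unpaid_staff": personnel["headcount"] - payroll["headcount_paid"],
    "average_pay": payroll["total_pay"] / payroll["headcount_paid"],
    "reorder_items": inventory["reorder_items"],
}
print(report)
```

The value of the report lies in the cross-section: no single transaction system could have produced it on its own.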
The group of tools and techniques, together with their definitions and descriptions
of their interrelations, that assist in the specification and recording of an analyst's
judgments about evidence and inferences going into a hazard identification decision.
Decision Support Systems (DSS) is a dynamic and rapidly changing field which
touches on a wide range of computing topics. Decision Support Systems have been
defined as:
Computer-based systems that help decision makers confront ill-structured
problems through direct interaction with data and analysis models.
The emphasis is on problem-solving tasks which are semi-structured, i.e. they
combine human judgement with the use of computing tools and techniques. DSS do not
replace managerial judgement but rather provide support for decision making: the final
agent remains the human.
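This division of labour can be sketched as a simple what-if model (all figures hypothetical): the system computes the scenarios, but the manager chooses which assumptions to explore and which result to act on.

```python
def projected_profit(units, price, unit_cost, fixed_cost):
    """Simple what-if model; the decision maker supplies every input."""
    return units * (price - unit_cost) - fixed_cost

# The manager, not the system, decides which scenarios to explore.
scenarios = {
    "conservative": projected_profit(800, 25.0, 15.0, 5000),
    "expected":     projected_profit(1000, 25.0, 15.0, 5000),
    "optimistic":   projected_profit(1300, 25.0, 15.0, 5000),
}
for name, profit in scenarios.items():
    print(f"{name}: {profit:.2f}")
```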
Computer applications for management support are increasing and the availability
of microcomputers has dramatically increased the number of systems on managers'
desks.
About 22% of the US work force is now in the office, with that percentage rising.
Labor costs account for about 70% of the total office costs in our economy and salary
costs are increasing about 6% each year. During the past 15 years there has been
relatively little increase in productivity of the office work force, in contrast with the
manufacturing sector where the average productivity has more than doubled. The cost
of new technology aimed at increasing the productivity of office workers is going
down, while the capabilities of office automation systems have been rapidly increasing.
These two terms are now being used by people who are considering the potential
payoffs of office automation to their organizations.
The value-added approach deals with far more fundamental issues than the
replacement of some support staff positions with word processing pools. Its focus upon
individuals and groups of managers and professionals as targets for productivity
improvement brings with it opportunities for significant increases in organizational
effectiveness and major cost benefits in the largest segment of the office cost spectrum.
For any organization, management choices at many levels will affect the balance
between reducing total office costs (cost displacement) or increasing the total office
effectiveness (value-added effects).
Word processing applications have, until recently, been equated with the term
office automation. It is interesting to note that, on the average, typing tasks comprise
only 30% of the secretaries' and typists' work -- and thus account for only about 2% of
the total office salaries. The next few years will see a very rapid growth in the
introduction of advanced technology into offices and in applications with more impact
on managers and other non-clerical people, bringing with it broadened perceptions of
what office automation really includes.
people and organizations work, AUGMENTING their capabilities and increasing both
the quantity and quality of their contributions.
Business network redesign concerns itself with how multiple enterprises work
together; consideration is given as to how information is structured and handled as it
crosses organisational boundaries.
Business scope redefinition consists of applications that change the nature of the
business. Interdepartmental information systems use a shared or corporate database
facility. Procedures and practices are consistent and coordinated by the nature of the
database structure. Departments are provided with different views of the information
but the underlying data is consistent. Data captured in one department may be the
source of information for the activities of another department. Because such systems are
standardized, changes to data and the information produced have to be carefully
controlled. There is a need for enterprise systems management.
Business process redesign can have a dramatic impact on an enterprise; this is attributed
to the change of culture, the redesign of work, and the retraining or loss of staff.
Introduction
Organization
It could be said that the role of the organization is to support creative individuals
while providing an environment where they can create knowledge.
b. Is composed of people.
The major difference between personal and workgroup information systems is the
need for workgroup systems to control the shared use of resources. This must be
achieved without inconvenience to individual members of the group. Granularity refers
to the size of the shared information resource. Large granularity means that group
members share large information sources, with the possibility of delays due to
contention problems but with small information-system administration overheads.
Conversely, small granularity produces few contentions, but administration overheads
rise accordingly.
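The granularity trade-off can be illustrated with locking (a sketch, not a prescription): one coarse lock over the whole resource is simple to administer but invites contention, while per-record locks reduce contention at the cost of more locks to manage.

```python
import threading

records = {"r1": 0, "r2": 0}

# Large granularity: one lock for the whole shared resource.
# Simple to administer, but updates to different records still contend.
coarse_lock = threading.Lock()

def update_coarse(key):
    with coarse_lock:
        records[key] += 1

# Small granularity: one lock per record. Less contention, but the
# administration overhead grows with the number of locks to manage.
fine_locks = {key: threading.Lock() for key in records}

def update_fine(key):
    with fine_locks[key]:
        records[key] += 1

threads = [threading.Thread(target=update_fine, args=(k,))
           for k in records for _ in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(records)
```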
The major categories of a workgroup information system are hardware sharing and
data sharing facilities. Hardware sharing applications allow members of the workgroup
to share many hardware devices; some of these devices may be uneconomical for a
single-user system.
Collaborative writing systems let users share the writing of documents. These
applications coordinate and control the users' efforts. The tedious copying and altering
of documents in a single user environment is avoided.
Group Decision Support Systems are applications which allow individual users to
use group workboards in their own decision making process. Group decision making
can be facilitated by a controller with the group using support applications.
Group textbase systems are similar to personal systems but shared documents are
indexed for group or individual retrieval.
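A group textbase's indexing can be sketched as a tiny inverted index (document names and contents are hypothetical):

```python
# Hypothetical shared documents in a group textbase.
documents = {
    "memo-1": "quarterly sales figures and budget review",
    "memo-2": "budget approval for the marketing campaign",
    "report-1": "sales forecast for the next quarter",
}

# Index every word so any group member can retrieve shared documents.
index = {}
for doc_id, text in documents.items():
    for word in text.split():
        index.setdefault(word, set()).add(doc_id)

print(sorted(index["budget"]))  # documents mentioning "budget"
print(sorted(index["sales"]))   # documents mentioning "sales"
```

Each member queries the same shared index, whether retrieving for the group or for individual use.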
Personal computer systems are used to help individuals carry out their work. This
facilitation can be considered as support for communication, analysis, and tracking
and monitoring.
From concept to production, you can develop a database by using the system
development life cycle, which contains multiple stages of development. This top-
down, systematic approach to database development transforms business information
requirements into an operational database. (Refer to figure 3.2)
Study and analyze the business requirements. Interview users and managers to
identify the information requirements. Incorporate the enterprise and application
mission statements as well as any future system specifications.
Build models of the system. Transfer the business narrative into a graphical
representation of business information needs and rules. Confirm and refine the model
with the analysts and experts.
Design
Design the database based on the model developed in the strategy and analysis
phase.
Build the prototype system. Write and execute the commands to create the tables
and supporting objects for the database.
Develop user documentation, help text, and operations manuals to support the
use and operation of the system.
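The Build step described above ("write and execute the commands to create the tables") might be sketched with Python's sqlite3 module; the table and its columns are illustrative assumptions, not taken from the text.

```python
import sqlite3

# In-memory prototype database; schema is hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE employee (
        id     INTEGER PRIMARY KEY,
        name   TEXT NOT NULL,
        dept   TEXT,
        salary REAL
    )
""")
# A supporting object for the table: an index on department.
conn.execute("CREATE INDEX idx_employee_dept ON employee(dept)")

# A quick smoke test of the prototype schema.
conn.execute("INSERT INTO employee (name, dept, salary) VALUES (?, ?, ?)",
             ("A. Jones", "Payroll", 32000.0))
row = conn.execute("SELECT name, dept FROM employee").fetchone()
print(row)
```

An in-memory prototype like this lets the schema be confirmed against the model from the analysis phase before the production database is built.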
Transition
Refine the prototype. Move an application into production with user acceptance
testing, conversion of existing data, and parallel operations. Make any modifications
required.
Production
Roll out the system to the users. Operate the production system. Monitor its
performance, and enhance and refine the system.
The SDLC procedures ensure that timely and accurate information concerning the
progress of system development is available to stakeholders and others in the university
community. System Development Life Cycle refers to a methodology for developing
systems. It provides a consistent framework of tasks and deliverables needed to develop
systems.
The SDLC methodology tracks a project from an idea developed by the user,
through a feasibility study, system analysis and design, programming, pilot testing
and implementation.
Each Phase contains one or more individual deliverables associated with the Phase.
Figure 3.3
Planning
This is the first Phase in the SDLC. The Project Charter deliverable is the first
major deliverable in the project. The Project Charter is to be produced prior to the
commencement of a project.
Due to fiscal year budget allocations and fiscal contract restrictions the Project
Charter will typically refer to project activity up to the end of the fiscal year in which
the project starts. There may however be exceptions and special circumstances when a
Project Charter will span more than one fiscal year. In these cases, this should be
specifically stated in the Project Charter document.
For projects spanning more than one fiscal year, an updated Project Charter will
be required for each year that the project is undertaken, prior to the start of the fiscal
year (normally done in March of each year).
The updated Project Charter for a subsequent fiscal year(s) will mainly focus on
identifying the scope and deliverables of the project for that particular fiscal year. The
updated Project Charter will also include revisions (as appropriate) to project team,
budget, schedule and project status. The Business Champion is responsible for
producing the Project Charter. However, the Business Champion may delegate this
activity for completion. Together with appropriate members of the User Team, the
Business Champion will provide information for the completion of the Project Charter.
The Project Sponsor is responsible for reviewing and approving the Project
Charter and securing the necessary funding for the project.
The draft Project Charter must be submitted to the Information Systems Branch
(ISB) for Business Analyst review and QA against standards.
Definition
This Phase of the SDLC defines exactly what, who, when and how the project will
be carried out. This Phase will take the deliverable from the previous Phase (Project
Charter), expand on the high-level project outline and provide a specific and detailed
project definition.
This Phase is the first activity of the project after obtaining approval and funding
to proceed.
The description of this Phase here assumes that the following associated activities
have been completed:
This is the second Phase in the SDLC but the first Phase of the project itself. The
Project Statement deliverable from this Phase must be completed and signed-off prior
to commencing the next phase of the project.
Agreement to the Project Statement ensures that everyone involved in the project
is clear on the project scope, objectives, goals and outcomes.
The Project Manager is responsible for producing the Project Statement. However,
the Project Manager may delegate this activity to the Development team for
completion.
The Business Champion, Business Analyst and appropriate members of the User
Team will provide project information which will support the completion of the Project
Statement.
The draft Project Statement must be submitted to the Information Systems Branch
for Business Analyst review and QA against standards.
The completed Project Statement will be submitted to the Business Champion and
Project Sponsor for approval and sign-off.
In some instances the steps leading to the functional specification and evaluation
might be quite informal. The systems analyst uses the simple techniques of questioning
and comparison. By asking questions, the analyst can consult references that have
already addressed the problem or can talk to people with more experience in the
problem area under consideration. By making comparisons the analyst will be able to
recognize and isolate those parts of the problem that are familiar.
The more people that need to interact in the decision-making process the more
formal the analysis and specification of the functional requirements will have to be.
Analysis
This Phase of the SDLC is required to understand and document the users' needs
for the system. This Phase will document, in significantly more detail than the Project
Statement, the scope, business objectives and requirements of the current/proposed
system.
The emphasis throughout this Phase is on what the system is to do. During the
analysis and specification, the technical aspects and constraints should be considered,
but should not be influenced by implementation characteristics. The technical aspects of
the system are addressed in the Design Phase.
During this Phase the Data Conversion requirements, at a high level, will become
known. This will commence a parallel set of SDLC Phases for the Data Conversion
associated with the system. Data Conversion will follow Phases 3 to 6 of the SDLC.
Depending on the size and complexity of the total project (system and data conversion),
these parallel streams may be managed together or as separate projects.
In addition, the Data Warehousing requirements will also be identified during this
Phase and a parallel set of Data Warehouse SDLC Phases should commence.
Depending on the business requirements of the system, it is possible that the Data
Warehousing aspects will only be considered once the system has been operational for a
while.
These parallel streams of SDLC are shown in more detail in Figure 3.4 below.
The Project Manager is responsible for producing the deliverables associated with
the Detailed System Analysis. However, the Project Manager usually delegates this to
the Business Analyst. This deliverable has significant input from the Business
Champion and User Team. In cases where the production of some or all of the Phase's
deliverables has been delegated, the Project Manager will still maintain overall
responsibility for the production of quality deliverable(s) submitted to the Business
Champion for review and sign-off and to ISB for Quality Assurance.
The Project Manager will provide initial Quality Assurance of the deliverable
prior to review by ISB QA, User Team and the Business Champion.
The draft Detailed System Analysis must be submitted to the Information Systems
Branch for Data Administration review and QA against standards.
Design
This Phase of the SDLC continues on from the Detailed System Analysis and
describes how the proposed system is to be built. The Design is specific to the technical
environment that the system will be required to operate in and the tools to be used in
building the system. The results of this Phase will significantly impact the Build and
Transition Phases of the system.
The Project Manager is responsible for producing the deliverables associated with
the Detailed System Design.
However, the Project Manager usually delegates responsibility for some or all of
these deliverables to the Development Team. In cases where the production of some or
all of the Phase's deliverables has been delegated, the Project Manager will still
maintain overall responsibility for the production of quality deliverable(s) submitted to
the Business Champion and ISB for Quality Assurance.
The Project Manager will provide initial Quality Assurance of the deliverable(s)
prior to review by ISB QA, User Team and the Business Champion.
The draft Detailed System Design must be submitted to the Information Systems
Branch for Database Administration review and QA against standards.
Build
This Phase of the SDLC deals with the development, unit testing and integration
testing of the system (application) modules, screens and reports. In addition, this Phase
will address the preparation and establishment of the technical environment for
development, testing and training of user representatives.
This Phase is usually carried out in parallel with the development of user
procedures and user documentation from the Transition Phase. Both of these will be
required for module testing, upon the completion of the Build Phase. Coordination of
the activities of the Build and Transition Phases is a key responsibility of the Project
Manager at this time.
Any special procedures for data conversion and/or data warehousing are also
developed and tested. The processes of developing and testing of data conversion and
data warehousing modules is no different from those required for the system itself.
The Project Manager is responsible for producing the deliverables associated with
the Build Phase. However, the Project Manager usually delegates responsibility for
some or all of these deliverables to the Development Team. In cases where the
production of some or all of the Phase's deliverables has been delegated, the Project
Manager will still maintain overall responsibility for the production of quality
deliverable(s) submitted to the Business Champion, User Team and ISB for Quality
Assurance.
The Project Manager will provide initial Quality Assurance of the deliverable(s)
prior to review by ISB QA, User Team and the Business Champion.
The draft System Build must be submitted to the Information Systems Branch for
technical review and QA against standards.
The completed System Build will be submitted to the Business Champion and
Project Sponsor for approval and sign-off.
Transition
This Phase of the SDLC is to prepare for and carry out the transition of the
developed system through user and acceptance testing to a full production system.
This is the sixth (and in some cases last) Phase in the SDLC. This will be the last
Phase only for those Business Systems that (for specific documented reasons) will not
make their data available in the Data Warehouse.
This Phase will provide users with the documentation and training to effectively
use the system. Although the Data Conversion will only be done once, user
documentation will also be required.
The Project Manager is responsible for producing the deliverables associated with
the Transition Phase. However, the Project Manager usually delegates responsibility for
some or all of these deliverables to the Development Team. In cases where the
production of some or all of the Phase's deliverables has been delegated, the Project
Manager will still maintain overall responsibility for the production of quality
deliverable(s) submitted to the Business Champion, User Team and ISB for Quality
Assurance.
The Project Manager will provide initial Quality Assurance of the deliverable(s)
prior to review by ISB QA, User Team and the Business Champion.
The draft deliverables must be submitted to the Information Systems Branch for
technical review and QA against standards.
Warehouse
This Phase of the SDLC addresses the publication of the system's data into the
Ministry's Data Warehouse for business manipulation and decision support. Although
described as one Phase here, the Warehouse Phase actually comprises, as appropriate,
all the deliverables associated with SDLC Phases 2 [Definition] through 6 [Transition].
The Project Manager will provide initial Quality Assurance of the deliverable(s)
prior to review by ISB QA, User Team and the Business Champion.
The draft deliverables must be submitted to the Information Systems Branch for
review and QA against standards.
This evolution involves major leaps in the complexity of the tasks that IT is being
designed to perform. As we review this evolution, a consistent pattern of change
emerges in the business application of IT. As we evolve from automation of work
through information management to business transformation, the strategic
importance of IT applications increases, and the amount of organizational change
required to realize the benefits of an application is also greater. Specifically, an
increasing number of changes are being made to elements of the business system
beyond IT such as business processes, organisational structure and even business
culture. At the same time, the number and complexity of applications (or potential
applications) also increases. The three stages of evolution are summarised in the
table 4.1 below.
Table 4.1
• Automation of work
outside of the payroll department. The most important thing was that the payroll
application ran correctly.
• Information Management
yield management systems. In the later steps of the information stage, automated
information bases provided opportunities to design new products, such as today's
multitude of mutual funds and numerous volume-based discount plans for valued
customers. It was no longer sufficient to simply provide the application and make
sure that it worked as specified. For these benefits to be realized, the nature of
people's work had to change. Business processes had to be restructured and better
integrated. Reward systems had to change. Significant learning was required. The
changes crossed functional boundaries, and in some cases, changed or eliminated them.
Physically, personal computers emerged from behind the walls of the central
data center. PCs began to appear everywhere in organizations and to be operated
by nonexperts. The number of potential applications of technology increased
dramatically. Many of these were conceived outside of the IT world, by the
broader community of business managers and front-line technology users.
• Business transformation
systems and more fighting for ownership of the client with travel agents, and thus
redefining the travel agent business. Amazon.com is helping to redefine the book
industry. It is not only selling books electronically and offering a wider selection
than is possible in physical bookstores, it is using the power of computers to
repackage - and eventually transform - a range of services that were historically
spread across multiple businesses, including the reference capabilities of libraries,
the retail display and selection expertise of bookstores, the efficiency of volume
discount distributors and the knowledge of professional book reviewers.
Linkage
Reach
People
A large number and diversity of people must be motivated and prepared to change.
This critical factor in business transformation is often underestimated. We need to
understand who these people are today, how they will have to change and what
interventions will be required to effect the change. We need to ask how these interventions
will be managed for people with different starting points, attitudes and motivations.
Time
In business transformation, time is always of the essence, but realistic time frames are
notoriously hard to estimate. We need to ask - and ask again and again - what the realistic
length of time is for all the necessary changes to occur and for the full benefits to be
realized. We must base these estimates on understanding the previous three dimensions.
In the automation stage, the four dimensions were fairly straightforward and posed
few problems. In the case of automated payroll systems, for example, there were few
linkages, organizational reach was limited and few people were affected. The time
required to deliver the benefits was short; or, at least, the time frames were easily
predictable in advance. Finally, benefits were easy to measure. As we moved through
the information stage, there were more linkages, not all of which were obvious.
The potential risks and rewards associated with such cases of business
transformation show what is involved in engineering our transition to a knowledge
economy. The opportunities include expanding geographic scope, expanding electronic
commerce and creating virtual companies. We are moving toward an economy that is
on-line, interactive, instantaneous, inter-networked and knowledge based. It is an
economy that will require new organizational forms and which will dramatically change
the nature of organizations and work.
While the opportunities created by business transformation are awesome, the risks
can be daunting to investment decision-makers. Today's large-scale IT projects and
organizational change programs will be viewed as relatively simple initiatives
compared to the sophisticated business transformation ones that will be required in the
knowledge economy. These will raise significant new issues of linkage, reach, people
and time. To manage these dimensions of complexity successfully, business
transformation initiatives can no longer be viewed as traditional projects. They will
need to be treated almost like mid-size businesses within the business, as programs that
are managed continuously and proactively over long periods of time.
Introduction
Managers earn more than operators because their decisions have a significant
effect on an organization's performance and because of the inadequate supply of
effective managers.
Managers come in all shapes and sizes. First of all they exist at various levels
within the (hierarchic) structure of an organization. Secondly, they are found in a
variety of functional areas, such as marketing, finance and manufacturing.
Nevertheless, while there are obvious differences between the different levels
and functional areas, all managers perform the same functions and play the same roles.
Managers transmit and receive information in both written and oral form.
Written communications include reports, letters, memos, books and magazines,
electronic mail and electronic bulletin boards like the World Wide Web. Oral
communication comes from telephone calls, formal and informal meetings, and
voice mail.
Problem solving is the set of activities that leads from the recognition of a
problem to its solution, where a problem is a condition or event that is (potentially)
harmful or, alternatively, a condition or event that is (potentially) beneficial. While
trying to find a solution to a problem, managers engage in a sub-process of
decision making, where a decision is a particular selected course of action. In
reaching a solution to a given problem, it is generally necessary to make decisions.
The terms 'management' and 'leadership' are often used to mean the same thing. But what
many people don't realize is that they are quite different. What is management and what
is leadership? How do they relate?
success of organizations. Other key words that are used to define management are
coordinating, resources, accomplishment, and desired results.
Leadership is not easy to define. One thing that is clear is that leadership's
function is to influence, where influence is defined as the ability to modify or change
the behavior of people. Influence has five bases: legitimacy, coercion, rewards,
expertise and referent. All these bases help to exert influence. Management and
leadership are both arts of influencing people. Leadership deals exclusively with
influence over others, with making people do something they otherwise would not do
unless influenced. Management is much the same but is not so aggressive as leadership;
it is a way of making sure people stay on task when they may not be on task.
In management, the idea is to accomplish a goal that is set forth and agreed upon.
Little real persuasion goes into this, because the goals are set and decided
on and the only thing needed is to do the work. In leadership, influence must be
exercised: influence is a process of making people do something they otherwise would
not do. In management, influence is not very great because the goals are established and
agreed on. Influence is not a major part of management in the way it is of leadership.
Management and leadership are not the same. Management is a way to accomplish
goals that have been agreed upon. Leadership is influencing people. The two have many
areas that seem the same but they are very different because of the level of influence involved.
Management Functions
Henri Fayol, a French industrialist from the early part of the 1900s, proposed that
managers perform five management functions: POCCC (plan, organize, command,
coordinate, control). These functions still provide the basis around which popular
management textbooks are organized. However, the functions have been condensed to four.
Although the functional approach is clear and simple, critics have suggested
that it does not provide an accurate description of what managers actually do.
Management Roles
Figurehead
The manager performs ceremonial and other duties, such as greeting visiting
dignitaries and signing legal documents.
Leader
The manager maintains the unit by hiring and training the staff and providing
motivation and encouragement
Liaison
The manager makes contact with persons outside the manager's own unit -
peers and others in the unit's environment - for the purpose of attending to business
matters. In many organizations it is the practice for all mail to be addressed to or
signed by the manager.
Monitor
Disseminator
Spokesperson
The manager passes valuable information along to those outside the unit -
superiors and persons in the environment. Common activities are issuing press
releases, giving media interviews, reporting to board meetings and so on.
Entrepreneur
Disturbance handler
Resource allocator
The manager controls the purse strings of the unit, determining which
subsidiary units get which resources.
Negotiator
The manager resolves disputes both within the unit and between the unit and
its environment.
Management Skills
Managers need certain skills to perform the varied duties and activities
associated with being a manager. Robert L. Katz found through his research in the
early 1970s that managers need three essential skills or competencies.
2. Human skills include the ability to work well with other people both
individually and in a group. The importance of human skills remains consistent,
regardless of level within the organization.
All managers perform essentially the same functions, but lower-level managers
emphasize leading while upper-level managers spend more of their time planning,
organizing, and controlling. For the most part, the manager's job is the same in both profit
and not-for-profit organizations. Managers in small businesses tend to emphasize the
spokesperson role and are generalists. Also, the formal structure and nature of a
manager's job in a large organization is replaced by more informality in a small firm.
When managers work in different countries, they often need to modify their practices.
Management Structure
The model above (see figure 5.1) is rather simplistic but clearly, in any
substantial organization, there will be a top layer of senior management dealing
with high level, strategic and long-term issues for the organization. Below them
will be a middle management layer with responsibilities for performance
monitoring and shorter time horizons. Finally, there will be an administrative level,
ensuring that the day-to-day operational activities are effectively managed.
Upper Management uses a ‘Decision Support System’ to help make its decisions.
Decisions by middle management, usually the Directors, cover a broader range of time
than operational management but less broad than upper management. These decisions
involve experience using historical data to plan and control operations and implement
the policies of upper management. Middle management decisions are semi-structured.
Scope of activity: extremely broad (upper management); entire functional area (middle management); single subfunction or task (operational management)
Table 5.1
Measuring Performance
As stated previously, effective information managers are those who have a clear
understanding of what information they require to tell them how they are performing
against their objectives. That is not always as easy as it sounds. Managing a busy
department, for example, is a 'messy' business, with problems coming from all
directions and the manager constantly having to reassess priorities to deal with them. It
is easy to lose sight of the real goals and then, as a consequence, to have to take drastic
short-term actions to regain course.
Critical success factors (CSFs) are crucial areas of responsibility where 'things
must go right', requiring 'constant and careful attention from management' - for example,
responding to new business enquiries within two working days.
3. Set up the systems necessary to deliver that information as and when required.
This process has been applied and can be seen in many large multi-national
companies today, where subsidiaries operate with a high degree of autonomy, reporting
regularly to the parent company or Board on their performance in a number of 'key' areas.
Previous research has shown that the informal channels for collecting what
is termed 'soft' information can be very important, particularly in more senior
positions within the organization. Indeed, less regard is likely to be given to
'hard' factual information. This tends to reflect the way in which managers work
in the real world, where informal soft information (over the telephone or in the
canteen) is collected and stored until other items of information (either soft or
hard) reinforce or add to it, like pieces in a jigsaw. The human
mind is extremely effective at this form of 'information management', far more so
than computers, where attempts to mirror this methodology have been relatively
unsuccessful.
So, an important lesson for the manager is not to focus solely on hard information
and associated systems but also to consider how to develop and nurture the informal
contacts and networks, which operate in and between organizations.
Decision Making
Figure 5.3
At the lower management levels, daily operations are the chief concern. At
this operational management level, transaction processing and process control data
are the major concern, and information systems are often highly automated and critical
for managing competitively. Beneath these systems, in the "basement" under the
management information system [MIS], the transaction processing and other
automated production processes churn out masses of data that must be stored,
processed, and converted into business intelligence that feeds into every level of
management.
Structured decisions have a known and well-defined solution that uses fully
available data. The outcomes will always be the same. Examples of structured
decisions are calculating net pay, calculating interest on a loan, etc. In nonstructured
(or unstructured and semi-structured) decisions, there is no agreed-upon
procedure and not all the necessary data may be available. Such decisions require
judgement, evaluation and intuition. There are few fully unstructured business
decisions; in many, parts of the problem are structured and parts require judgement
and intuition.
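A structured decision such as the payroll and loan examples just mentioned can be written directly as code, since the rule is fixed and all inputs are available. A minimal sketch in Python; the rates and figures are illustrative assumptions, not taken from the text:

```python
def simple_interest(principal: float, annual_rate: float, years: float) -> float:
    """A structured decision: a fixed, well-defined rule with fully available data."""
    return principal * annual_rate * years

def net_pay(gross: float, tax_rate: float = 0.20) -> float:
    """Another structured decision: the same inputs always yield the same outcome."""
    return gross * (1 - tax_rate)

print(simple_interest(10_000, 0.05, 2))  # interest on the loan
print(net_pay(3_000))                    # pay after the assumed flat tax
```

Because the outcome is fully determined by the rule, such decisions are the easiest to automate, in line with the 'programmable' decisions described later.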
Each level has different types of decisions and, therefore, measures the value of
information differently.
1. Intelligence [Finding out what is going on, what the significant problems are]
Problem Solving
♦ Structured Problems
• easily automated
♦ Unstructured Problems
• difficult to automate
♦ Semi-structured Problems
2. All problems that can be approached in a rational manner are capable of some
degree of structuring.
MIS are often categorized according to the type of approach that they employ in
helping resolve managerial problems related to these activities. Managerial problem-
solving approaches range from the highly structured to the relatively unstructured.
Structured problems are sometimes called "programmable" decisions because they are
easily formalized with models. Structured problems predominate in operations, where
data processing on transaction data can be coupled with process control data to permit
highly automated decision making. When such processes become very repetitive, they
may be fully automated, leaving humans to simply monitor the decision and outcomes.
On the other end of the range, unstructured problems require tools that permit
managers to model the dimensions of their problems, homing in on the best solutions to
difficult, and often unique, situations. However, no problem that confronts management
is ever totally unstructured. If a problem can be identified, it must contain elements that
allow it to be structured to some degree. The amount of structuring that is done depends
upon the nature of the problem and the time and skill the manager has to devote to it. It
also depends upon the manager's own decision making style.
In the middle between the highly structured and the unstructured problems lie
many semi-structured problems in which parts, but not all of the problem can be
approached using formal decision making models. For example, the decision of
AquaPenn to acquire additional manufacturing facilities on the West Coast was based
partly upon a structured analysis of the logistics of supplying customers in that region
of the country. Solid data, and the knowledge to use it, were available concerning the
logistics. However, parts of the problem, such as those involving the activities of their
competitors or the future demand for bottled water, required a less structured analysis of the
market for bottled spring water. Not all factors were known and not all data was "hard".
Different employees have different needs. The most effective managers have learned to
adapt their style based on the individual's needs.
• Employees at this level are very fragile and should be protected from change
as much as possible. Keep as much of their routine the same as you can, since
any type of change - even what seems to be a small one - will be terrifying to
them and their reactions may be extreme.
• Employees at this level need a "parent" figure to help them deal with the
changes - someone who can help them feel safe and secure with the changes as
well as help them adjust to the changes at a pace that they can handle.
Otherwise, they will create all sorts of problems for management.
• People at this level need to maintain their sense of self-worth through the
changes. If they feel threatened, they will fight back in whatever ways they can
find. If they can be recruited to have a feeling of personal importance in the
change, they will find it much easier to deal with. It is important to help them
understand that fighting the changes is not in their best interest, and that
working with others is the way they can benefit the most.
• The challenge for people at this level is to feel that everyone's needs are being
taken care of. They will feel the pain of their co-workers' struggles to cope
with change, yet they may not know how to deal with their own feelings of
insecurity. They most value being part of a team and will usually want to help
others adjust to the changes. They may participate actively in the gossip about
changes. People at this level can assist by being asked to "partner" with
someone who is expected to have more difficulty in coping with changes.
• People at this level will be able to understand the reasons behind the changes
and can be very helpful as a calming presence for people at other levels. They
can be most helpful to management if they are told about upcoming changes
and have time to help prepare others. They will be most effective in this role
when they can see the change(s) as a positive one and translate that message to
those who are more fearful. They will usually take a more philosophical
attitude toward the changes when they are told the larger picture and overall
strategy behind the changes. As with people in the "relating" perspective,
employees at this level can be very helpful if they are asked to help others cope
with the changes through a "partner" or "buddy" arrangement, whether formal
or informal.
Note: "employee" can be any person who works in an organization - line, staff,
management or executive
♦ Top Management:
Top management has a hard time coming to grips with the direct implications of
the change. They often underestimate the impact that change has on their employees.
They tend to isolate themselves. Often they engage in strategic planning sessions and
gather information in survey reports. They avoid communicating or seeking "bad
news," because it is difficult for them to admit "they don't know." They expect
employees to "go along" when a change is announced and they blame their middle
managers if people resist or complain about the change. They often feel betrayed when
employees don't respond positively.
♦ Middle Management:
Managers in the middle feel the pressure to "make organization change" according
to the wishes of top management. They feel pulled in different directions. Middle
managers often lack information and leadership direction to focus on multiple priorities.
They are caught in the middle, and often fragmented because they don't have clear
instructions. They feel besieged with upset, resistant or withdrawn employees who no
longer respond to previous management approaches, and deserted, blamed or
misunderstood by their superiors.
♦ Employees/Workers/Associates:
6.1 Introduction
A detailed and specific knowledge base is the source of the problem solving
power of an expert system (Waterman [1986]). The manner in which the knowledge
base is used will be determined by the system's meta knowledge. Meta knowledge can
be seen as a set of rules governing how the knowledge base is applied. Meta knowledge
determines what case specific knowledge is obtained from the user and what knowledge
from the knowledge base is incorporated in the problem solving process. It also
provides the means to combine these two sources of knowledge for an expert system to
generate its output.
A model with a less advanced meta knowledge structure may produce the same
conclusion, but it would be less discriminating in its use of the knowledge base. As a
result, it may incorporate many pieces of knowledge that were not strictly necessary in
its analysis. A procedurally coded computer model will necessarily place a heavier
reliance on its knowledge base than it will on a meta knowledge structure. This is not
necessarily a failing. Knowledge is typically easier to capture and represent than meta
knowledge. Expert systems are commonly constructed using proprietary shell packages.
These packages provide the tools to construct the knowledge base required by an expert
systems model and the inference engine and meta knowledge required to access that
knowledge base. The construction of an expert system requires the skills of a
knowledge engineer, as opposed to a conventional programmer. In contrast, decision
support systems can be constructed using only a conventional program code compiler.
A compiler embodies less functionality than an expert system shell, but as a result,
needs to be much less structured and can therefore allow the developer much more
freedom over the form of a decision support system. The programming skills required
to use a conventional program compiler are much more readily available and therefore
potentially more cost effective than the skills of a knowledge engineer.
The remainder of this paper is organized as follows. The second section discusses
the distinction between expert systems and decision support systems. The third section
reviews the application of expert systems to audit risk assessment. The fourth outlines
the construction of two computer models, one expert systems based and the other
procedural. The final section provides a summary and conclusion.
the field is widely seen as having arrived, failed, and disappeared. However, it is
important to keep in mind that much of this perceived failure is due to unrealistic
expectations fueled by hype rather than a lack of achievement by ES researchers
and developers. In fact, such a fate is not uncommon in information technology.
The basic idea behind expert systems is to create small, practical "intelligent"
systems by eliciting a knowledge base of rules from experts in a particular field
and providing non-expert users with means of accessing this knowledge easily. For
example, an early (circa 1975) expert system called MYCIN helped physicians to
diagnose infectious blood diseases and prescribe medication. The knowledge base
of MYCIN was elicited from a large number of experts in the field and contained
an enormous amount of information about different types of bacteria associated
with blood diseases. Although MYCIN provided diagnoses as good as or better than
those of any human expert, the technology itself never achieved widespread acceptance in
the medical community.
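The core mechanism behind a rule-based system like MYCIN can be sketched as a small forward-chaining engine: rules fire whenever their conditions are satisfied, adding new conclusions to the set of known facts. A toy Python sketch; the rule names below are invented for illustration and are not real medical knowledge (MYCIN itself was far more sophisticated, using certainty factors):

```python
# Each rule pairs a set of required conditions with a single conclusion.
RULES = [
    ({"gram_negative", "rod_shaped"}, "likely_enterobacteriaceae"),
    ({"likely_enterobacteriaceae", "hospital_acquired"}, "consider_klebsiella"),
]

def infer(facts: set) -> set:
    """Repeatedly fire any rule whose conditions are all satisfied,
    until no new facts can be derived (forward chaining)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer({"gram_negative", "rod_shaped", "hospital_acquired"}))
```

The non-expert user supplies only the case-specific facts; the elicited rule base does the rest, which is exactly the division of labour the paragraph above describes.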
be able to infer new facts from your existing knowledge, as and when needed, and
capture general abstractions, which represent general features of sets of objects in
the world.
AI is a field that overlaps with computer science rather than being a strict
subfield. Different areas of AI are more closely related to psychology, philosophy,
logic, linguistics, and even neurophysiology.
6.4 Is AI Possible?
The most well known contributions to the philosophical debate are Turing's
"Turing test" paper, and Searle's "Chinese room". Very roughly, Turing considered
how you would be able to conclude that a machine was really intelligent. He
argued that the only reasonable way was to do a test. The test involves a human
communicating with a human and with a computer in other rooms, using a
computer for the communication. The first human can ask the other
human/computer any questions they like, including very subjective questions like
"What do you think of this Poem". If the computer answers so well that the first
human can't tell which of the two others is human, then we say that the computer is
intelligent.
wouldn't really be, as they'd be just using something like the rule book of the
Chinese room.
Many people go further than Searle, and claim that computers will never even
be able to appear to be really intelligent (so will never pass the Turing test). There
are therefore a number of positions that you might adopt:
Computers can clearly behave intelligently in performing certain limited tasks, but full
intelligence is a very long way off and hard to imagine. However, these philosophical
issues rarely impinge on AI practice and research. It is clear that AI techniques can be
used to produce useful programs that conventionally require human intelligence, and that
this work helps us understand the nature of our own intelligence.
• Planning: The ability to decide on a good sequence of actions to achieve your goals.
• Robotics: The ability to move and act in the world, possibly responding to new
perceptions.
By expert reasoning I mean things that only some people are good at, and which
require extensive training. It can be especially useful to automate these tasks, as there
may be a shortage of human experts. Expert reasoning includes:
• Medical diagnosis.
• Equipment repair.
• Computer configuration.
• Financial planning.
Expert Systems are concerned with the automation of these sorts of tasks.
AI research is concerned with automating both these kinds of reasoning. It turns out,
however, that it is the mundane tasks that are by far the hardest to automate.
Having decided that your problem is suitable you need to extract the knowledge
from the expert and represent it using your expert system shell. This is the job of the
knowledge engineer, but involves close collaboration with the expert(s) and the end user(s).
To extract knowledge from the expert the knowledge engineer must first become at
least somewhat familiar with the problem domain, maybe by reading introductory texts or
talking to the expert. After this, more systematic interviewing of the expert begins.
Typically experts are set a series of example problems, and will explain aloud their
reasoning in solving the problem. The knowledge engineer will abstract general rules
from these explanations, and check them with the expert.
As in most applications, the system is wasted if the user is not happy with it,
so development must involve close collaboration with potential users. As
mentioned in the introduction, the basic development cycle should involve the
rapid development of an initial prototype and iterative testing and modification of
that prototype with both experts (to check the validity of the rules) and users (to
check that they can provide the necessary information, are satisfied with the
system's performance and explanations, and that it actually makes their life easier
rather than harder!).
In order to develop the initial prototype the knowledge engineer must make
provisional decisions about appropriate knowledge representation and inference
methods (e.g., rules, or rules+frames; forward chaining or backward chaining). To
test these basic design decisions, the first prototype may only solve a small part of
the overall problem. If the methods used seem to work well for that small part it's
worth investing the effort in representing the rest of the knowledge in the same form.
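One of the provisional design decisions mentioned above is the choice between forward and backward chaining. Complementing the forward-chaining idea, backward chaining starts from a goal and works back through the rules that could establish it. A minimal sketch; the loan-approval rules are hypothetical, chosen only to show the mechanism:

```python
# Map each goal to the alternative sets of conditions that establish it.
RULES = {
    "approve_loan": [["income_ok", "credit_ok"]],
    "credit_ok": [["no_defaults"]],
}

def prove(goal: str, known: dict) -> bool:
    """Backward chaining: to prove a goal, find a rule concluding it and
    recursively prove that rule's conditions. Facts no rule can derive
    must already be in `known` (a real shell would ask the user)."""
    if goal in known:
        return known[goal]
    for conditions in RULES.get(goal, []):
        if all(prove(c, known) for c in conditions):
            known[goal] = True
            return True
    known[goal] = False
    return False

facts = {"income_ok": True, "no_defaults": True}
print(prove("approve_loan", facts))  # True
```

Forward chaining suits monitoring-style problems where data arrives first; backward chaining suits diagnostic-style problems where a specific conclusion is being tested, which is why the prototype stage is a good time to try both.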
Expert system development was very trendy around 5-10 years ago, with
unrealistic expectations about the potential benefits. Now some cynicism has set
in. Expert system shells are in fairly wide use, but are often used to solve fairly
simple problems, and are chosen as much for their user interface and development
environments as for their inferential abilities.
History
Firstly, in its supporting role, the IS strategy has to be clearly linked with the
corporate and business goals of the organization. Its constituent parts will consider
specific issues, often viewed as sub-strategies, in considerable detail. As an example,
the software sub-strategy may include a detailed software replacement programme,
identifying which systems will be replaced and when, together with the resources
required at each stage. This sub-strategy will have been determined by reference to
the overall business strategy, as well as determining factors at the IS strategy level
such as 'a decision to source future systems off the shelf rather than develop them in-house'.
This would also be reflected in the sub-strategy dealing with I.T. staffing.
Key Players
Much has been written about who the key players are in IS strategy formulation
and a preliminary stakeholder analysis may give the clue as to who should be
involved. The culture of the organization - centralized or devolved - will also act as a
determining factor. IT directors have seen their fortunes wax and wane over the years
as more and then less were afforded Board level status. The concept of the 'hybrid
manager' has been adopted in many organizations.
Components of an IS Strategy
Undoubtedly, there is no one arrangement which best suits all. Similarly, the
content of an IS strategic plan may very well be a 'mix and match' most appropriate to
the organization concerned - although elements of 'best practice' are well described in
the text and include the following:
• An implementation plan.
Rationale
indeterminate and potentially has a greater impact on the well-being of the parent
organization.
Framework
It has to be emphasized at this point that frameworks are not always mutually
exclusive. However, a particular approach may lend itself to the style and culture of an
organization.
Introduction
There is a real danger in attempting to use too many tools and techniques in
formulating an IS strategy, and the predominant framework, as discussed in the previous
section, ought to give a firm steer as to which ones may be most appropriate.
Tools
The value of each of the tools described lies in its ability to clarify, to a greater or
lesser extent, a complex situation. Often, they also force the planner to consider a range
of issues from a different perspective. As a consequence, 'Beauty' - the value of a
particular tool - is very much 'in the eye of the beholder'. For example, the layered
structure and inter-connectedness of Rockart's Critical Success Factors is immediately
attractive to someone coming from a highly stratified and bureaucratic organizational
background but would be alien to someone from an informal and innovative
organization such as 3M.
Each strategy may have several objectives. It is at the level of objectives that it is
reasonable to introduce performance measures to gauge progress toward achieving the
objectives and, therefore, the strategies and goals.
Projects are typically the links between plans and the budgeting process. As they are
used in this document, projects are undertakings directed at the accomplishment of an
objective. Projects, with cost and schedule estimates, may implement new information
systems or major improvements to existing systems. The completion of projects should
mark progress toward reducing the gap between an organization's current state and its
future vision.
The literature on the strategic planning process is extensive; the references provide
some recommended starting points. The ITSC has found two documents to be especially
helpful. Wells [1995] has evolved a structured strategic planning process that is
appropriate for public sector organizations. While many variations of this process are
possible, it does provide a basic framework.
To develop the goals, strategies, and objectives, managers work through a gap
analysis to identify what needs to be changed to move the organization from its current
state to the new desired state. This gap analysis further illustrates the relationships
between strategic planning and typical business process reengineering (BPR) activities,
during which the "as is" and "to be" states are described.
Figure 7.1 shows that developing goals, strategies, and objectives marks the
completion of the strategic plan. Understandably, the calendar time and effort to
develop the plan will vary among organizations. In the Wells approach, the strategic
foundations -- the vision, mission, and guiding principles -- are developed in a 3-day
workshop with trained facilitators.
After the workshop, top management usually seeks the assistance of other
members of the organization to complete the goals, strategies, and objectives, based on
the strategic foundations.
Implementation efforts are monitored and measured so that the organization can
evaluate progress toward achievement of its goals, strategies, and objectives.
Information from this evaluation is used as input to the next iteration (see Figure 7.1) of
the strategic planning process where the strategic plan is validated and updated based
on changed conditions. An important by-product of this process is team-building and
the breaking down of functional barriers. Team building is a natural by-product because
senior managers are discussing the future of the organization and how they can co-
operate towards achievement of that future. The managers have agreed to work together
for the overall aim of the organization rather than their departments. This leads to
organizational alignment and the breaking down of barriers among departments.
Strategic Goals
The strategic goals are broad statements about where the organization wants to be
at some point in the future. These goals work towards achieving the overall mission of
the organization and help achieve the vision of the future. The strategic plan usually
contains goals in multiple strategic areas.
The strategic goals are sometimes confused with the future vision. Whereas the
future vision is a single statement or paragraph that describes a desired organizational
end-state at some point in the future, the strategic goals are general statements of
activities that if pursued would lead to the attainment of the future vision. In this
context the future vision is a statement about where to go, and the strategic goals are
statements about how to get there.
Objectives represent specific courses of action that are bounded by and support the
strategies. They contain a target and a performance measure. In the absence of targets
and performance measures, it is impossible to measure progress towards achieving the
strategic goals.
Introduction
expression, and should therefore be read with at least as critical a disposition as any
other paper in the area. It may also be compared with other critical interpretations such
as Swatman & Swatman (1992), Galliers (1993) and Ciborra (1994).
The notion and its origins are first discussed. The emergence of the key ideas is
then traced. The process whereby strategic information systems come into being is
assessed. Finally, areas of weakness are identified, and directions of current and future
development suggested.
Origins
The role of Information Systems (IS) has developed over the years. The original
conception was of the automation of existing manual and pre-computer mechanical
processes. This was quickly succeeded by the rationalisation and integration of systems.
In both of these forms, IS was regarded primarily as an operational support tool, and
secondarily as a service to management.
During the 1980s, an additional potential was discovered. It was found that, in
some cases, information technology (IT) had been critical to the implementation of an
organisation's strategy. The dominant sense in which the term is used is that a strategic
information system (SIS) is an information system which supports an organisation in
fulfilling its business goals.
organizational culture and skills, and other 'invisible assets' (Chamberlin 1933, Itami
1987). Competition therefore means cultivating unique strengths and capabilities, and
defending them against imitation by other firms. Another alternative sees competition
as a process linked to innovation in product, market, or technology (Schumpeter 1950).
The context within which SIS theory emerged was the competitive strategy
framework put forward by Porter (1980, 1985), which was based on industrial
organisation economics (for developments along that path, see Kaufmann 1966,
Kantrow 1980, Pyburn 1981, Parsons 1983, EDP Analyzer 1984a, 1984b, McFarlan
1984, Benjamin et al 1984, Wiseman & Macmillan 1984, Ives & Learmonth 1984, Cash
& Konsynski 1985, Porter & Millar 1985, Keen 1986, King 1986). This first section
outlines the basis of that theory. It will then be shown how strategic information
systems theory is concerned with the use of information technology to support or
sharpen an enterprise's competitive strategy.
There are two basic strategic stances that enterprises can adopt:
• low cost; and
• product differentiation.
In the long run, firms succeed relative to their competitors if they possess
sustainable competitive advantage in either of these two, subject to reaching some
threshold of adequacy in the other.
Another important consideration in positioning is 'competitive scope', or the
breadth of the enterprise's target markets within its industry, i.e. the range of product
varieties it offers, the distribution channels it employs, the types of buyers it serves,
the geographic areas in which it sells, and the array of related industries in which it
competes.
Under Porter's framework, enterprises have four generic strategies available to
them whereby they can attain above-average performance. They are:
• cost leadership;
• differentiation;
• cost focus; and
• focused differentiation.
Porter's representation of them is reproduced in Figure 7.2.
Figure 7.2
By performing these activities, enterprises create value for their customers. The
ultimate value an enterprise creates is measured by the amount customers are willing to
pay for its products or services. A firm is profitable if this value exceeds the collective
cost of performing all of the required activities. To gain competitive advantage over its
rivals, a firm must either provide comparable value to the customer, but perform
activities more efficiently than its competitors (lower cost), or perform activities in a
unique way that creates greater buyer value and commands a premium price
(differentiation). (Refer to figure 7.3)
Figure 7.3
Many differentiation bases exist, classified into four major groups (Border 1964,
quoted in Wiseman 1988):
IT can be used to support or sharpen the firm's product through these various attributes.
Figure 7.4
• New technologies. These may alter the path of the value chain, e.g. the
invention of semiconductors forced many vacuum-tube producers out of
business, and the printing and publishing industries are currently confronted by a
major upheaval;
• New or shifting buyer needs. Customers are demanding the convenience and
consistency offered by fast-food chains. This in turn influences related
market segments;
First is integrating the network into the organization's business strategy thus
ensuring the flexibility that today's businesses require. New business opportunities
require new applications that need network support. An integrated planning process
ensures that the network can meet new demands and support new directions.
Competition in the last half of the 1990s and beyond requires that all business
investments provide benefits greater than their costs; networking is no different. Planning
any major business investment is challenging; however, planning the enterprise network
is especially complex. Network planners often face several constraints that limit their
progress. For example, most organizations large enough to consider data networks have
already committed to a computing strategy, which may confine network planning
options. Many have already implemented data networks, and the financial inertia of
existing systems limits planning mobility. Additionally, new computing architectures,
machine platforms, and business reengineering make network planning a little like
building an ocean liner in mid-voyage. Network planners must consider the constraints
as they develop their plans and evolve towards an effective enterprise network.
Today's businesses put much greater demands on the network than those of just a
few years ago. These increasing demands are coupled with unprecedented technological
advances, service offerings, and vendor selection options. Uniting the business and IT
plans with an appropriate enterprise network is becoming increasingly difficult.
8.1 Introduction
Workflow systems are the flagship information system within the class of
group productivity software. Business processes are dynamic in nature. As the
market changes, as the organization changes, as regulations change, the business
process must change in response. Workflow systems are the essential tools that
an organization must have to respond to this changing environment.
The basic idea behind groupware is said to be moving information, not people, by
using software-supported "intentional group processes" aimed at what one commentator
defines as "proactive analysis, compression and automation of information based tasks
and activities". Group Decision Support Systems (GDSS) are defined as interactive
computer systems supporting a group's formulation and solution of "unstructured
problems". Workflow is defined as a "process driven" way of managing a series of tasks
defined by procedures related to the flow of documents through organizations. It is
noted that one of the biggest problems in the workflow world is the lack of a "succinct
definition", although a March 1996 report identifies four types of workflow:
Production, Collaborative, Administrative and Ad hoc.
Early workgroup computing systems began to appear in the mid-1980s. The focus
of these systems, not surprisingly, was on the automation of mission-critical business
processes, which are repetitive in nature. These systems became known as "workflow
systems", with FileNet Corporation becoming the clear market leader in automating
repetitive business processes. In reality, these systems are simply transaction processing
systems that automate paper processing in addition to data processing. Early
applications included bank loan processing, insurance claims processing, and records
management archiving.
As the workflow market has evolved, systems have tended to specialize in one of
three types of applications:
• Collaborative workflow systems for the automation of the more fluid, mission-
critical, knowledge-based business processes.
Figure 8.1
The workflow product must provide an easy-to-use, visual, point-and-click
interface for configuring screens and building workflow scripts. Screen building, script
building, and workflow should be integrated into a single user interface.
The workflow product must be application independent, that is, it must not impose
a particular data or routing structure. The rules that define the organization structure,
document structure, and process routing must be completely customizable by
authorized users. Furthermore, the user must be able to change any of these workflow
definition structures through an intuitive interface, without requiring programmers
to implement the changes.
environments should be able to define workflow objects with all of the key capabilities
of the underlying workflow product.
Figure 8.2
• Job Definition: The ability to define the workflow job as a series of tasks
and link each task to a set of task completion rules.
must support parallel document routing and the ability to re-synchronize the
workflow process based on managing the document approval cycle.
• Event Monitoring: The engine must monitor significant events such as when
a user does not approve or reject a document within a specified time frame, and
generate appropriate system actions based on such events.
• Audit logging: Audit logs and reports should be provided to allow managers
and regulators to review historical actions and provide proof of compliance
with mandated processes.
In providing these functions the workflow engine is supporting both the need to
structure the workflow process as well as to enable authorized users to generate ad hoc
events within the workflow process. If developed properly, a workflow product that is
able to convert data from legacy systems into relational databases will provide the best
of both worlds.
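The engine capabilities listed above (job definition with completion rules, event monitoring, and audit logging) can be sketched in miniature. The sketch below is illustrative only; all class and method names are invented, not taken from any actual workflow product.

```python
from datetime import datetime, timedelta

# Minimal sketch of the workflow-engine capabilities described above:
# job definition, event monitoring, and audit logging. All names here
# are hypothetical, not from any real product.

class WorkflowEngine:
    def __init__(self):
        self.tasks = []       # job definition: an ordered series of tasks
        self.audit_log = []   # audit logging: historical record of actions

    def define_task(self, name, deadline_hours):
        """Add a task to the job with a completion-deadline rule."""
        self.tasks.append({
            "name": name,
            "deadline": datetime.now() + timedelta(hours=deadline_hours),
            "status": "pending",
        })
        self._log(f"task defined: {name}")

    def complete_task(self, name, approved):
        """Record a user's approve/reject decision on a task."""
        for task in self.tasks:
            if task["name"] == name:
                task["status"] = "approved" if approved else "rejected"
                self._log(f"task {task['status']}: {name}")

    def monitor_events(self, now=None):
        """Event monitoring: flag tasks still pending past their deadline."""
        now = now or datetime.now()
        overdue = [t["name"] for t in self.tasks
                   if t["status"] == "pending" and now > t["deadline"]]
        for name in overdue:
            self._log(f"escalation: {name} not acted on within time frame")
        return overdue

    def _log(self, action):
        self.audit_log.append((datetime.now().isoformat(), action))


engine = WorkflowEngine()
engine.define_task("review loan application", deadline_hours=24)
engine.complete_task("review loan application", approved=True)
print(engine.audit_log)
```

A real engine would add parallel routing and re-synchronization on top of this task list, but the deadline rule and the audit trail are the essential mechanisms.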
Figure 8.3
Groupware environments are the backbone upon which users will expect
document management systems and workflow systems to operate. Major commercial
groupware products include Lotus Notes, Novell GroupWise, and Microsoft Exchange.
Lotus Notes is the current groupware market leader with approximately 4.5 million
installed seats. Groupware products include built-in electronic mail and database
replication where data is distributed based on the user group. Each user group
contains an object database which may include groupware forms as well as other
document types.
The rapidly growing popularity of the Internet and web (HTTP) servers threatens
the market position of the major commercial groupware products. Many analysts in fact
anticipate that the Internet as well as major corporate Intranets will become the
dominant groupware environment of the future. The Internet provides several
significant advantages over the currently installed technology infrastructure:
• Low incremental cost per user relative to current user workstations. The
Internet relies only on a "thin client" to retrieve information from back-end
servers, and may in fact open up new opportunities for lower cost client
hardware.
The Internet has no built-in database structure, as one would find in a groupware
product. However, major relational database vendors such as Oracle, Informix, and
Sybase have announced their own Web servers so that Web pages can be dynamically
generated "on-the-fly" to serve as a front-end to these relational databases. This
potential convergence between the Internet front-end and the relational database back-
end threatens the position of many commercial groupware products.
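This "on-the-fly" page generation can be illustrated with Python's built-in sqlite3 module standing in for the relational back-end. The table, data, and page layout below are invented for illustration; vendor Web servers apply the same principle at scale.

```python
import sqlite3

# Sketch of generating a Web page dynamically from a relational
# database, as described above. Schema and data are invented.

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE product (name TEXT, price REAL)")
conn.executemany("INSERT INTO product VALUES (?, ?)",
                 [("Widget", 9.99), ("Gadget", 19.99)])

def render_page(conn):
    """Build an HTML page 'on-the-fly' from current table contents."""
    rows = conn.execute("SELECT name, price FROM product").fetchall()
    items = "\n".join(f"<li>{name}: ${price:.2f}</li>" for name, price in rows)
    return f"<html><body><ul>\n{items}\n</ul></body></html>"

print(render_page(conn))
```

Because the page is built at request time, any update to the database is visible on the next retrieval, with no static HTML to regenerate.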
intranets, one integration strategy will be to allow a web browser to serve as the client
for the workflow engine.
Workflow products that work across the major groupware environments will
become essential elements of the emerging information infrastructure required to
manage the global enterprise. Workflow products which work in both traditional
client-server environments as well as with the Internet/intranet will provide the best of
both worlds, taking full advantage of each world while offering the buyer the
opportunity to use a common workflow product across the corporation.
Chapter 9: Groupware
Introduction
Below we've listed the three functions that groupware supports and the software
that facilitates that support.
1. Team Dynamics
♦ Videoconferencing Software
♦ Whiteboard Software
2. Document Management
3. Applications Development
calendars are used to schedule videoconferencing meetings, multi-player games use live
video and chat to communicate, and newsgroup discussions spawn more highly-
involved interactions in any of the other systems.
Consider how these systems can be integrated in other ways. We are still quite far
from developing the grand groupware system that encompasses every type of
communication, and we will probably never get there since the possibilities are
constantly evolving with changes in both our patterns of social interaction and the
technology we have available.
♦ is a component of groupware
♦ may even let you reserve a certain room for a meeting and any equipment that
you may need
Office Tracker
ResSched
WallCHART
Livelink OnTime
Visto Briefcase
ScheduleSoft
Meetingmaker
♦ Provides instant access to free/busy time information for both users and
resources
♦ is a component of groupware
Groupsystems
Microsoft NetMeeting
Meeting Builder
Videoconferencing software
Videoconferencing software:
♦ is a component of groupware
♦ allows teams to have meetings and see each other when the team members
are geographically dispersed
NetMeeting
♦ Made by Microsoft
EVX
Bright Light
CineVideo/Direct
♦ Made by PictureTel
CU-SeeMe
♦ With Pro pack video-in feature, share video from your camcorder or VCR
while you talk
MyPlaceWare
♦ Select a time for the meeting and send e-mail addresses an invitation with
the URL
QVIX/CU30
Whiteboard software
Whiteboard software:
♦ is a component of groupware
SMART Board
InPerson Whiteboard
♦ Allows you to view, mark up, scale, and manipulate 2D and 3D models
GroupSystems Whiteboard
TeamBoard
Soft Board
♦ Lets you create softproofs from all common graphic arts formats
TeamWave
♦ Use whiteboard for meetings, note taking, design, discussion, and more
♦ is a component of groupware
Work Expeditor
MQSeries Workflow
Staffware
PowerFlow
Ultimus Workflow
Dolphin
Cabinet NG
Lotus Notes/Domino R5
Microsoft Exchange
GroupWise
♦ Y2K compliant
LinkWorks
TeamWARE Office
Web-4M
Netscape - SuiteSpot
♦ Integrated software set that forms the basis of the networked enterprise
Newsgroups and mailing lists are similar in spirit to email systems except
that they are intended for messages among large groups of people instead of 1-to-1
communication.
Hypertext is a system for linking text documents to each other, with the Web
being an obvious example. Whenever multiple people author and link documents,
the system becomes group work, constantly evolving and responding to others'
work. Some hypertext systems include capabilities for seeing who else has visited
a certain page or link, or at least seeing how often a link has been followed, thus
giving users a basic awareness of what other people are doing in the system -- page
counters on the Web are a crude approximation of this function. Another common
multi-user feature in hypertext (that is not found on the Web) is allowing any user
to create links from any page, so that others can be informed when there are
relevant links that the original author was unaware of.
Collaborative writing systems may provide both realtime support and non-
realtime support. Word processors may provide asynchronous support by showing
authorship and by allowing users to track changes and make annotations to
documents. Authors collaborating on a document may also be given tools to help
plan and coordinate the authoring process, such as methods for locking parts of the
document or linking separately-authored documents. Synchronous support allows
authors to see each other's changes as they make them, and usually needs to
provide an additional communication channel to the authors as they work (via
videophones or chat).
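The idea of locking parts of a document so that co-authors do not overwrite each other can be sketched as follows. The class and method names are hypothetical, not drawn from any real collaborative editor.

```python
# Minimal sketch of locking sections of a shared document so two
# authors cannot edit the same section at once. All names invented.

class SharedDocument:
    def __init__(self, sections):
        self.sections = dict.fromkeys(sections, "")  # section -> text
        self.locks = {}                              # section -> author

    def acquire(self, section, author):
        """Lock a section for one author; fail if someone else holds it."""
        holder = self.locks.get(section)
        if holder is None or holder == author:
            self.locks[section] = author
            return True
        return False

    def edit(self, section, author, text):
        """Only the lock holder may change a section's text."""
        if self.locks.get(section) != author:
            raise PermissionError(f"{author} does not hold the lock on {section}")
        self.sections[section] = text

    def release(self, section, author):
        if self.locks.get(section) == author:
            del self.locks[section]


doc = SharedDocument(["intro", "methods"])
doc.acquire("intro", "alice")
doc.edit("intro", "alice", "Draft text.")
print(doc.acquire("intro", "bob"))   # False: alice holds the lock
```

Synchronous editors replace the coarse lock with finer-grained change propagation, but the coordination problem being solved is the same.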
Shared whiteboards allow two or more people to view and draw on a shared
drawing surface even from different locations. This can be used, for instance,
during a phone call, where each person can jot down notes (e.g. a name, phone
number, or map) or to work collaboratively on a visual problem. Most shared
whiteboards are designed for informal conversation, but they may also serve
structured communications or more sophisticated drawing tasks, such as
collaborative graphic design, publishing, or engineering applications. Shared
whiteboards can indicate where each person is drawing or pointing by showing
telepointers, which are color-coded or labeled to identify each person.
screen. Chat groups are usually formed by listing chat rooms by name,
location, number of people, topic of discussion, etc.
Many systems allow for rooms with controlled access or with moderators to
lead the discussions, but most of the topics of interest to researchers involve issues
related to unmoderated realtime communication including: anonymity, following
the stream of conversation, scalability with number of users, and abusive users.
While chat-like systems are possible using non-text media, the text version of
chat has the rather interesting aspect of having a direct transcript of the
conversation, which not only has long-term value, but allows for backward
reference during conversation making it easier for people to drop into a
conversation and still pick up on the ongoing discussion.
Multi-player games have always been reasonably common in arcades, but are
becoming quite common on the internet. Many of the earliest electronic arcade
games were multi-user, for example, Pong, Space Wars, and car racing games.
Games are the prototypical example of "non-cooperative" multi-user situations,
though even competitive games require players to cooperate in following the rules
of the game. Games can be enhanced by other communication media, such as chat
or video systems.
As with all user interface design, the method used for designing a groupware
system is more significant than specific design suggestions. This introduction thus
begins with the groupware design process. The remaining sections address some of
the most common issues that face groupware designers.
When designing groupware, it is often best to begin with field studies. The goal is
to understand a particular type of group or organization that will be using the
groupware system. A number of different studies can be conducted: interviews,
surveys, analysis of artifacts used in the work process, examination of processes and
workflows, etc. In all cases, the object is to identify the users' tasks and goals,
understand how the group communicates and determine the power structures and roles.
One key challenge is to appear non-threatening and objective to the users in order
to obtain accurate information and to ensure that they will accept any design that results.
Another challenge is translating the findings from one organization to others -- this is
especially a concern when the groupware is intended for organizations which are truly
unique or too large to effectively study.
Interoperability
In the early 90s, AT&T and MCI both introduced videophones commercially, but
their two systems couldn't communicate with each other. This lack of
interoperability/compatibility meant that anyone who wanted to buy a videophone had
to make sure that everyone they wanted to talk to would buy the same system.
Compatibility issues lead to general wariness among customers, who want to wait until
a clear standard has emerged.
Perceived Benefit
Even when everyone in the group may benefit, if the choice is made by
individuals, the system may not succeed. An example is with office calendar systems: if
everyone enters all of their appointments, then everyone has the benefit of being able to
safely schedule around other people's appointments. However, if it is not easy to enter
appointments, then users may perceive it as more beneficial to leave their
own appointments off, while viewing other people's appointments.
This disparity of individual and group benefit is discussed in game theory as the
prisoner's dilemma or the commons problem. To solve this problem, some groups can
apply social pressure to enforce groupware use (as in having the boss insist that it's
used), but otherwise it's a problem for the groupware designer who must find a way to
make sure the application is perceived as useful for individuals even outside the context
of full group adoption.
Avoiding Abuse
Most people are familiar with the problem of spamming with email. Some other
common violations of social protocol include taking inappropriate advantage of
anonymity, sabotaging group work, or violating privacy.
If a village has a "commons" area for grazing cattle then this area can be a strong
benefit to the community as long as everyone uses it with restraint. However,
individuals have the incentive to graze as many cattle as possible on the commons as
opposed to their own private property. If too many people send too many cattle to the
commons, the area will be destroyed, and the whole village is worse off as a result.
There are a couple of straightforward solutions to the Commons Problem: an
appropriate fee can be charged for each head of cattle or a limit can be imposed on the
number of cattle any individual may bring. These solutions are an appropriate starting
point for solving problems of abuse in GroupWare.
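These two remedies can be shown with toy numbers. All figures below are invented purely to illustrate the incentive structure.

```python
# Toy illustration of the commons problem and the two remedies
# described above: a per-head fee, or a quota. All numbers invented.

CAPACITY = 100          # heads of cattle the commons can sustain
VALUE_PER_HEAD = 10     # profit per head while the commons is healthy

def village_payoff(total_cattle):
    """Total payoff collapses to zero once the commons is overgrazed."""
    return total_cattle * VALUE_PER_HEAD if total_cattle <= CAPACITY else 0

# Unrestrained: ten villagers each send 15 head -> commons destroyed.
print(village_payoff(10 * 15))              # 0

# Quota: cap each villager at CAPACITY // 10 head -> commons survives.
print(village_payoff(10 * (CAPACITY // 10)))  # 1000

# Fee: a per-head fee lowers the net value of each extra animal,
# discouraging the overgrazing that destroys the shared resource.
FEE = 4
print(VALUE_PER_HEAD - FEE)                 # 6
```

The groupware analogue is the same: uncoordinated individual advantage destroys the shared benefit, and either pricing or limits can restore it.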
9.6 How is GroupWare Design Different from Traditional User Interface Design?
However, many aspects of groups require special consideration. For instance, not
only do million-person groups behave differently from 5-person groups, but the
performance parameters of the technologies to support different groups vary. Ease-of-
use must be better for GroupWare than for single-user systems because the pace of use
of an application is often driven by the pace of a conversation. System responsiveness
and reliability become more significant issues. Designers must have an understanding
of the degree of homogeneity of users, of the possible roles people play in cooperative
work and of who key decision-makers are and what influences them.
♦ to enable telecommuting
Raw data, as it is initially collected, is of little value or use. The human brain
organizes audio/visual (and other) inputs and the mind interprets these inputs and
assigns meaning to them. Thus a pattern of vibrations in the air is interpreted as
conversation or as music, and a pattern of retinal impulses is a rose or perhaps a lover.
One sees and hears, but there is much more to this than just a few organized sensory
inputs, for intelligent organization is required to give meaning to the stimuli. Likewise,
economic and scientific data consists only of raw numbers (symbols) until it has been
organized and interpreted. Once such higher levels of meaning have been attached to
data, it is termed information. This intelligent act of attaching meaning to raw data is
clearly an abstraction process. Indeed, assigning meaning may to some extent be
thought of as a synonym for the whole abstraction forming activity.
However, there are practical issues to solve at a lower level than describing what
data processing is. These centre on how data is communicated. Whenever one writes a
symbol like "4" or "four," a potential for communication exists, based on the fact that
these symbols encode a certain idea. By convention, everyone encodes the same idea
with these symbols, so communication is possible.
Computation devices must also encode data in some consistent way. There are two
categories of such codes:
1. External codes. These are usually human-readable characters that are input into
a computer or output from it using a keyboard, screen, printer, or other device. The
most common form for this data is a character such as "4," "a," "%," and so on.
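This shared convention is exactly what a character code such as ASCII provides, as a short Python session shows:

```python
# External codes: human-readable characters map to agreed numeric
# codes. ASCII is the classic example of such a shared convention.

for ch in ["4", "a", "%"]:
    print(ch, "->", ord(ch))   # character and its ASCII code point

# The mapping is reversible, which is what makes communication possible.
print(chr(52))   # "4"
```

Because every party uses the same table, the symbol "4" carries the same idea on a keyboard, on a screen, and inside the machine.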
There are some issues relating to data representation that do make a difference to
programmers, however.
The modern digital computer can only understand binary representations of data.
Binary means "two" -- that is to say, only two "states" can be represented. Over the half
century since the computer has been in existence, many ways have been used to
electrically or mechanically record binary representations. In the beginning, punched
tape and cards were used -- hole or no-hole indicated the binary state. With today's
magnetic media, the polarity (positive or negative) is used to indicate these two binary
states. Many other methods have been used in the past and are still in use. For humans (not
machines), we use the convenient notation of zero [0] or one [1] to write these two
binary states. We call these elemental states bits--the "zero bit" and the "one bit." The
term "bit" is derived from "BInary digiT".
Quite obviously, if zero and one were all we could represent in the computer, we
wouldn't have much data. Only rarely is business data represented by a single bit. Bits,
however, are similar to atoms. They serve to create more complicated representations--
to carry the analogy further, the "molecules" of data that humans can use to encode
business information. Bits are assembled into eight-bit patterns called bytes. The basic
storage unit in the computer's main memory and secondary storage is the byte. Bits are
not stored separately.
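The assembly of bits into bytes can be demonstrated directly. Here the character "4" is shown as the eight-bit pattern of the byte that stores it:

```python
# Bits are assembled into eight-bit bytes; the byte is the basic
# storage unit. The character "4" is stored as one such byte.

byte_value = ord("4")                 # the code stored for "4"
bits = format(byte_value, "08b")      # the eight bits of that byte
print(byte_value, bits)               # 52 00110100

# Eight bits give 2**8 distinct states per byte.
print(2 ** 8)                         # 256
```

The 256 states of a single byte are what allow letters, digits, and punctuation to be encoded, carrying the analogy from atomic bits to molecular data.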
the files. More sophisticated applications are still frequently programmed using the
programming language available with the database management system.
A database is a single collection of data. It can hold the information about many
different entities related to the company (an entity being simply a person, place or thing).
Databases not only contain the information held about these entities (known as attributes),
they also contain information about the relationships between them.
A database is all the raw data needed to create and provide information that will
satisfy the varying needs of users in the organization. In some cases the database may
be a mixture of data held on computer and manual filing systems but for the purposes of
future sessions we will assume it is all held on computer. Inside the database are all of
the items of data that belong to the organisation but this data may have been stored on
different systems in different locations (a distributed database). In order to be able to
locate where the information is stored, the database requires a management system
which effectively provides a map of where data is stored and a program to locate and
retrieve the data specified. In order to overcome the problem of data stored on different
systems, written in different 'computer languages', programming staff have had to
develop bespoke management systems specifically for the purpose. The Data Base
Management System is, in effect, the intermediary between the raw data and those
who need the information, giving users the freedom to use the data in their own specific
way. This may not be the ideal solution - disparate databases often require a high level
of maintenance to remain effective, and there is a substantial cost associated
with this. A preferred solution may be to develop a corporate database with a
corporate Information Systems structure and framework.
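Entities, attributes, relationships, and the role of the DBMS as intermediary can be sketched with Python's built-in sqlite3 module acting as a small DBMS. The schema and data below are invented for illustration.

```python
import sqlite3

# Sketch of entities, attributes, and a relationship in a relational
# database, with sqlite3 playing the role of the DBMS.

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE customer (            -- entity: a person or company
        id   INTEGER PRIMARY KEY,
        name TEXT                      -- attribute
    );
    CREATE TABLE orders (              -- entity: a thing
        id          INTEGER PRIMARY KEY,
        customer_id INTEGER REFERENCES customer(id),  -- relationship
        amount      REAL
    );
""")
db.execute("INSERT INTO customer VALUES (1, 'Acme Ltd')")
db.execute("INSERT INTO orders VALUES (1, 1, 250.0)")

# The DBMS locates and retrieves the data a user asks for, without
# the user needing to know how or where it is physically stored.
row = db.execute("""
    SELECT customer.name, orders.amount
    FROM orders JOIN customer ON orders.customer_id = customer.id
""").fetchone()
print(row)   # ('Acme Ltd', 250.0)
```

The SELECT statement is the user's request; the mapping from that request to physical storage is entirely the DBMS's concern, which is the intermediary role described above.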
The database is the primary means of integration and dissemination of data (refer
to figure 10.1)
The data that is stored or updated by one system is then available to the others. It
can also be used to produce specific management information.
Figure 10.2
The higher up the organisation, the less structured information tends to become
(refer to figure 10.2)
In a previous session, a figure similar to the one above was used to explain the
role of different levels of management within an organization. If we consider the
types of decision with which these levels will be involved, then there will be a
tendency to move from structured decisions (programmed) at the operational level, to
unstructured (non-programmed) decisions at the strategic level. Underpinning the
pyramid is a representation of the operational and functional systems of the business,
i.e. accounting, stock control, debtors/creditors, providing information for the operational
level of management. In the past, these systems have also provided the management
information for the tactical and strategic levels of management. In effect, the next
layer up has received information contextualized by the level below. This has three
major disadvantages:
ii. Information being passed upwards is a reflection of what that level thinks
the next level up requires (but is often not the case).
iii. By the time information has passed up the structure, been acted upon and
decisions passed back down again, the situation will have altered in any case!
First of all, strategic planning helps provide a personal vision for the future. It
is so easy to get wrapped up in the present that we lose sight of where the future
might be leading us. If we intend to grow in any significant aspect of our lives, we
need to periodically focus on what the future will look like. Depending on where
you are in your career and your life, this future vision may be as little as two or
three years out or could be as much as twenty or thirty years in the future. The
ironic thing about focusing on a period in the future is that what you project for
that time is probably not what's going to happen. Specific circumstances,
opportunities, threats, and personal preferences may lead you in a distinctly
different direction from what you established in your initial planning effort. That
does not invalidate the planning process. By focusing on the future, we are able to
determine when it is appropriate to change a course of direction.
development, and service to others. Just as a chair will not function properly if one
of its legs is longer or shorter than the other, neither will our lives function
effectively without some sort of holistic balance. Giving significant attention to
another important aspect of our lives does not necessarily detract from our
professional focus. In fact, it is possible to achieve a true synergy wherein the
"whole" person can be even more productive professionally as well as personally.
interesting to note that we frequently don't have time to plan the job appropriately
ahead of time. However, when the job does not get done right, somehow or other
we always find the time to do it over again. Time is one of those exhaustible
commodities with which we have to deal as professionals. Consequently, the
planning process, while it does take time, could lead us to much more productive
use of what limited time we do have available.
When things are going well, it is easy to fall into the trap of
thinking that they are going to continue to go well "forever." When we are on a
highly successful path is probably the time when strategic planning is most critical.
Despite the euphoria we may feel at such times, it is absolutely certain that it will
not go on "forever." Something will happen that will be other than what we had
anticipated, thus moving us, whether we like it or not, in a different direction.
Being better prepared for dealing with those situations that may run counter to our
desired direction is one of the single most important reasons for doing strategic
planning in the first place. While we may not be able to anticipate everything that
will have an impact on where we are going, we stand a much better chance of
dealing effectively with that if we have looked ahead and anticipated some of the
things we might be facing.
When things are not going well is probably a more legitimate reason for
postponing strategic planning efforts. When your house is on fire is not the time to
think about installing a sprinkler system. Really, when we are faced with survival,
that has to take precedence over where we are going in the long run. For, as one
wag said, "In the long run we are all dead." The problem that frequently faces us,
however, is that we tend to think we are always in a crisis situation. Unfortunately,
this frequently comes as a result of the way we function personally. If you find
yourself in a situation where you are moving from one crisis to another on a
continuing basis, perhaps that is when you need to take some time off, sit back and
really think about where it is that you want to go. We frequently find that these
crises come as a result of a lack of effective planning in the first place. At some
point, we may need to break that pattern in a way that is going to be more
productive for us.
Text Processing
A.M. Turing said that if a machine could impersonate a human being, then the
machine was thinking. A computer may be said to have artificial intelligence when
it exhibits intelligence ordinarily associated with human behaviour (reasoning,
learning, use of language, and so on). In the field of AI, there are opposing view
points as to whether computers should be programmed to imitate the way the
Existing texts, such as the Bible or Shakespeare's works, can be analyzed, and
so can a collection of utterances gathered from spoken or written sources. Such a
collection is called a corpus. At Brown University (U.S.A.), a corpus of written
American English (consisting of newspaper passages, articles from magazines, and
other literary material) was compiled. A corpus of spoken American English,
similar in size to the Brown corpus, was also collected. The Brown corpus and the
spoken American English corpus were analyzed and compared. This comparison
provided a contrast between written and spoken American English. The pronoun
"I" occurs ten times more frequently in spoken language. Profane and taboo words
are, as expected, more frequent in spoken language. Prepositions occur more
frequently in written than in spoken American English, so different syntactic
structures are used in written and spoken English.
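The kind of frequency comparison described above can be sketched in a few lines. The two corpora below are tiny invented stand-ins for the Brown and spoken collections:

```python
from collections import Counter

def rate_per_1000(corpus_words, target):
    """Occurrences of `target` per 1,000 words in a tokenised corpus."""
    counts = Counter(w.lower() for w in corpus_words)
    return 1000 * counts[target.lower()] / len(corpus_words)

# Toy stand-ins for a written (Brown-style) and a spoken corpus.
written = "the report was filed before the deadline and it was approved".split()
spoken  = "i think i said i would go but i never did".split()

print(rate_per_1000(written, "i"))   # 0.0
print(rate_per_1000(spoken, "i"))    # far higher, as the comparison found
```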
Electronic Spreadsheets
By the summer of 1978, Bricklin had programmed the first working version of
his concept. The program would let users input a matrix of five columns and 20
rows. The first version was not very "user friendly" so Bricklin recruited Bob
Frankston to improve and expand the program. Frankston expanded the program
and "packed the code into a mere 20k of machine memory, making it both powerful
and practical enough to be run on a microcomputer".
During the summer of 1978, Daniel Fylstra joined Bricklin and Frankston.
Fylstra, also an M.I.T./HBS graduate, was marketing oriented and
suggested that the product would be viable if it could run on an Apple computer.
The three formed Software Arts corporation in January 1979. In April 1979, the
company began marketing "VisiCalc", a compression of the phrase "visible
calculator".
What is a spreadsheet?
In the accounting world a spreadsheet was and is a large sheet of paper that lays
everything out for a businessperson. It spreads or shows all of the costs, income, taxes,
etc. on a single sheet of paper for a manager to look at when making a decision.
An electronic spreadsheet organizes information into columns and rows. The data
can then be "added up" by a formula to give a total or sum. The spreadsheet
summarizes information from many sources in one place and presents the information
in a format to help a decision maker see the financial "big picture" for the company.
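The row-and-column model and the "added up by a formula" idea can be sketched in a few lines of Python (the cell addresses and values here are invented for illustration):

```python
# A minimal sketch of the spreadsheet model: cells addressed by
# (column letter, row number), with a SUM over a column range.
sheet = {
    ("A", 1): 1200.0,   # e.g. sales income
    ("A", 2): 950.0,
    ("A", 3): 430.0,
}

def column_sum(sheet, col, first_row, last_row):
    """Equivalent of the formula =SUM(A1:A3)."""
    return sum(sheet.get((col, r), 0.0) for r in range(first_row, last_row + 1))

print(column_sum(sheet, "A", 1, 3))   # 2580.0
```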
The market for electronic spreadsheets was growing rapidly in the early 1980s
and VisiCalc was slow to respond to the introduction of the IBM PC that used an Intel
computer chip. Mitch Kapor developed Lotus and his spreadsheet program quickly
became the new industry standard. It not only made spreadsheet formulas easier by using
the shorter, more intuitive A1 referencing system (as opposed to VisiCalc's R1C1
system) but also added graphics and set spreadsheets on the road to becoming major data
presentation packages, as well as complex calculation tools. Lotus was also the first
spreadsheet vendor to introduce naming cells, cell ranges and spreadsheet macros.
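The difference between the two referencing systems is easy to illustrate. The sketch below converts an A1-style reference into the row and column numbers of the R1C1 form that VisiCalc used (the function name is ours):

```python
def a1_to_rc(ref):
    """Convert an A1-style reference such as 'B7' into (row, column)
    numbers, i.e. the R1C1 form R7C2."""
    letters = "".join(ch for ch in ref if ch.isalpha()).upper()
    row = int("".join(ch for ch in ref if ch.isdigit()))
    col = 0
    for ch in letters:               # base-26 column letters: A=1, ..., Z=26, AA=27
        col = col * 26 + (ord(ch) - ord("A") + 1)
    return row, col

print(a1_to_rc("B7"))   # (7, 2), i.e. R7C2
print(a1_to_rc("AA3"))  # (3, 27)
```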
While at Visicorp, Kapor wrote Visiplot/Visitrend which he sold to Visicorp for $1 million.
Part of that money was used to start Lotus Development Corp. Ironically, Kapor offered
to sell Visicorp his Lotus 1-2-3 program. Supposedly VisiCorp executives declined the
offer because Lotus 1-2-3's functionality was "too limited". To date Lotus 1-2-3 is still
the all-time best selling application software in the world.
The next milestone was Excel. Originally written for the 512K Apple Macintosh
in 1985 Excel was one of the first spreadsheets to use a graphical interface with true
pull down menus and a mouse point and click. The spreadsheet instantly became easier
to use than the "archaic interface" of PC-DOS products and many people bought Apple
Macintoshes simply so that they could use Excel. Excel never did come out in a DOS
version.
When Microsoft shipped Excel for Windows in 1987, it was one of the first
products written for the environment, and even now many people still use Excel 2.1, which
was written to run under Windows version 2. When Windows finally took off with Version
3.0 in 1990, Excel was its flagship product. It remained the only Windows spreadsheet
for nearly 3 years and has only received any real competition from other products since
summer 1992.
By the mid 1980s many companies had introduced spreadsheet products. Lotus
had acquired Software Arts and the rights to VisiCalc. Also, Microsoft had joined the
fray with the Excel spreadsheet. By the mid-1990's IBM had acquired Lotus and
Microsoft Excel was the spreadsheet market leader.
11.1 Introduction
Executive support systems (ESS) help managers make unstructured and semi-
structured decisions. They focus on the information needs of senior management and
combine data from both internal and external sources.
ESS can and do change the workings of organizations. Executives are better able
to monitor activities below them, allowing them to push decision making further down
in the organization while expanding the executive's span of control.
ESS flexibility allows executives to shape the problems, using the system as an
extension of their own thinking. ESS offer executives the ability to analyze quickly
and to compare and highlight trends, freeing up executives -- and even more so their
staffs -- for more creative analysis and decision making.
♦ Organizational structure
♦ Crisis management
♦ Capability to survey the situation in your company "in half an hour"
♦ No keyboard skills required of the user; executives can use just a mouse
♦ Many kinds of printed reports and reviews based on different criteria
Executive Information Systems (EIS) have grown in interest and use over the
past 15 years. They are a response to inadequacies in Management Information
Systems (MIS) which, although capable of manipulating vast quantities of data, are
frequently difficult to use and not able to respond to managers' needs with any
degree of flexibility. An executive decision maker requires a precise understanding
of the current organisational situation and it has been observed (Dreyfus & Dreyfus
1986) that businessmen prefer concrete information, even gossip, speculation and
hearsay to the abstracted summary information contained in routine MIS reports.
EIS are relatively new tools which purport to provide executives with computer-
based information support for decision making. Not surprisingly, one of the
specified characteristics of an EIS is that it be ‘user-friendly and require minimal
or no training to use' (Watson et al, 1991). Due to the reluctance of executives to
adopt information technology for their own work, developers of EIS have been
particularly concerned with the user-friendliness of their products, with the result
that most EIS have attractive and easy-to-use interfaces. Despite this, or maybe
because of this, the majority of EIS projects are not successful. This concentration
on a generic interface between a user and the machine arises from work in HCI
(Human Computer Interaction) based on cognitive psychology which, although
adequate for lower level information systems, may account for a lack of EIS
success when applied to these more complex systems.
This leads us to seek an optimal interface design for users who are making
executive decisions, and to question whether the current models of HCI are
adequate. Questions raised relate to the characteristics of this user population and
the way executives may be expected to approach and make use of such a system.
Do executives model things differently to others? Do they focus on different
information? Do they deal with information in a different way? Are executives a
homogeneous group in this regard? Have studies been conducted which examine
these questions? In this paper we look at the background to this problem and
suggest ways of discovering solutions to this most fundamental problem in EIS
interface design.
well as more complex choices of what analysis should be done and which data will
be used. Status Access and Data Analysis may shade into each other depending on
how much control is given to the user in status access to vary displays. If a
standard data access tool is used then the system design may be constrained by the
tool specified. Excluding Communication/Organizing tools from a system leaves
Status Access/Query & Analysis as a single module, which can be looked at in
terms of a consistent interface.
There are formal and abstract models of human cognition which deal with
‘language, inference and consciousness' such as Johnson-Laird (1983) but these are
not procedural or practical enough to be useful. Rockart & de Long (1988) outline
a number of models of the way executives operate but are founded on a rather
informal theoretical base lacking a solid psychological foundation. Allen (1994)
has studied the concept of usability, which he defines as the non-uniform effects of
system characteristics on user performance. He also explored the offering of choice
to users in the interface as a means of improving performance. This is important
because it may indicate that EIS need to be individually tailored to particular
cognitive styles. Er (1988) has classified the cognitive style of managers along two
dimensions: the preferred way of getting data and the preferred way of processing
it. In the former, an individual may be classified as a 'sensing' (S) type who prefers
hard data that deals with specific problems or an 'intuitive' (N) type preferring
holistic information that describes possibilities. On the processing dimension
individuals may be 'thinking' (T) using logic or other formal means for reasoning
or 'feeling' (F) where preference is given to personal terms in decision making.
Combining these dimensions gives four possible decision styles:
♦ Systematic (ST)
This decision maker prefers hard data and logic such as cost benefit analysis
and evaluation research.
♦ Speculative (NT)
This type prefers to speculate about future possibilities with decision trees and
sensitivity analysis.
♦ Judicial (SF)
♦ Heuristic (NF)
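Er's classification can be expressed as a simple lookup over the two dimensions. The sketch below is our own illustration; the style names come from the list above:

```python
# Er's two dimensions: data preference (S = sensing, N = intuitive)
# and processing preference (T = thinking, F = feeling).
STYLES = {
    ("S", "T"): "Systematic",
    ("N", "T"): "Speculative",
    ("S", "F"): "Judicial",
    ("N", "F"): "Heuristic",
}

def decision_style(data_pref, processing_pref):
    """Combine the two dimensions into one of the four decision styles."""
    return STYLES[(data_pref.upper(), processing_pref.upper())]

print(decision_style("S", "T"))   # Systematic
```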
making the decisions. For example, 'what if' analysis suits the speculative decision
maker and cost benefit analysis of hard data suits the systematic type. There is
some evidence (Phillips, 1984) that decision makers do not make decisions based
on the probability of success but rather on the basis of their aversion to failure due
to the more drastic penalties. Phillips goes on to suggest that decision technology
should be centred on problem solvers (with experience, intuition and knowledge)
supplemented by information technology (computers, software, databases,
networks and modelling) and preference technology (value judgements, time and
risk preferences and trade offs).
To obtain user requirements and draw up user profiles for the development of
any computer system it is necessary to have an understanding of its prospective
users. EIS are no different in this respect except that the users are high powered
An EIS itself does not have a clear cut goal as do most conventional computer
based information systems. It is rather a tool which has the flexibility to be used as a
source of information when needed. On the other hand executives are often very goal
oriented in their work on a specific task and rely on sets of intuitive procedures which
are known to have worked previously. This relates well to an Activity Theory approach
as outlined in Boedker (1991) where she maintains that all activity is bound to a goal
and/or an object and the characteristics of the goal or object partially determine and
structure the activity. The goals or objects of activity, undertaken by executives in their
day-to-day tasks, vary and cannot be anticipated when an EIS is being developed. This
concept is rarely appreciated by EIS developers. The relationship of goals, objects and
activities, as found in Activity Theory, provides a framework on which new
development methodologies can be created for the building of flexible EIS where the
goals of the user can be determined as the system is used.
11.6 Conclusion
Mediated mental processes are the central thesis of Activity Theory. Just as the
use of language represented a new stage in the development of human higher mental
processes can we, as Tikhomirov (1981) poses, "distinguish a new stage in the
Nowhere is the concept of mediation more evident than in the use of EIS for
organisational strategic decision making. EIS is used as a flexible and exploratory tool
by the executive to identify problem areas and windows of opportunity thus
transforming the very activity it was designed to assist. In this way by continued EIS
use the executive's internal model of the organization is modified and new demands are
made of the EIS. The EIS must be flexible enough to adapt to these changing
requirements and to evolve along with the executive if it is to take its place as a useful
tool in a long line of other artefacts adopted by humans over the centuries.
Much work has still to be done in this area in the adaptation of currently available
psychological profiles so that they provide information about executives, which can be
used as input to HCI design of EIS interfaces. But the topic seems to be promising as a
future research area.
Introduction
A DSS is a set of very special computer programs that are connected to all internal
networks but use mostly external Wide Area Networks, the information from which is
used to make corporate decisions.
A DSS uses mostly external data and is more complex in its hardware requirements than
the other forms of MIS.
♦ Input and output: Inputs consist of some summarized reports, some processed
transaction data, and other internal data. They also include data that is external
to that produced by the organization. The outputs are flexible, on-demand
reports with which a top manager can make decisions about unstructured problems.
♦ Mainly for top managers: A DSS is intended principally to assist top managers,
although it is now mainly used by other managers, too. Its purpose is to help
♦ Produces analytic models: The key attribute of a DSS is that it uses models. A
model is a mathematical representation of a real system. The model allows the
manager to do a simulation (play a "what-if" game) to reach decisions.
♦ Control Aids: Capabilities that allow the user to control the activities of the DSS.
♦ DSS software: permits easy interaction between users and the DSS database and
model base and delivers the end-user interface.
DSS generally use smaller amounts of data than MIS, and do not need on-line
transaction data. DSS have a smaller number of important users, and tend to employ
more sophisticated analytic models than MIS. DSS are customized to specific users and
so require even more user participation than building MIS. Moreover, DSS are
continually evolving and changing, so they must use a flexible, iterative method of
development, usually prototyping.
Today DSS uses operational files from Purdue's human resources and financial
information systems to create DSS tables. Business offices and academic units need this
information to manage their operations. DSS allows access to an Oracle server containing
the administrative database. Query tools such as BRIO 3.5 and other software that
support Open Database Connection (ODBC) such as Microsoft Excel 5.0 are used to
retrieve the data. Information can then be processed in customized and ad-hoc reports,
graphics and tables for management decision-making.
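As an illustration of this kind of ad-hoc query, the sketch below uses Python's built-in SQLite in place of the Oracle/ODBC stack described above; the table and column names are invented:

```python
import sqlite3

# Illustrative stand-in: the text describes querying an Oracle server
# through ODBC tools (BRIO, Excel). SQLite is used here only so the
# ad-hoc-report idea is runnable.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE payroll (unit TEXT, amount REAL)")
conn.executemany("INSERT INTO payroll VALUES (?, ?)",
                 [("Physics", 120.0), ("Physics", 80.0), ("History", 60.0)])

# An ad-hoc management report: total spending per academic unit.
rows = conn.execute(
    "SELECT unit, SUM(amount) FROM payroll GROUP BY unit ORDER BY unit"
).fetchall()
print(rows)   # [('History', 60.0), ('Physics', 200.0)]
```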
Decision support systems help the decision making process by providing structured
sets of spatial information. Anything from a photograph to a map can be used to support
decision making and in developing a decision support system, experts are consulted to
ensure accuracy and understanding of the information being used.
A decision support system is based on the requirements for making a specific range
of decisions and may be quite unsuited to a purpose other than that for which it has been
structured. When placed in context with environmental management and geographic
information systems, these tools may be described as highly specialised computer-based
systems which use expert knowledge and have a specific range of uses.
Geographic information systems may be used as decision support systems but not
necessarily vice versa. That is, we can formulate decision support systems from various
sources of spatial information using a geographic information overlay system which may
have no use other than for the range of questions it was designed to help answer. Of
course a decision support system may have some dynamic modelling capabilities not
available in a geographic information system, particularly where non-spatial data sources
need to be incorporated in a model.
2. Decision design:
3. Choice:
Decision Support Systems and Executive Support Systems focus on supporting the
first 3 phases.
Components of a DSS
3. Sensitivity analysis features - How do the results change if the values change (not
if the model changes)
4. DSS User Interface & Integration Software: The part of the system that lets the
users use the data in the model
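The sensitivity-analysis idea in item 3 can be sketched with a toy profit model: the model itself stays fixed while one of its input values is varied. The model and numbers below are invented for illustration:

```python
# A fixed "what-if" model: profit as a function of its inputs.
def profit(units, price, unit_cost, fixed_cost):
    return units * (price - unit_cost) - fixed_cost

base = profit(units=1000, price=12.0, unit_cost=7.0, fixed_cost=3000.0)
print(base)                                # 2000.0

# Sensitivity analysis: "what if the price changes?"
for price in (11.0, 12.0, 13.0):
    print(price, profit(1000, price, 7.0, 3000.0))
```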
Uses of DSS
They started off aimed at helping senior executives, but they have ended up being
used extensively by middle managers and other professionals (e.g. engineers).
[This is because the types of decisions faced by the two groups differ in terms of
their frequency, the kind of information that is needed, the amount of structure in the
problems]
General examples
Specific examples
• Materials -> (Can take time to get there, buy lots - but not too
much, inventory costs money)
Requires a complete model of the production process - DSS can help by providing
this model and detailed information about the current status.
• Egyptian cabinet
• Complex decision
• Many alternatives
In this case the value of the system was probably the process of building it. To
build a DSS that was acceptable they needed to consider the situation in detail - quite
possibly in more detail and with more consistency than they otherwise would have.
Introduction
In a GDSS environment, there is usually a big room with something like 40 seats,
which means that 40 people can be at the meeting at any one time. There are not only
40 seats but also 40 microcomputers. This enables every participant to have the use of
one microcomputer during the course of the meeting. The reason why each participant
needs a microcomputer depends on how GDSS works.
In the GDSS, with special computer software, the facilitator of each meeting will
first make the agenda of the meeting, which will be projected onto a big screen that
everyone can see. Then the participants will type simultaneously in their ideas of the
topic of discussion on the individual microcomputers next to them. Then the computer
will sort the ideas, and then the participants will then vote or comment on which ideas
they like or they dislike. In the course of the whole meeting, GDSS stores, categorizes
and prints out all the ideas, comments and vote tallies, so that each of the meeting
participants will get a summary of the meeting when it ends.
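The meeting flow just described - anonymous submission, voting, and a printed tally - can be sketched as follows. This is a toy model; no real GDSS package is assumed, and note that the submissions carry no author field:

```python
from collections import Counter

ideas = []                         # submissions arrive with no author attached
def submit(idea):
    ideas.append(idea)

votes = Counter()                  # one tally per idea
def vote(idea):
    votes[idea] += 1

submit("cut delivery times"); submit("open a web store")
vote("open a web store"); vote("open a web store"); vote("cut delivery times")

# The end-of-meeting summary each participant receives.
for idea, n in votes.most_common():
    print(f"{n} vote(s): {idea}")
```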
Besides, under this GDSS, no one can dominate the meeting. This is because of
another feature of GDSS. GDSS provides an anonymous scheme, so that whatever you
type in the terminal (i.e. your opinion) will be protected. Under this circumstance, no one
really knows who is typing what. Because of this, not a single person can dominate the
meetings. In the worst case, we might say "some ideas" are dominating the meeting, but
this is perfectly fine because this is as a matter of fact an aim of the GDSS: to help
meeting participants voice their opinions from an idea-oriented mindset. For example,
simply because you have a prejudice against person A does not mean that you are going
to reject the idea being proposed in the meeting, because you do not know who is
proposing that idea!!
Besides, this anonymity scheme will also help those team members who are shy to
voice opinions. And with the anonymity, people are likely to be more honest, just as
you'll say more, and more honestly on the professor's evaluation form if you know
whatever you write will not affect your final grade on the course. This, of course, is
because you know you don't have to worry about the consequences.
However, whether this anonymity is good or not can be very controversial. The
success of meetings supported by GDSS depends largely on the conduct of the
participants. If people take advantage of the anonymity system by typing obscene
words or foul language, this system may be banned for the good of the organization.
The components that make up a typical GDSS installation are:
♦ Hardware: including the conference facility itself, the room, tables, chairs, the
layout of the facility, electronic hardware, audiovisual equipment and computer
hardware;
♦ Software tools: including tools for organizing ideas, gathering information, and
ranking and prioritizing;
♦ Questionnaire tools: support the facilitators and group leaders as they gather
information before and during the prioritization process.
♦ Additional tools might include: group outlining and writing tools, software that
stores and reads project files and software that allows the attendees to view
internal operational data stored by the organization's production computer
systems.
The Commander
Figure 12.1
highest quality materials and exceptional attention to detail. For your organization, the
Commander's classic "c" shape provides executives with a unique sense of team, while
the 40" of elbow room will deliver ample personal space and comfort.
The Executive
Figure 12.2
As the signature table of the IS line, the Executive is a high quality platform that
serves both as an exquisite centerpiece and a powerful collaborative tool (refer to figure
12.2). Utilized by Learning Organizations across the country, including GDSS, the
Executive addresses the demands of an organization that needs to unleash the creativity
and innovation of its employees while maintaining an atmosphere of elegance.
The Diplomat
Figure 12.3
Simplicity often produces the greatest results. The Diplomat was forged from this
ideology. This table incorporates the basics of group collaboration into a simple, elegant
design that is perfect for teams' ongoing process work. (Refer to figure 12.3) Available
with or without the incorporation of collaborative technology, the Diplomat's compact
design is ideal for small conference rooms or break out areas. As a stand alone table, or
as a complement to a larger Innovation Suite, the Diplomat will provide your organization
with a tool that empowers teams to produce quality results in less time.
The Two-Top
Figure 12.4
The Two-Top design was born out of a demand for a flexible collaborative tool that
could meet a variety of needs in a taxing environment (refer to figure 12.4). Built with
the same attention to detail and quality as the full size Innovation Suites, the Two-Top
table takes advantage of sliding-covered laptop computers, self-contained networking
hardware, and unique "daisy-chaining" capabilities to produce a multitude of
configurations. This configurable platform increases the flexibility of boardroom
environments, serving as a traditional conference table, a roll-around work station, or a
powerful collaborative tool.
Advantages
Group Decision Support Systems give groups several advantages over many
traditional, non-automated group meetings (Nunamaker, et al., 1991):
• More participation
Participants will not be as subject to group think or conformance pressure (the reluctance
to criticize the comments of others due to politeness or fear of reprisals). In addition, each
group member will have more "air time" or time to contribute ideas. In non-automated
meetings, people must listen to others speak and pausing to reflect can cost a turn at
comment or response; a GDSS allows everyone to "speak" in parallel. In a typical
meeting, group members have only a few minutes to express their ideas rather than the
entire meeting time. In some non-automated meetings, a few group members may exert
undue influence or monopolize the group's time; a GDSS makes every participant equal,
eliminating member status incongruities. Finally, more information will be presented to
the group as more participate.
• Group synergy
Other group members will be able to use an idea in a manner that the originator did
not because participants have different information skills. Also, the group as a whole will
be able to catch errors in a comment better than the individual who proposed the idea.
Reading a comment often gives creative stimulus to others in the group. Also, groups
may be more likely to consider an idea as the group's idea rather than an individual's
because ideas have been merged together.
• Automated record keeping
A GDSS can record all comments generated during the meeting, and consequently,
the group participants may not need to take notes. In a non-automated setting, group
members have to remember comments (rather than thinking of new ones) until they have
a chance to speak. Participants may also forget what has been said before. In vocal
meetings, some participants may not understand what was said or they may not be able to
process the information quickly enough. This automated log of the discussion supports
the development of an organizational memory from meeting to meeting.
• More structure
A GDSS also provides a certain amount of structure to the meeting. With this
structure in place, it is more difficult to deviate from the problem-solving cycle and make
incomplete or premature decisions. The group has a more concentrated discussion, and
they stay focused on the issues throughout the meeting. Lower levels of non-task
interactions (gossiping, for example) in such groups have been observed as compared
with traditional meetings.
• Other benefits
• Disadvantages
There are some disadvantages to the technology, however, and they include:
• Slow Communication
Most people speak much faster than they type, and thus would usually prefer a
verbal environment (other things being equal). However, a GDSS allows participants to
review recorded comments (people may read and scan faster than they can hear and
process). Other advantages, including anonymity and parallel communication, may
override the slow typing speed. The breakeven point, where it is more efficient to type in
parallel rather than speak and listen in sequence, occurs at a group size of approximately
eight members (depending upon typing speed).
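The breakeven claim can be illustrated with a toy timing model: speaking is sequential, so total time grows with group size, while typing happens in parallel. The 8:1 speaking-to-typing speed ratio below is an assumption chosen to match the figure of about eight members quoted above:

```python
# Toy model: each member has one comment to contribute.
def speaking_time(members, secs_per_comment=30):
    """Comments are spoken one at a time, so total time grows with group size."""
    return members * secs_per_comment

def typing_time(members, secs_per_comment=30, slowdown=8):
    """Everyone types at once, so total time is one (slower) comment's worth."""
    return secs_per_comment * slowdown

for n in (4, 8, 12):
    print(n, speaking_time(n), typing_time(n))
# Under these assumptions, typing stops being slower at a group size of 8.
```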
• Conclusion
13.1 Introduction
Expert systems are computer programs designed to review a set of facts (market
conditions) and apply a set of rules (knowledge base) to arrive at the same conclusion
that a team of experts would make if presented with the same set of facts. There are two
primary components to an expert system: a knowledge base and an inference engine.
The next step in the process involves applying the rules against a set of facts to
determine which rules apply. Embedded within the expert system is an inference engine,
which determines when and how to apply the rules. The unique order in which the rules
are applied to a given set of facts allows the expert system to be dynamic. As one might
expect given the features and capacity of these technologies, the number of potential
good solutions to a problem can be vast. The inference engine helps to quantify the
solution sets to provide our portfolio managers with consistent, unbiased, unemotional
problem analysis.
Recently, expert systems based decision models have been applied to various areas
within the auditing domain (McCarthy et al. [1992]). These models function adequately
when compared to human judgements. However, expert systems based techniques may
not be the only approach to decision problems within the auditing domain. Another
approach that is often available consists of conventionally coded decision support
computer models. It may be the case that these techniques provide a more efficient and at
least equally effective means of addressing problems within the auditing domain.
Decision support systems may have a potential for greater efficiency when applied
to some audit decision problems. This paper assesses whether decision support systems
are a match for the effectiveness of expert systems when applied to one particular audit
decision problem, namely, the assessment of inherent and control risk in the purchases
transaction cycle and the revision of the hours allocated to an audit program based on
those assessments. Two computer models were constructed, one based on an expert
systems shell (CLIPS Version 4.2, developed at NASA's Johnson Space Center), and the other a
decision support model constructed using a conventional compiler (Microsoft QuickBasic
Version 4.5). Both models used the same data set as the basis of their knowledge base. This
data set was derived from a case study questionnaire administered to a sample of 80 auditors.
The objective of constructing these two models was to compare the output they
generated to determine if they were comparably effective. This comparison is primarily
concerned with the consistency of the output, but it also addresses other factors such as
the format in which the output is presented and the degree to which the computer models
were accessible to the user.
Before identifying the data used by decision makers and the significance they attach
to them, it is necessary to develop a framework within which to fit these data items. The
audit risk model provides a decision framework that is prescribed for use by auditors for
audit planning decisions. The way that auditor's risk assessments impact on audit
planning provides a readily accessible field for the development of computer models. The
application of the audit risk model by practicing auditors has attracted considerable
research attention (Peters, Lewis and Dhar [1989]). Mock and Wright [1990] examined
the link between audit planning judgement and observed levels of inherent and control
risk. They argued that risk assessments have a stronger impact on the extent of audit
work carried out than on the nature of work carried out and that risk assessments may
be subject to change over time. The models examined in this paper use a fixed set of
audit procedures; the models adjust the hours allocated for performing those
procedures, not the list of procedures.
The user interacts with the system through a user interface, which may use menus,
natural language or any other style of interaction. Then an inference engine is used to
reason with both the expert knowledge (extracted from our friendly expert) and data
specific to the particular problem being solved. The expert knowledge will typically be
in the form of a set of IF-THEN rules. The case specific data includes both data
provided by the user and partial conclusions (along with certainty measures) based on
this data. In a simple forward chaining rule-based system the case specific data will be
the elements in working memory.
Almost all expert systems also have an explanation subsystem, which allows the
program to explain its reasoning to the user. Some systems also have a knowledge base
editor which helps the expert or knowledge engineer to easily update and check the
knowledge base.
One important feature of expert systems is the way they (usually) separate domain
specific knowledge from more general purpose reasoning and representation
techniques. The general purpose bit (in the dotted box in the figure) is referred to as an
expert system shell. As we see in the figure, the shell will provide the inference engine
(and knowledge representation scheme), a user interface, an explanation system and
sometimes a knowledge base editor. Given a new kind of problem to solve (say, car
design), we can usually find a shell that provides the right sort of support for that
problem, so all we need to do is provide the expert knowledge. There are numerous
commercial expert system shells, each one appropriate for a slightly different range of
problems. (Expert systems work in industry includes both writing expert system shells
and writing expert systems using shells.) Using shells to write expert systems generally
greatly reduces the cost and time of development (compared with writing the expert
system from scratch).
Figure 13.1
• user interface: provides a means for the non-expert to interact with the
knowledge contained in the knowledge base,
• inference engine: the core of the expert system--determines how the rules in the
knowledge base are processed,
• working memory: an area of memory containing (a) observed facts, and (b) new
facts deduced from observed facts.
1. an observed fact: Socrates is a man
2. a rule: All men are mortal
3. a new fact deduced from the observed fact and the rule: Socrates is mortal
The only difference between this syllogism and the inference mechanism of an
expert system is that in expert systems, the rules are represented differently. Specifically,
expert systems cannot operate on universal quantifiers such as "all" and thus variables
have to be used instead. For example, the rule
All men are mortal
would have to be re-written as follows for inclusion in an ES knowledge base:
RULE 1
IF x is a man
THEN x is mortal
This so-called "IF-THEN rule" has two parts: an antecedent (IF x is a man)
and a consequent (THEN x is mortal). A rule is said to fire whenever its antecedent is
satisfied. When a rule fires, its consequent is instantiated and added to working
memory. To illustrate this, consider the following "inference trace":
Socrates is a man
working memory = working memory + Socrates is mortal
Note that there is nothing magic here. The expert system simply has the ability to
match patterns. For instance, it considers the antecedent of the rule and identifies the
following pattern: <variable name> is a man. Then it searches working memory for a
fact of the form <concrete object> is a man. Since a match is found, the variable x is
bound to the concrete object "Socrates" and the consequent Socrates is mortal is
added to working memory. This new fact may be used to satisfy some other rule, and
thus the chain of inference continues.
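The pattern matching, variable binding, and rule firing just described can be sketched in Python. This is an illustrative toy under the assumption of single-antecedent rules, not any particular shell's engine; the function names (`match`, `substitute`, `forward_chain`) are invented for the example.

```python
# Toy forward-chaining sketch: a rule's antecedent is a pattern ("?x is a man").
# When it matches a fact in working memory, ?x is bound to the concrete object
# and the instantiated consequent is added to working memory.

def match(pattern, fact):
    """Return variable bindings if the fact matches the pattern, else None."""
    p_toks, f_toks = pattern.split(), fact.split()
    if len(p_toks) != len(f_toks):
        return None
    bindings = {}
    for p, f in zip(p_toks, f_toks):
        if p.startswith("?"):
            bindings[p] = f          # bind variable to concrete object
        elif p != f:
            return None
    return bindings

def substitute(sentence, bindings):
    """Instantiate a pattern by replacing bound variables."""
    return " ".join(bindings.get(tok, tok) for tok in sentence.split())

def forward_chain(rules, working_memory):
    """Fire rules repeatedly until no new fact can be deduced."""
    changed = True
    while changed:
        changed = False
        for antecedent, consequent in rules:
            for fact in list(working_memory):
                b = match(antecedent, fact)
                if b is not None:
                    new_fact = substitute(consequent, b)
                    if new_fact not in working_memory:
                        working_memory.add(new_fact)   # the rule fires
                        changed = True
    return working_memory

rules = [("?x is a man", "?x is mortal")]
wm = forward_chain(rules, {"Socrates is a man"})
print(wm)  # the set now also contains 'Socrates is mortal'
```

Note how the new fact is simply appended to working memory, where it may in turn satisfy the antecedent of some other rule, continuing the chain of inference.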
The deductive process itself is relatively straightforward. However, there are two
fundamentally different approaches to deciding how to navigate the knowledge base:
forward chaining and backward chaining. In either case, the problem to be solved is
posed as a question, for instance: "Is Socrates mortal?"
In forward chaining, deduction is data driven. In other words, the goal is not
used to drive the inference process. Instead, the inference engine simply checks the
contents of working memory from time to time to see if the question has been answered.
The basic flow of the forward chaining process is shown below in figure 13.2.
Figure 13.2
In backward chaining, deduction is goal driven. The inference starts with the
original question and seeks to answer it by matching it with the consequent of a rule. The
basic flow of the backward chaining process is shown below in figure 13.3
Figure 13.3
There are two important features of backward chaining. The first is that the goal (or
question to be answered) always drives the search strategy. The second is that the current
goal changes as new unresolved variables are encountered. The result is that the
backward chaining procedure is called recursively until the original question is answered
or a sub-goal cannot be resolved.
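The recursive, goal-driven procedure described above can be sketched in Python. This is an illustrative toy with invented function names, assuming each rule has a single antecedent: to prove a goal, either find it as a known fact, or match it against a rule's consequent and recursively try to prove the resulting sub-goal.

```python
# Toy backward-chaining sketch: the goal drives the search. Matching the goal
# against a rule's consequent binds variables; the instantiated antecedent
# becomes the new (sub-)goal, proved by a recursive call.

def match(pattern, sentence):
    """Return variable bindings if the sentence matches the pattern, else None."""
    p_toks, s_toks = pattern.split(), sentence.split()
    if len(p_toks) != len(s_toks):
        return None
    bindings = {}
    for p, s in zip(p_toks, s_toks):
        if p.startswith("?"):
            bindings[p] = s
        elif p != s:
            return None
    return bindings

def substitute(sentence, bindings):
    return " ".join(bindings.get(tok, tok) for tok in sentence.split())

def backward_chain(goal, rules, facts):
    """Return True if the goal can be proved from the facts and rules."""
    if goal in facts:                     # goal resolved directly
        return True
    for antecedent, consequent in rules:
        bindings = match(consequent, goal)
        if bindings is not None:
            sub_goal = substitute(antecedent, bindings)
            if backward_chain(sub_goal, rules, facts):   # recursive call
                return True
    return False                          # goal could not be resolved

rules = [("?x is a man", "?x is mortal")]
facts = {"Socrates is a man"}
print(backward_chain("Socrates is mortal", rules, facts))  # True
```

The question "Is Socrates mortal?" matches the consequent of RULE 1, generating the sub-goal "Socrates is a man", which is resolved directly from the known facts.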
The dependence people place on cases poses a problem to those who treat human
cognition as being primarily rule-based. Much work in artificial intelligence, for
example, is done in "expert systems." These systems are based on the notion that expert
knowledge consists of a collection of rules. By determining the rules an expert in a
domain uses, the idea goes, we may then simulate expert behavior in that domain. Not
surprisingly, expert systems have run into a significant problem: they are brittle. When
faced with a problem which bends the rules, they are unable to cope. They fail because
they are not grounded in cases. They are unable to fall back on the details of their
experience, find a similar case, and apply it. Likewise, they are unable to use similarities
between tough problems and previous experiences to update their rules. Their failure to
retain cases cripples their ability to learn from their experiences.
Writing an expert system generally involves a great deal of time and money. To
avoid costly and embarrassing failures, people have developed a set of guidelines to
determine whether a problem is suitable for an expert system solution:
1. The need for a solution must justify the costs involved in development. There
must be a realistic assessment of the costs and benefits involved.
4. The problem is well structured and does not require (much) common sense
knowledge. Common sense knowledge is notoriously hard to capture and represent. It
turns out that highly technical fields are easier to deal with, and tend to involve relatively
small amounts of well formalised knowledge.
5. The problem cannot be easily solved using more traditional computing methods.
If there's a good algorithmic solution to a problem, you don't want to use an expert
system.
7. The problem is of proper size and scope. Typically you need problems that
require highly specialized expertise, but would only take a human expert a short time to
solve (say an hour, max).
It should be clear that only a small range of problems are appropriate for expert
system technology. However, given a suitable problem, expert systems can bring
enormous benefits. Systems have been developed, for example, to help analyze samples
collected in oil exploration, and to help configure computer systems. Both these systems
are (or were) in active use, saving large amounts of money.
Rule-based systems can be either goal driven using backward chaining to test
whether some hypothesis is true, or data driven, using forward chaining to draw new
conclusions from existing data. Expert systems may use either or both strategies, but the
most common is probably the goal driven/backward chaining strategy. One reason for
this is that normally an expert system will have to collect information about the problem
from the user by asking them questions - by using a goal driven strategy we can just ask
questions that are relevant to a hypothesized solution.
Anyway, in a simple goal-driven rule-based expert system there are often a set of
possible solutions to the problem - maybe these are a set of illnesses that the patient
might have. The expert system will consider each hypothesized solution (e.g., has_Cold
(fred)) and try to prove whether or not it might be the case. Sometimes it won't be able to
prove or disprove something from the data initially supplied by the user, so it will ask the
user some questions (e.g., "have you got a headache?"). Using any initial data plus
answers to these questions it should be able to conclude which of the possible
solutions to the problem is the right one.
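A goal-driven consultation of this kind can be sketched as follows. This is a toy example: the illness and symptom names are invented, and a real system would use certainty measures rather than plain yes/no answers. The point is only that facts the system cannot deduce from the initial data are turned into questions for the user.

```python
# Toy goal-driven diagnostic loop: for each hypothesized solution the system
# tries to prove it; any unknown symptom triggers a question to the user.

RULES = {
    "has_cold": ["headache", "runny_nose"],
    "has_flu": ["headache", "fever", "aches"],
}

def diagnose(known, ask):
    """known: dict symptom -> bool; ask: callable used for unknown symptoms."""
    for illness, symptoms in RULES.items():
        proved = True
        for s in symptoms:
            if s not in known:
                known[s] = ask(s)   # e.g. "have you got a headache?"
            if not known[s]:
                proved = False      # this hypothesis fails;
                break               # move on to the next one
        if proved:
            return illness
    return None

# Simulated user who answers yes only to headache and runny_nose:
answers = {"headache": True, "runny_nose": True, "fever": False}
print(diagnose({}, lambda s: answers.get(s, False)))  # has_cold
```

Because the strategy is goal driven, the user is never asked about symptoms irrelevant to the hypothesis currently under consideration.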
The essential point of this test is that it places no restriction on what techniques
are used to attain a successful result. Thus, if we accept this test, and its implications,
artificial intelligence need not be the same form of intelligence, or function in the
same way as human intelligence, so long as they provide results that are comparable
with those produced by a human expert analysing the same problem. An extension of
this position is to say that expert systems based models and procedural models may
be treated as equivalent, provided that they reach comparable conclusions.
A procedural coded computer model can approach this type of structure. Such a
model can access a knowledge base composed of heuristic, or numerically based
rules. The programme code, which determines how that knowledge base is accessed
and applied, can act in the role of meta-knowledge. However, conventional programming
languages are not adapted specifically for the construction of meta-knowledge structures
while expert systems shells are. For this reason, the scenario outlined above may tend to
arise. While it is possible that procedural and expert systems based models may reach
comparable conclusions, the procedural model may utilise a considerably larger portion
of the knowledge base than is strictly necessary to do so.
This need not be a significant shortcoming. The actual cost of performing analyses
on an ongoing basis, using what may be a less efficient model, is relatively small when
compared to the potential cost of developing a more efficient model that involves much
more advanced developmental techniques.
Waterman (1986) presents three sets of criteria for identifying those situations in
which the development of expert systems is appropriate. These are summarised below.
Similarly, it would appear that Waterman's third class of criteria, the appropriateness
of expert systems, must be met completely. In the absence of these criteria, the
development of an expert system will still be possible; however, violating these
conditions would present a situation where that development would not be justified,
since the problem concerned could clearly be addressed by simpler modelling
techniques, or was not worthy of modelling at all. Once again, it
must be assumed that the presence of pre-existing systems denotes that problems from
within the auditing domain meet these criteria.
On the other hand, Waterman's second category, which seeks to codify those
situations where the use of expert systems is justified, as opposed to possible, would
not appear to need to be met completely in each case. Satisfaction of any one of these
criteria would appear to provide a justification for the use of expert systems to address
that particular problem, provided no simpler technique were available. It seems that at
least two of these criteria, being the (relative) scarcity of human experts, and high
solution payoffs, are met by problems from within the auditing domain.
14.1 Introduction
The linkage of minicomputers in each District (State) office and the Bureau
mainframe computer forms the basis for a Distributed Information System (Posson, 1985).
The Distributed Information System (DIS) provides a high level of local computing
and data-processing capability. Computerized files are transferred between sites
(nodes) within the network and work is done at distant locations. The DIS provides
interactive and batch processing in support of Water Resources Division's State and
National water data files, and provides for the diverse computational needs of the
Division. These needs include data management, hydrologic modeling and statistics,
and administrative programs.
The DIS computers are connected to each other via a TCP/IP (Transmission
Control Protocol/Internet Protocol) over an Ethernet Local Area Network (LAN). The
LANs are connected together using a frame-relay Wide Area Network (WAN) called
DOINET. The WAN uses Stratacom switches and Cisco routers for the network
service backbone.
hydrologic data files. In addition, the administrator performs other tasks that are
necessary for efficient operation of the system.
A security system is used in ADAPS to restrict access to the data files and to
limit the ability of some users to perform certain operations in the system. The
security system is a multilevel system. User classes of System Administrator (SYST),
Data Base Administrator (ADBA), User (USER), and Cooperator (COOP) have been
established. In addition, Ingres Access Control Lists (Seybold, 1985, p. 3-1 to 3-28)
are used. Security measures are implemented by the local administrator or manager in
consultation with District supervisory personnel.
14.3 Programs
The programs (software) for ADAPS are developed by personnel of the Water
Resources Division. The master copy resides at Headquarters in Reston, Virginia, and is
electronically transferred over the network or distributed by magnetic tape from
Headquarters to each of the District offices. These offices are located nationwide and
include Alaska, Hawaii, Guam, and Puerto Rico. Some Districts have more than one
office, and a copy of the software is located at the subdistrict offices, if they have a
minicomputer.
Most ADAPS programs are written using the Fortran77 programming language, and
several different categories of software make up the water data-processing system. The
major categories are:
• General-purpose programs.
The programs in each of the above categories are used for a specific function or
purpose. For example, the utility programs are used to initialize, create, update, and
maintain the numerous support, processing, and data (time-series) files that make up
the District data bank. The general-purpose programs are used to process many different
types of water data along with the subsidiary calculations and computations that go into
computing and producing a water data record. Most programs use insert and common
blocks to share and communicate data between programs, to provide software flexibility,
and to ease software maintenance. The operating system software is called
UNIX (Seybold, 1985, p. 1-2). The Command Procedure Language (CPL) is a PRIMOS
command level language that provides a programming capability (Landy, 1982).
Operating system commands or directives are passed to PRIMOS for execution after they
are stored in a CPL file (suffix .CPL). The application programs are used primarily to
compute statistical information about the hydrologic data. The graphics programs are
used for preliminary viewing of the data, for comparison purposes, and for report
purposes. The system contains both user-written and vendor-supplied programs such as
DISSPLA (Integrated Software Systems Corporation, 1984).
The Automated Data Processing System (ADAPS) programs are used to compute
water-data records on an electronic computer. The machine operations generally parallel
the manual operations. The sequence of processing the data is well established; however,
the particulars of each step by electronic means may change in response to continued
improvement in storage and access procedures, new or expanded needs, and search for
additional efficiencies. The general functions of the programs are to provide input and
output to and from the computer in a logical sequence; this sequence includes the
computational steps necessary to efficiently compute water data records. Once the
records are computed, they must be saved and archived for general use. Therefore,
programs are available to provide for a broad scope of functionality including
initialization, maintenance, security, backup, recovery, restart, and other overall data-
processing requirements.
14.4 Files
The ADAPS files are the repositories that contain the information and data
necessary to use ADAPS for computing and processing water data records as previously
mentioned. The files consist of program source code and associated executable code,
CPL files, and operation files.
Some of these operational files are shared by the various NWIS systems and some
are specific to ADAPS. The shared and ADAPS-specific files are briefly described in
Section 3 of this manual.
Most files used and/or maintained by ADAPS are structured as MIDAS files and
managed by a utility and user-written software. The MIDAS files allow records to be
retrieved rapidly and efficiently on the basis of selected data elements defined as key
indexes (elements). MIDASPLUS utilities, user-written programs, and scripts are used to
create the file templates, create input files, and subsequently populate the files. Other
utilities are used to dump the files, delete files, cleanup files, and monitor files.
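The idea of retrieving records rapidly on the basis of key elements can be illustrated generically. This sketch is not the actual MIDASPLUS interface; the station numbers, field names, and function names are invented for the example.

```python
# Generic sketch of key-indexed record retrieval: records are stored
# sequentially, and an index maps each key element to record positions,
# so a lookup avoids scanning the whole file.

records = [
    {"station": "01646500", "date": "1987-05-01", "discharge": 12300},
    {"station": "01646500", "date": "1987-05-02", "discharge": 11800},
    {"station": "09380000", "date": "1987-05-01", "discharge": 14500},
]

# Build an index on the 'station' key element.
index = {}
for pos, rec in enumerate(records):
    index.setdefault(rec["station"], []).append(pos)

def retrieve(station):
    """Fetch records by key via the index rather than a full scan."""
    return [records[p] for p in index.get(station, [])]

print(len(retrieve("01646500")))  # 2
```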
Some information regarding the file creation was provided in informal instructions
(Trapanese, S.M., U.S. Geological Survey, written commun., 1987) to install ADAPS on
the local minicomputer and to convert existing minicomputer and mainframe data files
for use in ADAPS. Additional information concerning initialization (creation) of files is
presented later in this manual. ADAPS menu (program) options are selected to perform
the various functions necessary to create new MIDASPLUS files.
In virtually every data-processing system, the possibility exists that errors may
occur that accidentally alter or destroy data stored on disks in data files. This may occur
because of unexpected hardware failures, natural disasters such as power outages, etc., or
through improper processing of the data.
It is essential, therefore, to provide a means to ensure that any lost data can be
recovered. The most common method used is backup files. A backup file is merely a
copy of a file stored on magnetic tape or disk. If the file is destroyed or becomes
unusable, the backup can be used to recreate or restore the file. In transaction-oriented
(online) systems, backups are critical because updates to a file can occur at any time.
Therefore, the system should provide for creating backups on a regular basis (sometimes
every hour or day) and saving the transactions that occur to the file after the backup file is
made (incremental backups). If the file is lost, it can be re-created from the backup file
and then the transactions which have been saved can be processed against the file to bring
it back to the status it was before it was lost.
In the event of a system failure, the file can be restored using the backup files, and
messages sent to terminals asking the users to re-send those transactions not processed.
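The backup-plus-transaction-log recovery idea described above can be sketched as follows. This is an illustrative toy; the record names, values, and function names are invented.

```python
# Sketch of recovery from a full backup plus saved transactions: restore the
# last backup copy, then re-apply the logged transactions in order to bring
# the file back to the state it was in before it was lost.

def apply_transaction(state, txn):
    """A transaction sets one record's value."""
    key, value = txn
    state[key] = value
    return state

def recover(backup, log):
    """Rebuild current state from a backup copy and the saved transactions."""
    state = dict(backup)             # start from the backup copy
    for txn in log:                  # replay transactions in order
        apply_transaction(state, txn)
    return state

backup = {"rec1": 100, "rec2": 200}      # last full backup
log = [("rec2", 250), ("rec3", 300)]     # transactions saved after the backup
print(recover(backup, log))  # {'rec1': 100, 'rec2': 250, 'rec3': 300}
```

Replay order matters: transactions must be applied in the sequence they originally occurred, which is why online systems log them as they happen.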
Even though knowledge is ultimately grounded in the brains of the users of DIS,
we can devise mechanisms that infer how tokens of knowledge in DIS relate to one
another, from the way these tokens are used by a community of users. Tokens of
knowledge are all the different types of information stored in the nodes of a network, e.g.
text files, databases, links between documents, etc. The meaning of these tokens is
ultimately established by the users of DIS, both individuals and societies. Whether
artificial information networks can ever produce autonomous creation of meaning is an
open question. What they can do exceptionally well, however, is to monitor how their
users manipulate and relate the tokens of knowledge stored. TalkMine does not pursue a
passive analysis of relationships of knowledge tokens, but rather an active categorization of
these tokens by changing the structure of a DIS to adapt it to the changing interests of its
users. We refer to this active organization of distributed information as knowledge mining.
The kind of knowledge management proposed with TalkMine will result in the
evolution of a "Second Generation Internet" or DIS. Theoretically, TalkMine is based on
the idea that there cannot be creation and open-ended evolution of new knowledge in
artificial systems until a fundamental coupling between structure and semantics is
established, that is, until some artificial embodiment is implemented. TalkMine empowers
DIS with such a coupling as a result of the continual feedback between the level of
information utilization by users (semantics and pragmatics), and the relations and links
between tokens of knowledge in networks which define the structure of DIS. The ability
to gather and combine information from different neighborhoods (e.g., different networked
databases or WWW sub-regions) of DIS as a result of the continual integration of user
(interactive) queries, is shown to provide this ability for open-ended construction of
knowledge categories.
MANAGEMENT
INFORMATION
SYSTEMS
First Edition
Work Book
Chapter 1
Question 1
List the five components of an information system and describe four roles
people play in the information environment.
Question 2
Question 3
Question 4
Question 5
c) What steps should managers take to keep valuable information properly in
most organisations?
Question 6
Support for Strategic Planning Support for Management Control Support for
Operational Control Improved Product Quality Improved Product Delivery
Chapter 2
Question 1
a) Transaction processing
d) Management Information
f) Data Warehouses
Question 2
Question 3
Question 4
What is business network design? What are the four types of network redesign?
Question 5
Question 6
Explain how on-line analytical processing tools assist in maintaining data archives.
Question 7
Chapter 3
Question 1
Question 2
Question 3
a) Analytical support
b) Communication support
Question 4
What do you understand by the term organisation? Outline the three characteristics
that organisations share in common.
Question 5
Question 6
Question 7
What is the system development life cycle? Briefly explain the different stages of
the development life cycle.
Question 8
b) System
c) Functional Specification
Chapter 4
Question 1
Question 2
Question 3
Define the term business transformation. What is its impact on the enterprise?
Question 4
Question 5
What has been the impact of the emergence of Internet and virtual banking on
the financial industry?
Question 6
Question 7
Question 8
i. Describe one way in which support of EUC has changed the structure of an
organisation's information service provision.
ii. Briefly describe the general changes in the nature of work undertaken by
information services personnel that you would expect with the introduction
of EUC into an organisation.
Chapter 5
Question 1
Differentiate between the terms management and leadership. What are the main
functions of management?
Question 2
Question 3
With the help of a diagram explain the managerial structure. Explain the role of
the manager at each level.
Question 4
Question 5
Different employees have different needs. The most effective managers know
this and have learned how to adapt their style based on the individual's needs. In this
respect, describe the following terms:
a) Surviving perspective
b) Learning perspective
c) Competing perspective
d) Relating perspective
e) Teaching perspective
Question 6
a) What four common functions do all managers perform? Briefly describe them.
Question 7
a) Define the terms 'data' and 'information'. What steps must be taken to have
good information? List and explain them briefly.
Chapter 6
Question 1
a) Knowledge representation
b) Artificial Intelligence
c) Knowledge Engineering
d) Knowledge Base
e) Expert Systems
Question 2
a) List and briefly describe the disciplines that constitute the subject of artificial
intelligence (AI).
c) Do most expert systems in use today replicate the abilities of a human expert?
What do they do?
Question 3
Question 4
Question 5
Define the term meta-knowledge. What is its importance in the field of
artificial intelligence?
Question 6
Question 7
Chapter 7
Question 1
What does framework refer to?
Question 2
Describe the term strategic planning.
Question 3
What are the components of an IS strategy?
Question 4
What are the contents of the IS strategic plan?
Question 5
Write short notes on the following: -
a) Product differentiation
b) Competitive scope
c) Differentiation bases
Question 6
What does the term 'hybrid manager' refer to?
Question 7
Describe the strategic planning process.
Question 8
Discuss how the role of information systems has changed over the years.
Question 9
In Porter's view, the performance of individual corporations is determined by the
extent to which they cope with, and manipulate, the five key 'forces' which make up the
industry structure. What are those five key forces?
Question 10
Discuss how differentiation can help enterprises achieve above average
performance.
Chapter 8
Question 1
What are the criteria required to enhance the effectiveness of workgroups?
Question 2
What are the main drawbacks of workflow market structure?
Question 3
What are the key functions that the workflow engine must support to completely
support application requirements?
Question 4
What are the advantages that Internet provides over the workflow computing
infrastructure?
Question 5
Explain how a workgroup is connected to horizontal and vertical organizations.
Give an example of a workgroup, one horizontal and one vertical organization
connected to that workgroup, and one information flow for each connection.
Question 6
What are the advantages of workgroup computing?
Question 7
What are the main limitations of transaction-based workflow systems?
Question 8
Distinguish between transaction-based, collaborative and ad hoc workflow systems.
Question 9
Explain with the aid of examples what is rapid application generation.
Question 10
Explain the concept of environmental independence with respect to workflow systems.
Chapter 9
Question 1
Question 2
Question 3
What are the functions that groupware supports? Give some examples of
software that facilitate that support.
Question 4
c) Videoconferencing software
d) Whiteboard software
Question 5
Question 6
What does the term CSCW (Computer-Supported Cooperative Work) refer to?
Question 7
Question 8
Question 9
What are the main factors that need to be taken into consideration while
designing a groupware?
Chapter 10
Question 1
Computation devices encode data using basically two types of codes: external and
internal code. Distinguish between these two types of codes.
Question 2
Write short notes on the following:
a) Data processing
b) Database
c) Database Management System
Question 3
In what ways is personal strategic planning beneficial?
Question 4
Why do professionals resist planning?
Question 5
Describe four categories of information systems personnel. What are the functions
of each? Which of these categories are combined in a personal information system?
Question 6
Distinguish between data and information.
Question 7
a) What tasks are involved in designing the people component of a personal
information system?
b) What is parallel installation?
c) What is pilot installation?
Question 8
Summarize how the systems development effort changes when developing
information systems at the personal and workgroup levels.
Chapter 11
Question 1
Question 2
Question 3
Question 4
Question 5
Question 6
What are the main issues that an Executive Support System needs to consider?
Question 7
Question 8
Question 9
Chapter 12
Question 1
Question 2
Question 3
Question 4
b) Explain the basic structure and process of the data management component
of a DSS program.
c) Explain the relationship of the data interface program in the DSS and
extraction programs on an organizational computer.
Question 5
Question 6
c) Explain the two ways in which the term management information system is
used. Give the broad definition of MIS. Give the narrow definition of MIS.
Question 7
Question 8
Chapter 13
Question 1
Question 2
Question 3
What are the main guidelines that need to be considered to determine whether a
problem is suitable for an expert system solution?
Question 4
What are the five stages in the development of an expert system? In which two
ways does this process differ from the process used to develop other personal,
workgroup, and organisational information systems?
Question 5
a) List and briefly describe the disciplines that constitute the subject of artificial
intelligence (AI).
c) Do most expert systems in use today replicate the abilities of a human expert?
What do they do?
Question 6
What does the term deductive inference refer to? What are its advantages?
Question 7
Question 8
Question 9
Chapter 14
Question 1
c. Knowledge Mining
Question 2
Question 3
What are the major categories of ADAPS programs? Outline the purpose of each
type of program.
Question 4
Question 5
Question 6
Question 7
Question 8