
STUDY UNIT ELEVEN

IT SECURITY AND APPLICATION DEVELOPMENT

11.1 PHYSICAL AND SYSTEMS SECURITY

1. Data Integrity

a. The difficulty of maintaining the integrity of the data is the most significant limitation of computer-based audit tools.

1) Electronic evidence is difficult to authenticate and easy to fabricate

2) Internal auditors must be careful not to treat computer printouts as traditional paper
evidence. The data security factors pertaining to electronic evidence must be considered.

3) The degree of reliance on electronic evidence by the auditor depends on the effectiveness of
the controls over the system from which such evidence is taken.

4) When making recommendations regarding the costs and benefits of computer security, the
auditor should focus on

a) Potential loss if security is not implemented,

b) The probability of the occurrences, and

c) The cost and effectiveness of the implementation and operation of computer security.

5) The most important control is to enact an organization-wide network security policy. This
policy should promote the following objectives:

a) Availability. The intended and authorized users should be able to access data to meet
organizational goals.

b) Security, privacy, and confidentiality. The secrecy of information that could adversely
affect the organization if revealed to the public or competitors should be ensured.

c) Integrity. Unauthorized or accidental modification of data should be prevented.

b. Many controls once performed by separate individuals may be concentrated in computer systems.
Hence, an individual who has access to the computer may perform incompatible functions. As a result,
other control procedures may be necessary to achieve the control objectives ordinarily accomplished by
segregation of functions.

1) These controls can be classified as one of two broad types, physical controls and logical
controls. Physical controls are further divided into two subcategories, physical access controls
and environmental controls.

2. Physical Controls

a. Physical access controls limit who can physically enter the data center.

Keypad devices: Entry of a password or code to gain entry to a physical location or computer system.
Card reader controls: Read information from a magnetic strip on an access card. Controls can then be applied to the cardholder information contained on the magnetic strip.
Biometric technologies: Identify an individual using physiological or behavioral traits, such as fingerprints, retina patterns, hand geometry, signature dynamics, speech, and keystroke dynamics.

b. Environmental controls are also designed to protect the organization’s physical information assets.

3. Logical Controls

a. Access controls have been developed to prevent improper use or manipulation of data files and
programs. They ensure that only those persons with a bona fide purpose and authorization have access
to computer systems.

Access control software: e.g., a firewall, which separates internal from external networks.
Passwords and ID numbers: an effective control in an online system to prevent unauthorized access to computer files.
File attributes: control access to and use of files (read/write, read-only, archive, and hidden).
Device authorization table: restricts file access to those physical devices that should logically need access.
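The password control above can be sketched in Python using only the standard library. This is a minimal illustration, not a production design; the password, function names, and iteration count are invented. Note that only a salted hash is stored, never the password itself:

```python
# Sketch of password-based logical access control (illustrative only).
# The system stores a salted hash, never the plain-text password.
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes) -> bytes:
    # PBKDF2 makes brute-forcing the stored value slow
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

salt = os.urandom(16)
stored = hash_password("s3cret!", salt)   # what the system keeps on file

def verify(attempt: str) -> bool:
    # constant-time comparison avoids leaking information via timing
    return hmac.compare_digest(stored, hash_password(attempt, salt))

print(verify("s3cret!"))   # True  -> access granted
print(verify("guess"))     # False -> access denied
```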

4. Internet Security

a. Connection to the Internet presents security issues

1) A firewall separates an internal network from an external network and prevents passage of
specific types of traffic.

a) A firewall may have any of the following features:

A packet filtering system examines each incoming network packet and drops (does not pass on)
unauthorized packets
A proxy server maintains copies of web pages to be accessed by specified users
An application gateway limits traffic to specific applications
A circuit-level gateway connects an internal device, e.g., a network printer, with an outside
TCP/IP port. It can identify a valid TCP session.
Stateful inspection stores information about the state of a transmission and uses it as
background for evaluating messages from similar sources
Checksums help ensure the integrity of data by checking whether a file has been changed. The system computes a value for a file and then checks whether this value equals the last known value for the file. If the values are the same, the file has likely remained unchanged.
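The checksum procedure can be sketched as follows. This is an illustrative fragment only; the sample data and names are invented, and SHA-256 stands in for whatever checksum algorithm a given system uses:

```python
# Sketch of checksum-based integrity checking: compute a value for a
# file's contents and compare it with the last known value.
import hashlib

def checksum(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

original = b"payroll batch 2024-06"
known_value = checksum(original)   # stored when the file was last verified

# Later: recompute and compare with the last known value
assert checksum(b"payroll batch 2024-06") == known_value  # file unchanged
assert checksum(b"payroll batch 2024-07") != known_value  # file was altered
```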

11.2 INFORMATION PROTECTION

1. Business Objective

a. According to a publication of The IIA, the following five categories are IT Business
Assurance Objectives:

Availability: ensure that information, processes, and services are available at all times.
Capability: ensure reliable and timely completion of transactions.
Functionality: ensure that systems are designed to user specifications to fulfill business requirements.
Protectability: ensure that a combination of physical and logical controls prevents unauthorized access to system data.
Accountability: ensure that transactions are processed under firm principles of data ownership, identification, and authentication.

7. Privacy

a. Management is responsible for ensuring that an organization’s privacy framework is in place. Internal
auditors’ primary role is to ensure that relevant privacy laws and other regulations are being properly
communicated to the responsible parties.

b. The IIA provides guidance on this topic in Practice Advisory 2130.A1-2, Evaluating an Organization’s
Privacy Framework:

1) “Risks associated with the privacy of information encompass personal privacy (physical and
psychological); privacy of space (freedom from surveillance); privacy of communication (freedom
from monitoring); and privacy of information (collection, use, and disclosure of personal
information by others)” (para. 2).

a) Personal information is information associated with a specific individual.

2) “Effective control over the protection of personal information is an essential component of the governance, risk management, and control processes of an organization. The board is ultimately accountable for identifying the principal risks to the organization and implementing appropriate control processes to mitigate those risks. This includes establishing the necessary privacy framework for the organization and monitoring its implementation” (para. 3).

3) “In conducting such an evaluation of the management of the organization’s privacy framework, the internal auditor:

a) Considers the laws, regulations, and policies relating to privacy in the jurisdictions where the
organization operates;

b) Liaises with in-house legal counsel to determine the exact nature of laws, regulations, and
other standards and practices applicable to the organization and the country/countries in which
it operates;

c) Liaises with information technology specialists to determine that information security and
data protection controls are in place and regularly reviewed and assessed for appropriateness;

d) Considers the level or maturity of the organization’s privacy practices. Depending upon the
level, the internal auditor may have differing roles” (para. 7).
11.3 AUTHENTICATION AND ENCRYPTION

1. Application Authentication

a. Application authentication is a means of taking a user’s identity from the operating system on which
the user is working and passing it to an authentication server for verification.

2. Encryption Overview

a. Encryption technology converts data into a code.
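As a toy illustration of the idea that encryption converts data into a code that only a key holder can reverse: the sketch below uses a simple XOR cipher. This is purely pedagogical; real systems use vetted algorithms such as AES, and all names and data here are invented:

```python
# Toy illustration of encryption: a key converts plaintext into an
# unreadable code, and the same key recovers the original.
# (NOT secure -- real systems use vetted algorithms such as AES.)
def xor_cipher(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = b"k3y"
plaintext = b"WIRE $500 TO ACCT 123"
ciphertext = xor_cipher(plaintext, key)

print(ciphertext != plaintext)          # True: the data is now a code
print(xor_cipher(ciphertext, key))      # applying the key again decrypts it
```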

11.4 END-USER COMPUTING (EUC)

1. End-User vs. Centralized Computing

Certain environmental control risks are more likely in EUC:

- Copyright violations
- Unauthorized access to application programs and related data
- Failure to have adequate backup, recovery, and contingency planning
- Lack of centralized control over program development, documentation, and maintenance

The auditors should determine that EUC applications contain appropriate controls. The audit approach is

- to discover the existence of EUC applications and their intended functions,
- to perform a risk assessment, and
- to review the controls included in the applications chosen in the risk assessment.

The audit trail is diminished because of the lack of history files and incomplete printed output.

Available security features for stand-alone machines are limited compared with those in a network.

2. Three Basic Architectures for Desktop Computing

a. Client-server model: processing is divided between a client machine on a network and a server.

- Security for client-server systems may be more difficult than in a mainframe-based system because of the numerous access points.

Load balancing: distributes data and data processing among available servers, e.g., evenly to all servers
or to the next available server.
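The two distribution policies mentioned above can be sketched as follows. The server names and connection counts are invented for illustration:

```python
# Sketch of two load-balancing policies: round-robin (evenly to all
# servers) and next-available (to the least-loaded server).
from itertools import cycle

servers = ["srv1", "srv2", "srv3"]

# Round-robin: each request goes to the next server in turn
rr = cycle(servers)
round_robin_order = [next(rr) for _ in range(5)]
print(round_robin_order)  # ['srv1', 'srv2', 'srv3', 'srv1', 'srv2']

# Next-available: route to the server with the fewest active connections
load = {"srv1": 4, "srv2": 1, "srv3": 3}
def next_available(load):
    return min(load, key=load.get)

print(next_available(load))  # 'srv2'
```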

11.5 PROGRAM CHANGE CONTROL

a. Once a change to a system has been approved, the programmer should save a copy of the production
program in a test area of the computer, sometimes called a “sandbox.”
1) Only in emergencies, and then only under close supervision, should a change be made directly to the production version of a computer program.

2) The IT function must be able to revert immediately to the prior version of the program if
unexpected results are encountered during an emergency change.

b. The programmer makes the necessary changes to this copy of the program.

1) The program appears on the programmer’s screen, not as digital bits and bytes,

but as English-like statements and commands. A computer program in this form, i.e., readable by
humans, is called source code.

c. The programmer transforms the changed program into a form that the computer can execute. The
resulting production-ready program is referred to as executable code.

1) Programming languages that are transformed from source code into executable code at run
time by a specialized converter program are said to be interpreted.

2) Programming languages that are transformed in one step, before run time, and then executed
directly on the computer are said to be compiled.

d. Once the executable version of the changed program is ready, the programmer tests it to see if it
performs the new task as expected.

1) This testing process must never be run against production data. A special set of test data must be maintained for running test programs.

e. The programmer demonstrates the new functionality for the user who made the request.

1) The user either accepts the new program, or the programmer can go back and make further
changes.

f. Once the program is in a form acceptable to the user, the programmer moves it to a holding area.

1) Programmers (except in emergencies) should never be able to put programs directly into
production.

g. The programmer’s supervisor reviews the new program, approves it, and authorizes its move into
production, generally carried out by operations personnel.

1) The compensating control is that operators generally lack the programming knowledge to
put fraudulent code into production.

11.6 APPLICATION DEVELOPMENT

1. Build or Buy

a. The future end users of the system, as well as IT personnel, are also involved, drawing up
specifications and requirements.

2. Systems Development Life Cycle (SDLC)


Definition: A proposal for a new system is submitted to the IT steering committee. Feasibility studies are conducted.
Design: Logical design consists of mapping the flow and storage of the data. Data flow diagrams (DFDs) and structured flowcharts are commonly used.
Development: The actual program code and database structures that will be used are produced. Testing is the most crucial step of the process. User acceptance testing is the final step before placing the system in live operation.
Implementation: Four strategies for converting to the new system:
- Parallel operation: both systems are run at full capacity for a given period (safest)
- Cutover conversion: the least expensive and least time-consuming strategy (riskiest)
- Pilot conversion: one branch, department, or division at a time
- Phased conversion: one function of the new system at a time is placed in operation
Training and documentation are critical. Systems follow-up or post-audit evaluation is a subsequent review of the efficiency and effectiveness of the system after it has operated for a substantial time.
Maintenance: The final phase of the SDLC.

3. Prototyping: creating a working model of the requested system, demonstrating it for the user, obtaining feedback, and making changes to the underlying code. This process repeats through several iterations until the user is satisfied with the system’s functionality. Prototyping is considered “an efficient means of systems development.”

4. Application Development Tools

a. Computer-aided software engineering (CASE)

b. Object-oriented programming (OOP) combines data and the related procedures into an
object.

c. Rapid application development (RAD)

STUDY UNIT TWELVE

IT SYSTEMS

12.1 WORKSTATIONS AND DATABASES

1. Workstations

2. Database Overview

a. A database is a series of related files combined to eliminate redundancy of data items. A single integrated system allows for improved data accessibility.

b. The data are stored physically on direct-access storage devices (magnetic disk).

1) The most frequently accessed items are placed in the physical locations permitting
the fastest access

2) A logical data model is a user view. It is the way a user describes the data and
defines their interrelationships based on the user’s needs, without regard to how the
data are physically stored.

3) A data item is identified using the data manipulation language, after which the
DBMS locates and retrieves the desired item(s).

a) The data manipulation language is used to add, delete, retrieve, or modify data or relationships.

4) The physical structure of the database can be completely altered without having to
change any of the programs using the data items. Thus, different users may define
different views of the data (subschemas).

3. Two Early Database Structures (tape)

a. Storing all related data on one storage device creates security problems.

1) Should hardware or software malfunctions occur, or unauthorized access be achieved, the results could be disastrous.

2) Greater emphasis on security is required to provide backup and restrict access to the database.

* Two inefficiencies are apparent at once in this method of accessing data:

a) The customer’s address has to be stored with every order the customer places, taking up
much unnecessary storage.

b) All intervening records must be read and skipped over in order to find both records
pertaining to this customer.

b. Database technology overcame these two difficulties. There are three main ways of
organizing a database: tree/hierarchical, network, and relational.

Tree or hierarchical: a one-to-many relationship; stores a pointer with each record.
- Advantage: cuts down on data redundancy.
- Disadvantages: retains the necessity of searching every record to fulfill a query; adding new records is awkward, and ad hoc queries are inefficient.
Network: connects every record.
- Advantage: makes queries more efficient.
- Disadvantage: maintenance is far too complex.
Relational: a conceptual arrangement; tables can be joined or linked based on common fields.
- Advantages: easy to maintain; processes queries efficiently.

4. Relational Database Structure

a. A relational structure organizes data in a conceptual arrangement.

1) An individual data item is called a field or column (e.g., name, date, amount).

b. Note that in a relational structure, each data element is stored as few times as necessary.
This is accomplished through the process of normalization. Normalization prevents
inconsistent deletion, insertion, and updating of data items

c. The three basic operations in the relational model are selecting, joining, and projecting.

Selecting: creates a subset of records that meet certain criteria.
Joining: combines relational tables based on a common field or combination of fields.
Projecting: creates a new table containing only the required information.
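The three operations can be sketched with small in-memory tables. The customer and order data below are invented for illustration:

```python
# Sketch of the three relational operations using lists of dicts as tables.
customers = [{"cust_id": 1, "name": "Ann"}, {"cust_id": 2, "name": "Bob"}]
orders    = [{"order_id": 10, "cust_id": 1, "amount": 250},
             {"order_id": 11, "cust_id": 2, "amount": 75},
             {"order_id": 12, "cust_id": 1, "amount": 40}]

# Selecting: a subset of records meeting a criterion
big_orders = [o for o in orders if o["amount"] > 100]

# Joining: combine the tables on the common field cust_id
joined = [{**o, **c} for o in orders for c in customers
          if o["cust_id"] == c["cust_id"]]

# Projecting: a new table with only the required columns
names_amounts = [{"name": r["name"], "amount": r["amount"]} for r in joined]

print(big_orders)      # only order 10 exceeds 100
print(names_amounts)   # customer names paired with order amounts
```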

d. Two features that make the relational data structure stand out are cardinality and
referential integrity.

1) Cardinality refers to how close a given data element is to being unique.

2) Referential integrity means that for a record to be entered in a given table, there must already be a matching record in some other table.
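Referential integrity can be sketched as a check performed before insertion. The table contents and function names are invented; a real DBMS enforces this through declared foreign keys:

```python
# Sketch of referential integrity: an order may be inserted only if a
# matching customer record already exists in the other table.
customers = {1: "Ann", 2: "Bob"}
orders = []

def insert_order(order_id, cust_id, amount):
    if cust_id not in customers:
        raise ValueError(f"referential integrity violation: no customer {cust_id}")
    orders.append({"order_id": order_id, "cust_id": cust_id, "amount": amount})

insert_order(100, 1, 50.0)        # accepted: customer 1 exists
try:
    insert_order(101, 9, 25.0)    # rejected: customer 9 does not exist
except ValueError as e:
    print(e)
```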

* A distributed database is stored in two or more physical sites using either replication or
partitioning.

1) The replication or snapshot technique makes duplicates to be stored at multiple locations.

2) Fragmentation or partitioning stores specific records where they are most needed.

5. Database Terminology

* The schema is a description of the overall logical structure of the database using data-definition
language, which is the connection between the logical and physical structures of the database.

* The database administrator (DBA) is the individual who has overall responsibility for developing and
maintaining the database and for establishing controls to protect its integrity.

* Object-oriented database: developed to meet the need to store not only numbers and characters but also graphics and multimedia applications.

* Hypermedia database: blocks of data are organized into nodes that are linked in a pattern
determined by the user so that an information search need not be restricted to the predefined
organizational scheme. A node may contain text, graphics, audio, video, or programs.

* Data warehouse: Contains not only current operating data but also historical information from
throughout the organization.

12.2 IT CONTROL FRAMEWORKS

1. Control Framework

a. A control framework is a model for establishing a system of internal control.

2. COSO

a. Probably the most well-known control framework in the U.S.

b. The COSO Framework defines internal control as a process designed to provide reasonable assurance regarding the achievement of objectives in the following categories:

● Effectiveness and efficiency of operations

● Reliability of financial reporting

● Compliance with applicable laws and regulations

c. COSO further describes five components of an internal control system:

1) Control environment

2) Risk assessment

3) Control activities

4) Information and communication

5) Monitoring

3. eSAC

a. Electronic Systems Assurance and Control (eSAC) is a publication of The IIA. In the eSAC model, the organization’s internal processes accept inputs and produce outputs.

1) Inputs: Mission, values, strategies, and objectives

2) Outputs: Results, reputation, and learning

b. The eSAC model’s broad control objectives are influenced by those in the COSO Framework:

1) Operating effectiveness and efficiency

2) Reporting of financial and other management information


3) Compliance with laws and regulations

4) Safeguarding of assets

c. eSAC’s IT business assurance objectives fall into five categories:

1) Availability. The organization must assure that information, processes, and services are
available at all times.

2) Capability. The organization must assure reliable and timely completion of transactions.

3) Functionality. The organization must assure that systems are designed to user specifications
to fulfill business requirements.

4) Protectability. The organization must assure that a combination of physical and logical
controls prevents unauthorized access to system data.

5) Accountability. The organization must assure that transactions are processed under firm
principles of data ownership, identification, and authentication.

4. COBIT 4.1

a. Specifically for IT controls, the best-known framework is Control Objectives for Information
and Related Technology (COBIT).

* Version 4.1, described here, has been very successful and is not necessarily superseded by
COBIT 5.

b. The COBIT model for IT governance contains five focus areas:

1) Strategic alignment

2) Value delivery

3) Resource management

4) Risk management

5) Performance measurement

c. The COBIT framework embodies four characteristics:

Business-focused
Process-oriented
Controls-based
Measurement-driven

d. Each characteristic contains multiple components.

1) Business-focused

a) This characteristic lists seven distinct but overlapping information criteria: effectiveness, efficiency, confidentiality, integrity, availability, compliance, and reliability.

b) Business goals must feed IT goals, which in turn allow the organization to design the appropriate enterprise architecture for IT.

c) IT resources include applications, information, infrastructure, and people.

2) Process-oriented. This part of the model contains four domains:

a) Plan and organize

b) Acquire and implement

c) Deliver and support

d) Monitor and evaluate

3) Controls-based. “An IT control objective is a statement of the desired result or purpose to be achieved by implementing control procedures in a particular IT activity.” COBIT describes controls in three areas:

a) Process controls. “Operational management uses processes to organize and manage ongoing IT activities.”

b) Business controls. These impact IT at three levels: the executive management level,
the business process level, and the IT support level.

c) IT general controls and application controls. This dichotomy for IT controls is of very
long standing.

i) “General controls are those controls embedded in IT processes and services . . . . Controls embedded in business process applications are commonly referred to as application controls.”

4) Measurement-driven

a) The centerpiece of the COBIT framework in this area is the maturity model.

i) “The organization must rate how well managed its IT processes are. The
suggested scale employs the rankings of non-existent, initial, repeatable,
defined, managed, and optimized.”

b) Performance measurement

Goals and metrics are defined in COBIT at three levels:

IT goals and metrics: what the business expects from IT.
Process goals and metrics: what the IT processes must deliver to support IT’s objectives.
Process performance metrics: how well the processes are performing.

5. COBIT 5 -- A Framework for IT Governance and Management


a. COBIT 4.1 is the best-known control and governance framework that addresses information
technology.

1) In its earliest versions, COBIT was focused on controls for specific IT processes.

2) Over the years, information technology has gradually come to pervade every facet of the
organization’s operations. IT can no longer be viewed as a function distinct from other aspects of
the organization.

a) The evolution of this document has reflected this change in the nature of IT within the
organization.

b. COBIT 5 -- Five Key Principles

1) Principle 1: Meeting Stakeholder Needs

a) COBIT 5 asserts that value creation is the most basic stakeholder need. Thus, the creation of
stakeholder value is the fundamental goal of any enterprise, commercial or not.

i) Value creation in this model is achieved by balancing three components:

● Realization of benefits

● Optimization (not minimization) of risk

● Optimal use of resources

ii) COBIT 5 also recognizes that stakeholder needs are not fixed. They evolve under the
influence of both internal factors (e.g., changes in organizational culture) and external
factors (e.g., disruptive technologies).

● These factors are collectively referred to as stakeholder drivers.

2) Principle 2: Covering the Enterprise End-to-End

a) COBIT 5 takes a comprehensive view of all of the enterprise’s functions and processes.
Information technology pervades them all; it cannot be viewed as a function distinct from other
enterprise activities.

i) Thus, IT governance must be integrated with enterprise governance.

b) IT must be considered enterprise-wide and end-to-end, i.e., all functions and processes that
govern and manage information “wherever that information may be processed.”

3) Principle 3: Applying a Single, Integrated Framework

a) In acknowledgment of the availability of multiple IT-related standards and best practices, COBIT 5 provides an overall framework for enterprise IT within which other standards can be consistently applied.

b) COBIT 5 was developed to be an overarching framework that does not address specific
technical issues; i.e., its principles can be applied regardless of the particular hardware and
software in use.
4) Principle 4: Enabling a Holistic Approach

a) COBIT 5 describes seven categories of enablers that support comprehensive IT governance and management:

i) Principles, policies, and frameworks

ii) Processes

iii) Organizational structures

iv) Culture, ethics, and behavior

v) Information

vi) Services, infrastructure, and applications

vii) People, skills, and competencies

5) Principle 5: Separating Governance from Management

a) The complexity of the modern enterprise requires governance and management to be treated
as distinct activities.

i) In general, governance is the setting of overall objectives and monitoring progress toward
those objectives. COBIT 5 associates governance with the board of directors.

● Within any governance process, three practices must be addressed: evaluate, direct, and
monitor.

6. GTAG

a. Beginning in 2005, The IIA replaced its Practice Advisories on IT topics with an extremely
detailed series of documents known collectively as the Global Technology Audit Guide (GTAG).

b. GTAG 1 recognizes three “families” of controls:

1) General and application controls

2) Preventive, detective, and corrective controls

3) Governance, management, and technical controls

Governance controls: linked with the concepts of corporate governance.
Management controls: deployed as a result of deliberate actions by management to recognize risks.
Technical controls: the technologies in use within the organization’s IT infrastructures.

12.3 ASPECTS OF AUTOMATED INFORMATION PROCESSING

1. Characteristics of Automated Processing


a. The use of computers in business information systems has fundamental effects on the nature of
business transacted, the procedures followed, the risks incurred, and the methods of mitigating those
risks.

b. Transaction Trails

1) A complete trail useful for audit and other purposes might exist for only a short time or only in computer-readable form, depending on whether transactions are batched prior to processing or processed immediately.

c. Uniform Processing of Transactions

1) Uniform processing eliminates random clerical errors, but programming errors cause systematic errors in all like transactions.

d. Segregation of Functions

1) Many controls once performed by separate individuals may be concentrated in computer systems. Hence, an individual who has access to the computer may perform incompatible functions. As a result, other controls may be necessary to achieve the control objectives ordinarily accomplished by segregation of functions.

e. Potential for Errors and Fraud

1) The potential for individuals, including those performing control procedures, to gain
unauthorized access to data, to alter data without visible evidence, or to gain access (direct or
indirect) to assets may be greater in computer systems. Decreased human involvement in
handling transactions can reduce the potential for observing errors and fraud. Errors or fraud in
the design or changing of application programs can remain undetected for a long time.

f. Potential for Increased Management Supervision

1) Computer systems offer management many analytical tools for review and supervision of
operations. These additional controls may enhance internal control. For example, traditional
comparisons of actual and budgeted operating ratios and reconciliations of accounts are often
available for review on a more timely basis. Furthermore, some programmed applications
provide statistics regarding computer operations that may be used to monitor actual processing.

g. Initiation or Subsequent Execution of Transactions by Computer

1) Certain transactions may be automatically initiated, or certain procedures required to execute a transaction may be automatically performed, by a computer system. The authorization of these transactions or procedures may not be documented in the same way as those in a manual system, and management’s authorization may be implicit in its acceptance of the design of the system.

h. Dependence of Controls in Other Areas on Controls over Computer Processing

1) Computer processing may produce reports and other output that are used in performing
manual control procedures. The effectiveness of these controls can be dependent on the
effectiveness of controls over the completeness and accuracy of computer processing. For
example, the effectiveness of a manual review of a computer-produced exception listing is
dependent on the controls over the production of the listing.
2. Two Basic Processing Modes

Batch processing: files are updated on a delayed basis (e.g., payroll).
Online, real-time processing: files are updated immediately (e.g., an airline reservation system).

12.4 IT CONTROLS

1. Classification of Controls

a. Version 4.1 of Control Objectives for Information and Related Technology (COBIT) has achieved
widespread acceptance. It provides a model for the impact of an organization’s internal controls on IT:

Executive management level: business objectives are set, policies are established, and decisions are made on how to deploy and manage the resources of the enterprise to execute the enterprise strategy.
Business process level: most business processes are automated and integrated with IT application systems, resulting in many of the controls at this level being automated as well. These controls are known as application controls.
IT support level: to support the business processes, IT provides IT services, usually in a shared service to many business processes, as many of the development and operational IT processes are provided to the whole enterprise, and much of the IT infrastructure is provided as a common service (e.g., networks, databases, operating systems, and storage). The controls applied to all IT service activities are known as IT general controls.

b. The organization must implement appropriate controls at each of the three levels described in the
COBIT 4.1 model.

1) For example, at the executive level, an IT steering committee should be established, composed of senior managers from both the IT function and the end-user functions. The committee approves development projects, assigns resources, and reviews their progress.

2) The steering committee also ensures that requests for new systems are aligned with the
overall strategic plan of the organization.

c. The interaction between general and application controls is crucial for an audit. According to COBIT 4.1, the reliable operation of general controls is necessary for reliance to be placed on application controls.

d. The following are the categories of general controls: systems development, change management, security, and computer operations (e.g., parity checks and echo checks).
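The parity check mentioned above can be sketched as follows. The sample bits are invented; real hardware performs this at the level of memory words or transmitted bytes:

```python
# Sketch of an even-parity check: an extra bit makes the count of 1-bits
# even, so a single flipped bit is detectable afterward.
def even_parity_bit(bits):
    return sum(bits) % 2          # 1 if the data has an odd number of 1s

data = [1, 0, 1, 1, 0, 1, 0]
stored = data + [even_parity_bit(data)]   # append the parity bit

def check(word):
    return sum(word) % 2 == 0     # total count of 1-bits must be even

print(check(stored))              # True: no error detected
corrupted = stored.copy()
corrupted[2] ^= 1                 # flip one bit
print(check(corrupted))           # False: error detected
```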

e. The following are examples of types of application controls: completeness, accuracy, validity,
authorization, and segregation of duties.

2. Application Controls: Input – Process – Output


a. They should provide reasonable assurance that the recording, processing, and reporting of data are
properly performed.

Input controls
- Batch input controls: financial totals, record counts, hash totals.
- Online input controls: preformatting, field checks, validity checks, limit checks, self-checking digits, sequence checks, zero-balance checks.
Processing controls: ensure that data are complete and accurate during updating.
Output controls: ensure that processing results are complete, accurate, and properly distributed. An important output control is user review.
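The batch input controls above can be sketched as follows. The records and field names are invented; note that a hash total is a sum of a field (here, account numbers) that has no business meaning by itself:

```python
# Sketch of batch input controls: a record count, a financial total, and a
# hash total are computed when the batch is prepared and again after input,
# then compared to detect lost or altered records.
batch = [
    {"acct": 1011, "amount": 250.00},
    {"acct": 2044, "amount": 75.50},
    {"acct": 3302, "amount": 40.00},
]

def control_totals(records):
    return {
        "record_count":    len(records),
        "financial_total": sum(r["amount"] for r in records),
        "hash_total":      sum(r["acct"] for r in records),
    }

expected = control_totals(batch)   # computed when the batch is prepared
actual   = control_totals(batch)   # recomputed after input/processing
assert actual == expected          # any discrepancy flags the batch
print(expected)  # {'record_count': 3, 'financial_total': 365.5, 'hash_total': 6357}
```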

STUDY UNIT THIRTEEN

IT SYSTEMS AND BUSINESS CONTINUITY

13.1 FUNCTIONAL AREAS OF IT OPERATIONS

1. Segregation of Duties

Traditional segregation: authorization, recording, and access to assets.
IT segregation: programming, operating, and library functions.

a) If the same person provides the input and receives the output for this process, a significant control weakness exists. Accordingly, certain tasks should not be combined.
b) Compensating controls may be necessary, such as library controls, effective supervision, and rotation of personnel. Segregating test programs from production programs makes concealment of unauthorized changes in production programs more difficult.

2. Responsibilities of IT Personnel

Systems analysts Qualified to analyze and design computer information systems


Database administrator Responsible for developing and maintaining the database and for
establishing controls to protect its integrity (only the DBA should be
able to update data dictionaries)
Programmers Design, write, test, and document the specific programs according to
specifications developed by the analysts
Webmaster Responsible for the content of the organization’s website
Operators Responsible for the day-to-day functioning of the data center
 Computer operators should not have programming knowledge
or access to documentation not strictly necessary for their
work
Help desks Log reported problems, resolve minor problems, and forward more
difficult problems to the appropriate information systems resources,
such as a technical support unit or vendor assistance
Information security officers In charge of developing information security policies, commenting on
security controls in new applications, and monitoring and investigating
unsuccessful login attempts
Network technicians Maintain devices that interconnect the organization’s computers and
the organization’s connection to other networks, such as the Internet
End users Should be able to change production data but not programs

13.2 ENTERPRISE-WIDE RESOURCE PLANNING (ERP)

1. Overview

a. ERP is intended to integrate enterprise-wide information systems across the organization by creating
one database linked to all of the entity’s applications.

b. ERP connects all functional subsystems (human resources, the financial accounting system,
production, marketing, distribution, purchasing, receiving, order processing, shipping, etc.) and also
connects the organization with its suppliers and customers.

1) ERP facilitates demand analysis and materials requirements planning.

2) By decreasing lead times, it improves just-in-time inventory management.

3) ERP’s coordination of all operating activities permits flexible responses to shifts in supply and
demand.

c. The disadvantages of ERP are its extent and complexity, which make customization of the software
difficult and costly.

d. The benefits of ERP may significantly derive from the required business process reengineering.

1) Implementing an ERP system is likely to encounter significant resistance because of its
comprehensiveness. Most employees will have to change ingrained habits and learn to use new
technology. Hence, successful implementation requires effective change management.

2. Functions of a Traditional ERP

a. Materials requirements planning (MRP) was an early attempt to create an integrated computer-based
information system. It is a push system used to plan and control materials in a production setting.

b. Manufacturing resource planning (MRP II) continued the evolution begun with MRP.

4. Configuration

b. An advantage of an ERP system is the elimination of data redundancy through the use of a central
database. In principle, information about an item of data is stored once, and all functions have access to
it.

1) Thus, when the item (such as a price) is updated, the change is effectively made for all
functions. The result is reliability (data integrity).

a) If an organization has separate systems for its different functions, the item would have to be updated
wherever it was stored. Failure of even one function to update the item would cause loss of data
integrity. For example, considerable inefficiency may arise when different organizational subunits (IT,
production, marketing, accounting, etc.) have different data about prices and inventory availability.
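The single-database idea can be illustrated with an in-memory SQLite table shared by all "functions"; the table, item, and prices are invented for the example:

```python
import sqlite3

# One central table of prices; every function queries the same row,
# so a single update is seen everywhere at once (data integrity).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE prices (item TEXT PRIMARY KEY, price REAL)")
conn.execute("INSERT INTO prices VALUES ('widget', 9.99)")

def price_seen_by(function_name: str) -> float:
    """Marketing, production, accounting, etc. all read the same record."""
    return conn.execute(
        "SELECT price FROM prices WHERE item = 'widget'").fetchone()[0]

# One update in the central database; no per-function copies to reconcile.
conn.execute("UPDATE prices SET price = 10.49 WHERE item = 'widget'")
```

With separate per-function systems, the same update would have to be repeated in each copy, and a single missed copy would break data integrity.
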
5. Implementation

a. The initial step is to perform strategic planning and to organize a project team that is
representative of affected employee groups.

b. The second step is to choose ERP software and a consulting firm.

c. The third and longest step is pre-implementation.

1) The length of the process design phase is a function of the extent of

a) Reengineering

b) Customization of the software

2) Data conversion may be delayed because all departments must agree on the meaning of every
data field, i.e., what values will be considered valid for that field.

3) The ERP system and its interfaces must be tested.

e. Implementation (“going live”) is not the final step. Follow-up is necessary to monitor the activities of
the numerous employees who have had to change their routines.

f. Training should be provided during implementation not only regarding technical matters but also to
help employees understand the reasons for process changes.

6. Costs

7. Benefits

a. The benefits of an ERP system may be hard to quantify.

13.3 WEB INFRASTRUCTURE

1. Overview

a. The Internet is a network of networks all over the world.

b. Most Internet users obtain connections through Internet service providers (ISPs) that in turn connect
either directly to a backbone or to a larger ISP with a connection to a backbone.

c. The three main parts of the Internet:

Servers: hold information
Clients: view the information
Transmission Control Protocol/Internet Protocol (TCP/IP): the suite of
protocols that connects the two

d. A gateway translates between two or more different protocol families and makes connections
between dissimilar networks possible.

2. Servers
3. Mainframes to Desktops

a. Large mainframe computers

1) All processing and data storage were done in a single, central location.

2) Communication with the mainframe was accomplished with the use of dumb terminals,
simple keyboard-and-monitor combinations with no processing power (i.e., no CPU) of their
own.

b. Networking developed to connect computers in separate buildings and eventually separate countries.

c. Improvements in technology have led to increasing decentralization of information processing.

4. Languages and Protocols

Hyperlink: allows users to click on a word or phrase on their screens and have
another document automatically displayed
Hypertext Markup Language (HTML): a simple coding mechanism for creating
hyperlinked documents
Hypertext Transfer Protocol (HTTP): allows hyperlinking across the Internet
rather than on just a single computer
Extensible Markup Language (XML): a variation of HTML; whereas HTML uses fixed
codes (tags) to describe how web pages and other hypermedia documents should
be presented, XML uses codes that are extensible, not fixed
Extensible Business Reporting Language (XBRL): an XML-based specification for
financial statements developed under the leadership of the AICPA
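The contrast with HTML is that XML tags are defined by the document's author. A small illustration, parsing invented <invoice> tags with Python's standard library:

```python
import xml.etree.ElementTree as ET

# In XML the author defines the tags; <invoice> and <amount> are not part
# of any fixed vocabulary, unlike HTML's <p> or <table>.
doc = """<invoice number="42">
  <customer>Acme Corp</customer>
  <amount currency="USD">1250.00</amount>
</invoice>"""

root = ET.fromstring(doc)
customer = root.find("customer").text          # "Acme Corp"
amount = float(root.find("amount").text)       # 1250.0
```

XBRL applies the same idea to financial reporting, defining extensible tags for items such as assets and revenues.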

5. Uses

Intranet: an internal network built with Internet technology; access is restricted to those
within the organization
Extranet: extends the intranet to authorized outsiders after appropriate identification; it uses
the public Internet as its transmission medium but requires a password for access

13.4 IT SYSTEM COMMUNICATIONS

1. Systems Software

Systems software: performs the fundamental tasks needed to manage computer resources
Operating system: an interface among users, application software, and the computer's hardware
Application software: performs specific user tasks (e.g., word processing, accounting)
Utility programs: perform basic data maintenance tasks (sorting, merging, copying)

2. Network Equipment

Client devices: the computers and other devices users employ to access the network
Network interface card (NIC): the circuit board that connects a device to the network medium
3. Data and Network Communication

a. A protocol is a set of formal rules or conventions governing communication between a

sending and receiving device.

b. A network consists of multiple connected computers at multiple locations.

c. A local area network (LAN) connects devices within a single office or home or among
buildings in an office park.

1) A peer-to-peer network operates without a mainframe or file server, but does processing
within a series of personal computers.

Peer-to-peer network:
- Operates without a mainframe or file server; processing is done within a
series of personal computers
- Increasingly difficult to administer with each added device

Client-server model:
- Servers are centrally located and devoted to the functions needed by all
network users (differs from peer-to-peer networks in that the devices play
more specialized roles)
- The most cost-effective and easy-to-administer arrangement for LANs
- The key to the client-server model is that it runs processes on the platform
most appropriate to that process while attempting to minimize traffic over
the network
- Security for client-server systems may be more difficult than in a highly
centralized system because of the numerous access points
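A minimal client-server exchange over TCP/IP can be sketched with Python's standard socket module; the "service" here (upper-casing a request) stands in for any centralized function a real server would provide:

```python
import socket
import threading

def serve_once(srv: socket.socket) -> None:
    """Server: centrally located, performing one service for one client."""
    conn, _ = srv.accept()
    with conn:
        conn.sendall(conn.recv(1024).upper())  # the "service"
    srv.close()

def client_request(port: int, message: bytes) -> bytes:
    """Client: sends a request and receives the processed result."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect(("127.0.0.1", port))
        cli.sendall(message)
        return cli.recv(1024)

# Bind and listen first (port 0 lets the OS pick a free port), then accept
# in a background thread while the main thread acts as the client.
server_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server_sock.bind(("127.0.0.1", 0))
server_sock.listen(1)
port = server_sock.getsockname()[1]
threading.Thread(target=serve_once, args=(server_sock,), daemon=True).start()

reply = client_request(port, b"inventory query")
```

The processing runs on the server, the platform appropriate to the task, while only the short request and reply cross the network.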

Metropolitan area network (MAN): connects devices across an urban area, for
instance, two or more office parks
Wide area network (WAN): consists of a group of LANs operating over widely
separated locations
Value-added networks (VANs): private networks that provide their customers
with reliable, high-speed, secure transmission of data by offering (a) error
detection and correction services, (b) electronic mailbox facilities for EDI
purposes, (c) EDI translation, and (d) security for email and data
transmissions
Virtual private networks (VPNs): a company connects each office or LAN to a
local Internet service provider and routes data through the shared, low-cost
public Internet (a relatively inexpensive way to solve the problem of the
high cost of leased lines)
Private branch exchange (PBX): a specialized computer used for both voice and
data traffic
Public-switched networks: use public telephone lines to carry data; economical,
but the quality of data transmission cannot be guaranteed, and security is
questionable

4. Classifying Networks by Protocol

a. A protocol is a set of standards for message transmission among the devices on the network

b. LAN Protocols

1) Ethernet has been the most successful protocol for LAN transmission.

c. Switched Networks

d. Routed Networks

1) Routing is what makes the Internet possible.

a) Transmission Control Protocol/Internet Protocol (TCP/IP) is the suite of routing
protocols that makes it possible to interconnect many thousands of devices from dozens
of manufacturers all over the world through the Internet.

e. Wireless Networks

1) Radio-frequency identification (RFID) technology uses a microchip combined with an
antenna to store data about a product, pet, vehicle, etc. Common applications include

a) Inventory tracking

b) Lost pet identification

c) Tollbooth collection

13.5 SOFTWARE LICENSING

1. Rights Pertaining to Software

a. Software piracy is a problem for vendors. Any duplication of the software beyond what is allowed in
the software license agreement is illegal.

1) To avoid legal liability, controls also should be implemented to prevent use of unlicensed
software that is not in the public domain. A software licensing agreement permits a user to
employ either a specified or an unlimited number of copies of a software product at given
locations, at particular machines, or throughout the organization. The agreement may restrict
reproduction or resale, and it may provide subsequent customer support and product
improvements.

b. Diskless workstations increase security by preventing the copying of software to a flash drive from a
workstation. This control not only protects the company’s interests in its data and proprietary programs
but also guards against theft of licensed third-party software.

c. To shorten the installation time for revised software in a network, an organization may implement
electronic software distribution (ESD), which is the computer-to-computer installation of software on
workstations. Instead of weeks, software distribution can be accomplished in hours or days and can be
controlled centrally. Another advantage of ESD is that it permits the tracking of PC program licenses.

13.6 CONTINGENCY PLANNING

1. Overview

a. The information security goal of data availability is primarily the responsibility of the IT function.

b. Contingency planning

Disaster recovery: the process of resuming normal information processing
operations after the occurrence of a major interruption
Business continuity: the continuation of business by other means during the
period in which computer processing is unavailable or less than normal

2. Backup and Rotation

a. Periodic backup and offsite rotation of computer files is the most basic part of any disaster recovery
or business continuity plan

c. The offsite location must be temperature- and humidity-controlled and guarded against physical
intrusion. Just as important, it must be far enough away from the site of main operations not to be
affected by the same natural disaster
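A periodic backup with rotation can be sketched as a simple retention policy; the seven-copy retention count and the date-stamped file-naming scheme are assumptions for illustration:

```python
import shutil
from datetime import date
from pathlib import Path

RETAIN = 7  # keep the seven most recent daily backups (an assumed policy)

def rotate_backup(data_file: Path, offsite_dir: Path, today: date) -> None:
    """Copy today's file to the offsite directory, then prune old copies."""
    offsite_dir.mkdir(parents=True, exist_ok=True)
    dest = offsite_dir / f"{data_file.stem}-{today.isoformat()}{data_file.suffix}"
    shutil.copy2(data_file, dest)
    # ISO date suffixes make filenames sort chronologically.
    backups = sorted(offsite_dir.glob(f"{data_file.stem}-*{data_file.suffix}"))
    for old in backups[:-RETAIN]:
        old.unlink()
```

In practice the offsite directory would be a physically remote, environmentally controlled location, as described above, not a folder on the same machine.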

3. Risk Assessment Steps (Recovery)

a. Identify and prioritize the organization’s critical applications

b. Determine the minimum recovery time frames and minimum hardware requirements

c. Develop a recovery plan

4. Disaster Recovery Plan (DRP)

a. Disaster recovery is the process of regaining access to data (e.g., hardware, software, and records),
communications, work areas, and other business processes.

b. Thus, a DRP that is established and tested must be developed in connection with the business
continuity plan

5. Contingencies with Data Center Available

Cold site: a shell facility with space, power, and environmental controls but
no installed computer equipment; the cheapest option, with the longest
recovery time
Warm site: a facility with some equipment and connections installed but not
the complete configuration; faster to activate than a cold site
Hot site: a fully operational facility with equipment and current software,
ready to take over processing within hours
Mirror site: a fully redundant facility with real-time replication of data;
recovery is nearly immediate

7. Other Technologies for Restoration of Processing

8. Business Continuity Management (BCM) Overview


a. The objective of BCM is to restore critical processes and to minimize financial and other effects of a
disaster or business disruption

b. BCM is the third component of an emergency management program

9. Elements of BCM

a. Management Support

b. Risk Assessment and Mitigation

c. Business Impact Analysis

e. Awareness, Exercises, and Maintenance

1) Education and awareness (including training exercises) are vital to BCM

2) The BCM capabilities and documentation must be maintained to ensure that they remain
effective and aligned with business priorities.
