MIS and ERP Material for MBA 2nd Semester
By Mr. Satyabrata Kanungo


MODULE-1: Management Information System
A management information system (MIS) is a system that provides the information needed to manage an organization effectively. Management information systems are regarded as a subset of a business's overall internal control procedures, covering the application of people, documents, technologies, and procedures used by management accountants to solve business problems such as costing a product or service, or shaping a business-wide strategy. Management information systems are distinct from regular information systems in that they are used to analyze other information systems applied in the operational activities of the organization. Academically, the term commonly refers to the group of information-management methods tied to the automation or support of human decision making, e.g. decision support systems, expert systems, and executive information systems.

An MIS is a planned system for the collection, processing, storage and dissemination of data in the form of information needed to carry out the management functions. In a way, it is a documented report of the activities that were planned and executed. According to Philip Kotler, "A marketing information system consists of people, equipment, and procedures to gather, sort, analyze, evaluate, and distribute needed, timely, and accurate information to marketing decision makers."

The role of a management information system is to provide a manager with sufficient information to make informed decisions in carrying out these functions. The best definition of an MIS is: the role of a management information system is to convert data from internal and external sources into information that can be used to aid effective decision making for planning, directing and controlling.
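The data-to-information conversion at the heart of this definition can be illustrated with a minimal sketch. All names and figures below are invented for the example; the point is only that raw transaction records (data) become a per-region summary (information a manager can act on).

```python
# Hypothetical illustration: raw transaction records (the "data") are
# aggregated into per-region totals (the "information" a manager needs
# for planning, directing and controlling). Figures are invented.

def summarize_sales(transactions):
    """Aggregate raw (region, amount) records into per-region totals."""
    report = {}
    for region, amount in transactions:
        report[region] = report.get(region, 0) + amount
    return report

raw_data = [("East", 1200), ("West", 800), ("East", 400)]
report = summarize_sales(raw_data)
print(report)  # {'East': 1600, 'West': 800}
```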


Organizational structure
• An organizational structure consists of activities such as task allocation, coordination and supervision, which are directed towards the achievement of organizational aims. It can also be considered as the viewing glass or perspective through which individuals see their organization and its environment.
• Many organizations have hierarchical structures, but not all.
• An organization can be structured in many different ways, depending on its objectives. The structure of an organization determines the modes in which it operates and performs.
• Organizational structure allows the express allocation of responsibilities for different functions and processes to different entities such as the branch, department, workgroup and individual.
• Organizational structure affects organizational action in two big ways. First, it provides the foundation on which standard operating procedures and routines rest. Second, it determines which individuals get to participate in which decision-making processes, and thus to what extent their views shape the organization's actions.


Information system
An information system (IS), or application landscape, is any combination of information technology and people's activities using that technology to support operations, management, and decision-making. In a very broad sense, the term information system is frequently used to refer to the interaction between people, algorithmic processes, data and technology. In this sense, the term refers not only to the information and communication technology (ICT) an organization uses, but also to the way in which people interact with this technology in support of business processes.

Information systems are implemented within an organization for the purpose of improving the effectiveness and efficiency of that organization. The capabilities of the information system and the characteristics of the organization, its work systems, its people, and its development and implementation methodologies together determine the extent to which that purpose is achieved.

Components: an information system consists of computers, instructions, stored facts, people and procedures.

Changes in the Business Environment
In order to understand why the number of students choosing to major in accounting has decreased, and why professionals with accounting degrees would not major in accounting again, it is necessary to understand the changes that have been taking place in business and how these changes have impacted business and accounting education. For many years, business relied on accountants to prepare financial information for internal and external decision making, to audit the fairness of that information and to assist in fulfilling regulatory and tax-reporting requirements. Information was expensive, and understanding how to prepare accurate financial reports required expertise that could only be developed through rigorous accounting education or relevant experience. Rarely did an individual or institutional investor have sufficient power to influence management or require that specific information be provided. Organizational threats came largely from a few domestic competitors. Because information preparation and dissemination was expensive, product life cycles and competitive advantages could be managed effectively, and inefficiencies were not readily observable.

Drivers of Change
At least three major developments have occurred that have dramatically changed the business environment for which we prepare graduates.

• First, technology has been developed that has made information preparation and dissemination inexpensive. This technology has taken the form of low-cost, high-speed digital and cable video and data transmission; hardware that produces information quickly and easily; and software that puts preparation, data, and communication tools in the hands of individuals who previously did not have access to needed information. With these technology developments, time, space, and other temporal constraints on information have been reduced and, in many cases, eliminated.

• A second major development that has significantly impacted business has been globalization. Faster methods of transportation, together with instantaneous information, have allowed the world to become one giant marketplace. Consumers can now buy products from foreign firms as easily as they can from a local store. Organizations such as General Motors have to worry not only about what Chrysler and Ford are doing, but also about what Toyota, Volkswagen, and BMW are doing. In fact, Chrysler is not just "Chrysler" anymore; it is now a conglomeration of European, North American, and Asian manufacturers known as DaimlerChrysler. Instead of having only two major American competitors, General Motors and all other business organizations now have to compete with similar companies throughout the world. In addition, with the increased availability of inexpensive information, more is known about these competitors, and about General Motors itself, than ever before. If a General Motors product has deficiencies, for example, the world knows about and can act on those problems instantly.

Organization of Information Systems and Services
Chapter Goals
• Know the different ways ISs are organized
• Understand each architecture's advantages and disadvantages
• Explain the importance of collaboration between IS managers and line managers
• Explain chargeback
• List career paths

The way information and informational resources are organized affects an organization's efficiency and effectiveness.

Architecture and Management
Information system architecture refers to the physical layout of computers and data communications networks. The management structure normally reflects the systems architecture.

Centralized – strict vertical hierarchy (the mainframe's geometric power/cost ratio)
Advantage – high degree of control
Disadvantages – inflexibility:
o Information could be locked in a proprietary format.
o There could be few tools with which to develop applications.

Decentralized – more authority/responsibility given to lower-level managers (organized by department)
• Increases departmental independence in organizing and utilizing their information systems
• Tends to be more responsive
Disadvantage – it can be difficult to share applications, information, and expertise; consequently, decentralization can be more expensive and carry more risk.
The text defines distributed systems architecture, then says it is the same as a connected form of decentralized architecture.

Centralized Advantages
• Economies of scale
• Standardized hardware and software
• Easier training
• Encouragement of common reporting systems
• Effective planning of shared systems
• Easier strategic planning
• Efficient IS personnel use (though perhaps less effective)
• Tight control

Decentralized Advantages
• Better business fit
• Timely responsiveness
• End-user empowerment (greater availability of application development tools)
• More innovative
• More standards in information format, promoting greater information flow
• Accommodation of a decentralized, enterprise-wide management style

Information System Organizational Trends
• Downsizing – less "big iron" (mainframes), more servers
• Reengineering – defined as evaluating and revising business processes to improve quality and/or reduce costs
• Standardizing – more use of open standards such as TCP/IP, SQL, and HTTP
• Internet – increased utilization of internet/intranet/extranet technologies
Note that the decentralized structure is also known as functional organization.

IS Organization
The IS organization tends to mirror the general layout of the organization's information systems architecture. Central IS management usually includes a steering committee with representatives from a variety of key business units. It establishes projects for systems development and implementation of communications networks, considers and prioritizes requests for new systems, and commits funds to projects. On the other hand, when only a central systems department is available, business units often find themselves overly dependent on, and at times resentful of, the central unit, over which they have no control although they depend upon it for their success. In functional IS management, funds for development and maintenance of ISs always come from the unit's budget. Many companies use elements of both central and functional IS management.

Challenges for IS Managers and Line Managers
At the line level:
• A broad understanding of the business activities
• Prompt response to the information needs of the business unit
• Clear explanation of what the technology can and cannot do for the business unit
• Straightforward budgeting
• Reference personnel
• Chargeback

Career Paths
• Programmer
• Programmer/Analyst
• DBA
• DB Designer
• Web Specialist

• Networking Specialist
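Chargeback, listed among the chapter goals above, is the practice of billing the central IS department's costs back to the business units that consume its services, usually in proportion to measured usage. A minimal sketch of the idea, with invented figures and unit names:

```python
# Hypothetical chargeback sketch: a central IS cost pool is allocated
# to business units in proportion to their measured usage (e.g. CPU
# hours or help-desk tickets). All figures and unit names are invented.

def chargeback(total_cost, usage_by_unit):
    """Split total_cost across units in proportion to their usage."""
    total_usage = sum(usage_by_unit.values())
    return {unit: total_cost * used / total_usage
            for unit, used in usage_by_unit.items()}

bills = chargeback(100_000, {"Sales": 50, "Finance": 30, "HR": 20})
print(bills)  # {'Sales': 50000.0, 'Finance': 30000.0, 'HR': 20000.0}
```

Usage-based allocation like this is what gives business units visibility into, and some control over, the IS costs they generate under a centralized architecture.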

The Organization
A social unit of people systematically structured and managed to meet a need or to pursue collective goals on a continuing basis. All organizations have a management structure that determines relationships between functions and positions, and subdivides and delegates roles, responsibilities, and authority to carry out defined tasks. Organizations are open systems in that they affect and are affected by the environment beyond their boundaries.

Basically, an organization in its simplest form (and not necessarily a legal entity, e.g., corporation or LLC) is a person or group of people intentionally organized to accomplish an overall, common goal or set of goals. Business organizations can range in size from one person to tens of thousands.

There are several important aspects to consider about the goals of a business organization. These features are explicit (deliberate and recognized) or implicit (operating unrecognized, "behind the scenes"). Ideally, these features are carefully considered and established, usually during the strategic planning process. (Later, we'll consider dimensions and concepts that are common to organizations.)

• Members of the organization often have some image in their minds about how the organization should be working, how it should appear when things are going well.

• An organization operates according to an overall purpose, or mission.

• All organizations operate according to overall values, or priorities in the nature of how they carry out their activities. These values are the personality, or culture, of the organization.

Strategic Goals
Organizational members often work to achieve several overall accomplishments, or goals, as they work toward their mission.

Organizations usually follow several overall general approaches to reach their goals.

Systems and Processes that (Hopefully) Are Aligned With Achieving the Goals
Organizations have major subsystems, such as departments, programs, divisions, teams, etc. Each of these subsystems has a way of doing things that, along with the other subsystems, achieves the overall goals of the organization. Often, these systems and processes are defined by plans, policies and procedures. How you interpret each of the above major parts of an organization depends very much on your values and your nature. People can view organizations as machines, organisms, families, groups, etc. (We'll consider more about these metaphors later on in this topic in the library.)


Organizations as Systems (of Systems of Systems)
Organization as a System
It helps to think of organizations as systems. Simply put, a system is an organized collection of parts that are highly integrated in order to accomplish an overall goal. The system has various inputs, which are processed to produce certain outputs that, together, accomplish the overall goal desired by the organization. There is ongoing feedback among these various parts to ensure they remain aligned to accomplish the overall goal of the organization. There are several classes of systems, ranging from very simple frameworks all the way to social systems, which are the most complex. Organizations are, of course, social systems.

Systems have inputs, processes, outputs and outcomes. Inputs to the system include resources such as raw materials, money, technologies and people. These inputs go through a process where they're aligned, moved along and carefully coordinated, ultimately to achieve the goals set for the system. Outputs are tangible results produced by processes in the system, such as products or services for consumers. Another kind of result is outcomes, or benefits for consumers, e.g., jobs for workers, enhanced quality of life for customers, etc. Systems can be the entire organization, or its departments, groups, processes, etc.

Feedback comes from, e.g., employees who carry out processes in the organization, customers/clients using the products and services, etc. Feedback also comes from the larger environment of the organization, e.g., influences from government, society, economics, and technologies.

Each organization has numerous subsystems as well. Each subsystem has its own boundaries of sorts, and includes various inputs, processes, outputs and outcomes geared to accomplish an overall goal for the subsystem. Common examples of subsystems are departments, programs, projects, teams, processes to produce products or services, etc. Organizations are made up of people -- who are also systems of systems of systems -- and on it goes.
Subsystems are organized in a hierarchy needed to accomplish the overall goal of the overall system. The organizational system is defined by, e.g., its legal documents (articles of incorporation, bylaws, roles of officers, etc.), mission, goals and strategies, policies and procedures, operating manuals, etc. The organization is depicted by its organizational charts, job descriptions, marketing materials, etc. The organizational system is also maintained or controlled by policies and procedures, budgets, information management systems, quality management systems, performance review systems, etc.
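The input → process → output → feedback view of a system described above can be sketched as a toy model. Everything here is invented for illustration; the point is only the shape of the loop: inputs are transformed into outputs, and feedback compares the outputs against the goal so management can adjust the process.

```python
# A toy model of the systems view: inputs flow through a process to
# become outputs, and a feedback signal reports the gap between the
# outputs and the goal. All names and numbers are invented.

def run_system(inputs, process, goal):
    """Transform inputs into outputs and compute a feedback signal."""
    outputs = [process(i) for i in inputs]
    feedback = goal - sum(outputs)  # the gap management would act on
    return outputs, feedback

# Inputs: units of raw material; process: each unit yields 10 products.
outputs, feedback = run_system([2, 4, 6], lambda units: units * 10, goal=100)
print(outputs, feedback)  # [20, 40, 60] -20
```

A negative feedback value here means the system overshot its goal; a positive one means it fell short. Either way, the signal flows back to whoever controls the process, which is exactly the role the text assigns to feedback in an organizational system.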

Standard Planning Process is Similar to Working Backwards through the System
Remember how systems have inputs, processes, outputs and outcomes? One of the common ways that people manage systems is to work backwards from what they want the system to produce. This process is essentially the same as the overall, standard, basic planning process, which typically includes:
a) Establishing overall goals (it's best if goals are defined in measurable terms, so they usually are in terms of outputs; the overall impacts of goals are outcomes, a term increasingly used in nonprofits)
b) Associating smaller goals or objectives (or outputs?) along the way to each goal
c) Designing strategies/methods (or processes) to meet the goals and objectives
d) Identifying what resources (or inputs) are needed, including who will implement the methods and by when.

The five classical functions of a manager are:
1. Planning – the direction a company takes, e.g. diversifying, where to operate.
2. Organising – resources such as people, space, equipment and services.
3. Coordinating – the activities of various departments.
4. Decision-making – about the organisation, products or services made or sold, the employees, use of I.T.
5. Controlling – monitoring and supervising the activities of others.
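The backwards-planning sequence described earlier (goal → objectives → strategies → inputs) can be sketched as a simple data structure. The plan contents below are invented for illustration; the ordering of the keys mirrors the a)-d) steps, starting from the desired output and working back to the required resources.

```python
# Hypothetical plan illustrating "working backwards through the system".
# Keys follow the a)-d) planning sequence; all contents are invented.

plan = {
    "goal":       "ship 1,000 units this quarter",        # a) desired output
    "objectives": ["produce 250 units per month"],        # b) smaller goals
    "strategies": ["add a second shift"],                 # c) processes/methods
    "inputs":     ["12 assembly workers", "raw stock"],   # d) required resources
}

# Reading the plan top-down reproduces the backwards-planning sequence.
for step in ("goal", "objectives", "strategies", "inputs"):
    print(step, "->", plan[step])
```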

The role played by a manager in a business organization may be stated as follows.
(i) To Have Contacts: He has to establish and maintain contacts with many people both within and outside the business. The persons with whom he has regular contacts within the organization include his subordinates, fellow managers and so on. Government officials, suppliers etc. are the outsiders with whom the manager may have frequent contacts.
(ii) To Supervise: Every manager has to supervise the work of subordinates while the latter are doing their work and offer necessary help. Supervision is also needed to ensure that the subordinates do not waste their time during working hours.
(iii) To Attain Targets: Managers may work under pressure most of the time as they have targets to achieve. This is particularly true of production and sales managers, who are line managers.
(iv) To Delegate Authority: Managers have to get things done through their subordinates. For this they have to delegate authority to the latter to enable them to perform the tasks assigned. The managers must ensure that the authority delegated is just sufficient for the subordinates to carry out their duties. If authority exceeds responsibility there may be misuse of authority; on the other hand, if authority is inadequate, the subordinates may not be able to carry out the task.
(v) To Hold Meetings: Managers often have to hold meetings to put forth their views before their subordinates. Such meetings are also necessary to get feedback from the subordinates on the progress of their work. Managers of different departments may also have to meet at regular intervals to secure proper co-ordination and to review progress.
(vi) To Act as a Leader: As a leader, the manager has to set an example for his subordinates. He must be sincere, honest and committed to his work. Only then will he be able to guide and motivate the subordinates under him.
(vii) To Ensure Proper Use of Resources: The manager has to ensure that the organizational resources such as men, machines, materials and money are optimally utilized.
(viii) To Resolve Conflicts: Whenever there are conflicts between the employees over certain organizational matters, the manager is expected to resolve all such conflicts and arrive at an amicable solution.
(ix) To Undertake Trips: Managers, particularly those in charge of sales, may have to undertake business trips frequently and, as a result, may not always be able to remain in their hometown. In the same manner, managers cannot work strictly according to the working hours; they may sometimes have to work beyond working hours in view of a higher quantum of work.
(x) To Make Decisions: Managers also have to make certain routine decisions in connection with the daily operations of the business. Purchase of raw materials, payment of wages, sanctioning leave to subordinate staff, etc., are examples of such routine decisions.
(xi) To Handle Crises: The manager is also expected to handle crises that may arise in the organization. A strike call by the workers, a breakdown of machinery, or a fire accident in the godown or the workplace are examples of critical situations that may arise in any organization at any time. In such situations the manager has to act swiftly and wisely and find a remedy.
(xii) LEADER IS A REPRESENTATIVE OF SUBORDINATES
The leader is an intermediary between the work groups and top management. Such managers are called "linking pins" by Rensis Likert. As linking pins they serve to integrate the entire organization, and organizational effectiveness depends on the strength of these linking pins. The leader shows personal consideration for the employees and, as a representative, carries the voice of the subordinates to top management.

LEADER IS AN APPROPRIATE COUNSELLOR
Quite often people in the workplace need counselling to eliminate the emotional disequilibrium that is sometimes created in them. The leader removes barriers and stumbling blocks to effective performance. For instance, frustration resulting from a blocked need drive keeps an employee derailed from the working track. It is here that the leader comes in, renders wise counsel, releases the employee of the emotional tension and restores equilibrium.

USES POWER PROPERLY
If a leader is to effectively achieve the goals expected of him, he must have power and authority to act in a way that will stimulate a positive response from the workers. A leader, depending on the situation, exercises different types of power, viz. reward power and expert power. Besides the formal basis, the informal bases of power also have a powerful impact on organizational effectiveness. No leader is effective unless the subordinates obey his orders. Therefore, the leader uses appropriate power so that subordinates willingly obey orders and come forward with commitment.

LEADER MANAGES TIME WELL
Time is precious and vital but often overlooked in management. Three dimensions of time are prominent in the literature: boss-imposed time, system-imposed time and self-imposed time. Because the leader has thorough knowledge of the principles of time management, such as preparing time charts and scheduling techniques, he is in a position to utilize time productively in the organization.
STRIVES FOR EFFECTIVENESS
Quite frequently managers are workaholics, too busy with petty things to attend to the major determinants of effectiveness. To fill the gap, the leader throws his concerted efforts into bringing effectiveness by encouraging and nurturing teamwork, by better time management and by the proper use of power. Further, the leader provides an adequate reward structure to encourage the performance of employees. The leader delegates authority where needed and invites participation where possible to achieve better results. He also provides the workers with the necessary resources. By communicating to workers what is expected of them, the leader brings effectiveness to the organization. The above functions of the leader are by no means comprehensive, but they do suggest what leaders generally do.

MANAGING AND LEADING
Leading and managing are not synonymous. One popular way of distinguishing between managing and leading is brought out by the Latin terms dux and rex: dux is a leader, an activist, an innovator and often an inspirational type, while rex is a stabilizer or broker, a manager. More realistically, effective management requires good leadership. Bennis once commented, "There are many institutions I know that are very well managed but very poorly led." This statement clearly demonstrates that the difference between managing and leading is indeed considerable. Though a layman considers managing a broad term that includes the leading function, behaviourists advance the following points to marshal the differences between leading and managing.

RELATIONSHIPS
Managerial behaviour implies the existence of a manager-managed relationship, which arises within an organizational context. Leadership, on the other hand, can occur anywhere; it does not have to originate in an organizational context. For example, a mob can have a leader but cannot have a manager. Further, in an organization, informal groups have leaders, not managers.

SOURCES OF INFLUENCE
Another potential difference between leader and manager lies in their sources of influence. Authority is attached to the managerial position in the case of a manager, whereas a leader may not have authority but can receive power directly from his followers.
In other words, a manager obtains authority from the formal organization, while a leader obtains it from his followers. In rather pure terms, this is the difference between the formal authority theory and the acceptance theory of authority.

SANCTIONS
A manager has command over the allocation and distribution of sanctions. For example, a manager has control over positive sanctions such as promotions and awards for task performance and contribution to organizational objectives, and is also in a position to exercise negative sanctions, such as withholding promotions for mistakes. In sharp contrast, a leader has an altogether different type of sanction to exercise and grant: he can grant or withhold access to the very purpose of joining the group, its social satisfactions and related task rewards. These informal sanctions are relevant to individuals with belongingness or ego needs, whereas the organizational sanctions granted or exercised by managers are geared to the physiological and security needs of the individual.

ROLE CONTINUANCE
Another fundamental difference between managing and leading is role continuance. A manager may continue in office as long as his performance is satisfactory and acceptable to the organization. In sharp contrast, a leader maintains his position only through the day-to-day wish of his followers.

REASONS FOR FOLLOWING
Though in both managing and leading followers become involved, the reasons may differ. People follow managers because their job descriptions, supported by a system of rewards and sanctions, require them to follow, whereas people follow leaders on a voluntary basis. Further, if there are no followers, the leader no longer exists; but even if there are no followers, a manager may still be there.

Managers are organizational members who are responsible for the work performance of other organizational members. Managers have formal authority to use organizational resources and to make decisions. In organizations, there are typically three levels of management: top-level, middle-level, and first-level. These three main levels of managers form a hierarchy, in which they are ranked in order of importance. In most organizations, the number of managers at each level is such that the hierarchy resembles a pyramid, with many more first-level managers, fewer middle managers, and the fewest managers at the top level. Each of these management levels is described below in terms of possible job titles, primary responsibilities and the paths taken to reach these positions. Additionally, the management levels differ in the types of management tasks each performs and the roles managers take in their jobs. Finally, a number of changes occurring in many organizations are reshaping their management hierarchies, such as the increasing use of teams, the prevalence of outsourcing, and the flattening of organizational structures.

Top-level managers, or top managers, are also called senior management or executives. These individuals are at the top one or two levels in an organization, and hold titles such as: Chief Executive Officer (CEO), Chief Financial Officer (CFO), Chief Operational Officer (COO), Chief Information Officer (CIO), Chairperson of the Board, President, Vice President, Corporate head. Often, a set of these managers will constitute the top management team, which is composed of the CEO, the COO, and other department heads. Top-level managers make decisions affecting the entirety of the firm. Top managers do not direct the day-to-day activities of the firm; rather, they set goals for the organization and direct the company to achieve them. Top managers are ultimately responsible for the performance of the organization, and often, these managers have very visible jobs. Top managers in most organizations have a great deal of managerial experience and have moved up through the ranks of management within the company or in another firm. An exception to this is a top manager who is also an entrepreneur; such an individual may start a small company and manage it until it grows enough to support several levels of management. Many top managers possess an advanced degree, such as a Master of Business Administration, but such a degree is not required. Some CEOs are hired in from top management positions in other companies. Alternatively, they may be promoted from within and groomed for top management with management development activities, coaching, and mentoring. They may be tagged for promotion through succession planning, which identifies high-potential managers.

Middle-level managers, or middle managers, are those in the levels below top managers. Middle managers' job titles include: General manager, Plant manager, Regional manager, and Divisional manager. Middle-level managers are responsible for carrying out the goals set by top management. They do so by setting goals for their departments and other business units. Middle managers can motivate and assist first-line managers to achieve business objectives. Middle managers may also communicate upward, by offering suggestions and feedback to top managers. Because middle managers are more involved in the day-to-day workings of a company, they may provide valuable information to top managers to help improve the organization's bottom line. Jobs in middle management vary widely in terms of responsibility and salary. Depending on the size of the company and the number of middle-level managers in the firm, middle managers may supervise only a small group of employees, or they may manage very large groups, such as an entire business location. Middle managers may be employees who were promoted from first-level manager positions within the organization, or they may have been hired from outside the firm. Some middle managers may have aspirations to hold positions in top management in the future.


First-level managers are also called first-line managers or supervisors. These managers have job titles such as: Office manager, Shift supervisor, Department manager, Foreperson, Crew leader, Store manager. First-line managers are responsible for the daily management of line workers—the employees who actually produce the product or offer the service. There are first-line managers in every work unit in the organization. Although first-level managers typically do not set goals for the organization, they have a very strong influence on the company. These are the managers that most employees interact with on a daily basis, and if the managers perform poorly, employees may also perform poorly, may lack motivation, or may leave the company. In the past, most first-line managers were employees who were promoted from line positions (such as production or clerical jobs). Rarely did these employees have formal education beyond the high school level. However, many first-line managers are now graduates of a trade school, or have a two-year associate's or a four-year bachelor's degree from college.

Managers at different levels of the organization spend different amounts of time on the four managerial functions of planning, organizing, leading, and controlling. Planning is choosing appropriate organizational goals and the correct directions to achieve those goals. Organizing involves determining the tasks and the relationships that allow employees to work together to achieve the planned goals. With leading, managers motivate and coordinate employees to work together to achieve organizational goals. When controlling, managers monitor and measure the degree to which the organization has reached its goals. The degree to which top, middle, and supervisory managers perform each of these functions is presented in Exhibit 1. Note that top managers do considerably more planning, organizing, and controlling than do managers at any other level. However, they do much less leading. Most of the leading is done by first-line managers. The amount of planning, organizing, and controlling decreases down the hierarchy of management; leading increases as you move down the hierarchy of management.

Exhibit 1 Time Spent on Management Functions at Different Management Levels

In addition to the broad categories of management functions, managers in different levels of the hierarchy fill different managerial roles. These roles were categorized by researcher Henry Mintzberg, and they can be grouped into three major types: decisional, interpersonal, and informational.

Decisional roles require managers to plan strategy and utilize resources. There are four specific roles that are decisional. The entrepreneur role requires the manager to assign resources to develop innovative goods and services, or to expand a business. Most of these roles will be held by top-level managers, although middle managers may be given some ability to make such decisions. The disturbance handler corrects unanticipated problems facing the organization from the internal or external environment. Managers at all levels may take this role. For example, first-line managers may correct a problem halting the assembly line, or a middle-level manager may attempt to address the aftermath of a store robbery. Top managers are more likely to deal with major crises, such as requiring a recall of defective products. The third decisional role, that of resource allocator, involves determining which work units will get which resources. Top managers are likely to make large, overall budget decisions, while middle managers may make more specific allocations. In some organizations, supervisory managers are responsible for determining the allocation of salary raises to employees. Finally, the negotiator works with others, such as suppliers, distributors, or labor unions, to reach agreements regarding products and services. First-level managers may negotiate with employees on issues of salary increases or overtime hours, or they may work with other supervisory managers when needed resources must be shared. Middle managers also negotiate with other managers and are likely to work to secure preferred prices from suppliers and distributors. Top managers negotiate on larger issues, such as labor contracts, or even on mergers and acquisitions of other companies.

Interpersonal roles require managers to direct and supervise employees and the organization. The figurehead is typically a top or middle manager. This manager may communicate future organizational goals or ethical guidelines to employees at company meetings. A leader acts as an example for other employees to follow, gives commands and directions to subordinates, makes decisions, and mobilizes employee support. Managers must be leaders at all levels of the organization; often lower-level managers look to top management for this leadership example. In the role of liaison, a manager must coordinate the work of others in different work units, establish alliances between others, and work to share resources. This role is particularly critical for middle managers, who must often compete with other managers for important resources, yet must maintain successful working relationships with them for long time periods.

Informational roles are those in which managers obtain and transmit information. These roles have changed dramatically as technology has improved. The monitor evaluates the performance of others and takes corrective action to improve that performance. Monitors also watch for changes in the environment and within the company that may affect individual and organizational performance. Monitoring occurs at all levels of management, although managers at higher levels of the organization are more likely to monitor external threats to the environment than are middle or first-line managers. The role of disseminator requires that managers inform employees of changes that affect them and the organization. They also communicate the company's vision and purpose. Managers at each level disseminate information to those below them, and much information of this nature trickles from the top down. Finally, a spokesperson communicates with the external environment, from advertising the company's goods and services, to informing the community about the direction of the organization. The spokesperson for major announcements, such as a change in strategic direction, is likely to be a top manager. But other, more routine information may be provided by a manager at any level of a company. For example, a middle manager may give a press release to a local newspaper, or a supervisory manager may give a presentation at a community meeting.


Regardless of organizational level, all managers must have five critical skills: technical skill, interpersonal skill, conceptual skill, diagnostic skill, and political skill.

Technical skill involves understanding and demonstrating proficiency in a particular workplace activity. Technical skills are things such as using a computer word processing program, creating a budget, operating a piece of machinery, or preparing a presentation. The technical skills used will differ in each level of management. First-level managers may engage in the actual operations of the organization; they need to have an understanding of how production and service occur in the organization in order to direct and evaluate line employees. Additionally, first-line managers need skill in scheduling workers and preparing budgets. Middle managers use more technical skills related to planning and organizing, and top managers need to have skill to understand the complex financial workings of the organization.

Interpersonal skill involves human relations, or the manager's ability to interact effectively with organizational members. Communication is a critical part of interpersonal skill, and an inability to communicate effectively can prevent career progression for managers. Managers who have excellent technical skill, but poor interpersonal skill are unlikely to succeed in their jobs. This skill is critical at all levels of management.

Conceptual skill is a manager's ability to see the organization as a whole, as a complete entity. It involves understanding how organizational units work together and how the organization fits into its competitive environment. Conceptual skill is crucial for top managers, whose ability to see "the big picture" can have major repercussions on the success of the business. However, conceptual skill is still necessary for middle and supervisory managers, who must use this skill to envision, for example, how work units and teams are best organized.

Diagnostic skill is used to investigate problems, decide on a remedy, and implement a solution. Diagnostic skill involves other skills—technical, interpersonal, conceptual, and political. For instance, to determine the root of a problem, a manager may need to speak with many organizational members or understand a variety of informational documents. The difference in the use of diagnostic skill across the three levels of management is primarily due to the types of problems that must be addressed at each level. For example, first-level managers may deal primarily with issues of motivation and discipline, such as determining why a particular employee's performance is flagging and how to improve it. Middle managers are likely to deal with issues related to larger work units, such as a plant or sales office. For instance, a middle-level manager may have to diagnose why sales in a retail location have dipped. Top managers diagnose organization-wide problems, and may address issues such as strategic position, the possibility of outsourcing tasks, or opportunities for overseas expansion of a business.

Political skill involves obtaining power and preventing other employees from taking away one's power. Managers use power to achieve organizational objectives, and managers with this skill can often reach goals with less effort than those who lack it. Much like the other skills described, political skill cannot stand alone; in particular, using political skill without appropriate levels of other skills can lead to promoting a manager's own career rather than reaching organizational goals. Managers at all levels require political skill; managers must avoid others taking control that they should have in their work positions. Top managers may find that they need higher levels of political skill in order to successfully operate in their environments. Interacting with competitors, suppliers, customers, shareholders, government, and the public may require political skill.

There are a number of changes to organizational structures that influence how many managers are at each level of the organizational hierarchy, and what tasks they perform each day.

Exhibit 2: Flat vs. Tall Organizational Hierarchy

Organizational structures can be described by the number of levels of hierarchy; those with many levels are called "tall" organizations. They have numerous levels of middle management, and each manager supervises a small number of employees or other managers. That is, they have a small span of control. Conversely, "flat" organizations have fewer levels of middle management, and each manager has a much wider span of control. Examples of organization charts that show tall and flat organizational structures are presented in Exhibit 2. Many organizational structures are now flatter than they were in previous decades. This is due to a number of factors. Many organizations want to be more flexible and increasingly responsive to complex environments. By becoming flatter, many organizations also become less centralized. Centralized organizational structures have most of the decisions and responsibility at the top of the organization, while decentralized organizations allow decision-making and authority at lower levels of the organization. Flat organizations that make use of decentralization are often more able to efficiently respond to customer needs and the changing competitive environment. As organizations move to flatter structures, the ranks of middle-level managers are diminishing. This means that there are fewer opportunities for promotion for first-level managers, but it also means that employees at all levels are likely to have more autonomy in their jobs, as flatter organizations promote decentralization. When organizations move from taller to flatter hierarchies, middle managers may lose their jobs, and are either laid off from the organization or demoted to lower-level management positions. This creates a surplus of middle-level managers, who may find themselves with fewer job opportunities at the same level.
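The relationship described above between span of control and the height of the hierarchy can be sketched numerically. The function below is an illustration, not from the source text: it estimates the minimum number of management levels needed for a given headcount and span of control, showing why a wider span produces a flatter organization.

```python
import math

def management_levels(employees: int, span_of_control: int) -> int:
    """Minimum number of management levels needed so that no
    manager supervises more than `span_of_control` people."""
    levels = 0
    group = employees
    # Each added level of management divides the people to be
    # supervised by the span of control.
    while group > 1:
        group = math.ceil(group / span_of_control)
        levels += 1
    return levels

# A "tall" structure: narrow span of control, many levels.
print(management_levels(1000, 4))   # -> 5
# A "flat" structure: wide span of control, fewer levels.
print(management_levels(1000, 15))  # -> 3
```

The same 1,000 employees need five layers of management with a span of four, but only three layers with a span of fifteen, which is the trade-off the text describes.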

A team is a group of individuals with complementary skills who work together to achieve a common goal. That is, each team member has different capabilities, yet they collaborate to perform tasks. Many organizations are now using teams more frequently to accomplish work because they may be capable of performing at a level higher than that of individual employees. Additionally, teams tend to be more successful when tasks require speed, innovation, integration of functions, and a complex and rapidly changing environment. Another type of managerial position in an organization that uses teams is the team leader, who is sometimes called a project manager, a program manager, or task force leader. This person manages the team by acting as a facilitator and catalyst. He or she may also engage in work to help accomplish the team's goals. Some teams do not have leaders, but instead are self-managed. Members of self-managed teams hold each other accountable for the team's goals and manage one another without the presence of a specific leader.

Outsourcing occurs when an organization contracts with another company to perform work that it previously performed itself. Outsourcing is intended to reduce costs and promote efficiency. Costs can be reduced through outsourcing, often because the work can be done in other countries, where labor and resources are less expensive than in the United States. Additionally, by having an outsourcing company aid in production or service, the contracting company can devote more attention and resources to its core competencies. Through outsourcing, many jobs that were previously performed by American workers are now performed overseas. This has reduced the need for many first-level and middle-level managers, who may not be able to find similar jobs in another company. There are three major levels of management: top-level, middle-level, and first-level. Managers at each of these levels have different responsibilities and different functions. Additionally, managers perform different roles within those managerial functions. Finally, many organizational hierarchies are changing due to the increasing use of teams, the flattening of organizations, and outsourcing.

Activity of the Organization: Promoting Organizations
Well-planned and well-executed publicity does more than help ensure attendance at meetings. It also enhances your organization's sense of purpose, builds pride, and creates community awareness. Good promotion of your organization's activities can be one of its best assets. Whether it is carried out by one public relations officer or a promotion committee, the job is twofold:
1. To serve as a communication link with members and others involved in your organization's programs. Inform people of upcoming events, promoting items of interest and providing newsworthy information.
2. To serve as a communication link with the general public. Explain the objectives of your group and place your successful programs and activities before the public to foster understanding and goodwill.

Know Your Organization
As a member, you are familiar with your group's membership, objectives and accomplishments. Your objective as promotion chair is to know what's happening at all times. Arrange to receive copies of the secretary's minutes and important committee reports. Consult with former promotion chairs. They can give you ideas about working with local media, keeping on top of events and choosing the appropriate publicity techniques. A good way to start your job is to assemble a notebook with the information you gather.

Designing A Promotion Plan
There are lots of different tools you can use to inform the community or promote activities. It takes research and some careful evaluation of the results to select the right tool for each purpose. The first step is to determine what type of promotion is required for your activity. Ongoing events such as upcoming meetings can usually be publicized by an announcement. Special events such as fund-raising, membership drives, and community activities require a broader range of publicity and demand more intensive treatment. Both types require research. A small group with few financial resources should never feel compelled to orchestrate an expensive, high-gloss campaign, nor should an organization ever send out material which looks as if it had been thrown together at the last minute. Four factors should be considered in a promotion plan:
• your organization's publicity needs
• your organization's calendar
• the people you want to inform
• the information centres and media outlets in your area

Decide what can be done: work out a promotion schedule, discuss it with the executive, finalize the plan and give copies to the executive. Remember that the plan is a guide, so you can change it as necessary. It provides a checklist to assure that all jobs get done. All duties and tasks should be listed in chronological order, for example:

Activity                   Date         Person Responsible
Prepare press release      October 1    John
Mail press release         October 5    Susan
Contact media by phone     October 10   Committee

Gather information at your meetings. Select information carefully by asking what will be of greatest interest to your audience. Answer the 5 Ws (Who? What? Where? When? Why?) and How? Look for human interest facts that will give your story an unusual angle. Make it stand out from the rest. In most situations, you will want to spend your time focusing on media. Begin by identifying all the local newspapers, magazines, radio and T.V. stations in your area. Call or visit each media office. Make an appointment with the person who will be handling the news from your organization. Learn the following:
• the name of the contact person and his/her title, phone and fax numbers and address
• the type of information each media outlet uses. Newspapers often run feature stories, organization news, letters to the editor and community calendars. Magazines print event calendars and feature stories. Radio stations air announcements, interviews and discussion programs. T.V. stations program public service announcements, interview shows and local news.
• who their audience is
• the policies for submitting information: how often they will use your material, what the deadlines are, whether information should be phoned in or submitted in writing, and what the possibilities are of a media contact coming out to your group to do an occasional feature story

When speaking with each media contact person, give your name, address, telephone number, the name of your organization and a brief outline of its objectives.

Ongoing activities are most often promoted by the use of announcements. These are usually publicized without charge in the coming events section in newspapers, or as a public service announcement for radio and television. Announcements should be kept short (40-50 words), typewritten and double spaced. The contact person's name and telephone number should be located in the top left-hand corner. All material should be dated and sent in before the media deadline.

If you have determined that your event is newsworthy enough to merit more extensive coverage (i.e., it involves a large number of the community and has a lot of community interest), then you require a press release. In preparing a release, ask yourself the following question: what would the public want or need to know about your activity? The content of the release should then answer the 5 Ws. All these answers are vital to the editor in deciding whether or how to cover a story.

All releases should be brief, with the 5 Ws at the beginning and the least important information at the end. The contact name and telephone number should be placed in the same position as for an announcement. The release should be kept to one page of double-spaced typing with short sentences. Avoid adjectives. Double-check your releases for grammar, spelling, punctuation, and accuracy. End your release with "END" or "-30-". Releases should be sent two weeks before the deadline date. A follow-up phone call can be made to the editor to further discuss coverage of the event or any questions. When material you provide is not published or broadcast, politely find out why. This will help eliminate problems in the future.

Few organizations operate with a large advertising budget. A well-designed media campaign begins with consulting media outlets for advice on costs and content. If you plan to purchase advertisement space or air time, allocate your money fairly among the media. For example, it is not fair to place newspaper ads, but expect the radio station to provide free air time. The alternative is "free" advertising via local businesses, store-fronts and other avenues. Free advertising has its limitations including size, location, and audience reached. Weigh alternatives carefully. The success of your group's event may depend on it.

Alternative Publicity Techniques
Full-scale publicity doesn't have to stop at the press release. Posters, flyers and brochures can be produced at low cost and distributed through local businesses, direct mail, libraries, etc. Other ideas could include displays at shopping centres, schools, community events, slide presentations, a speakers' bureau, ads on restaurant placemats, other organizations' newsletters, banners, and much more. Know what your message is and who the intended audience is. Then decide which technique or combination of techniques is best.


One of the strongest promotional tools is word-of-mouth. Everyone involved in your organization is a potential salesperson. The promotion committee should ensure that everyone is well versed in all the activities in order to be an effective promoter.

Costing It Out
Predicting the money involved in a publicity campaign is not an easy task. With your promotion plan in hand, you can determine what the fixed costs will be for the year, e.g. printing, postage, and supplies. Investigate the costs for additional promotion such as advertising, brochure design, and special event costs. Take your time, shop around and use whatever resources are at hand that make the best (most efficient/effective) use of your promotion budget.

The Day Arrives
What do you do the day of the event? All arrangements should be checked and any last minute details performed. Make sure you and your committee are available to answer any questions from the general public and the media. Be prepared to run errands or find necessary information for media inquiries.

Mop Up
Now comes the time to evaluate the event. This needs to be done both from within the group and from outside the organization.

Within the group: This is the gut reaction of your group on completion of the event. You want to get the impressions of the group while they are clear in their minds. Each person should be asked for his or her reaction and the answers should be discussed thoroughly. Review your promotion efforts and their success. Look at how you spent money, on what, for what and the return received. It is important to analyze what worked and what didn't work and make recommendations for the next time.

Outside the organization: Participant evaluations and questionnaires consist of a series of questions about the program and provide immediate feedback. Some questions should deal with how a participant heard about the program. Another method to evaluate the event is by simply talking with the participants during the activity. For this to be most effective, the promotion committee should spread out and make sure they learn the reactions of a good cross-section of the participants.

There's Always A Next Time
Remember to keep a file on anything associated with the event. You probably won't do exactly the same thing again but the materials, format, strategies and contacts are valuable resources. Things to include in your file are: mailing lists, media contacts, newspaper clippings, photographs, posters, brochures, budgets, bills, receipts, information packets, advertisements, memos and notes, copies of all agendas and minutes, press releases and background material on sponsors, and correspondence. Lastly, remember to thank all the people who helped you on your promotion campaign. Although you probably thanked them on the day of the activity, it is time to say thank you again. Write notes, send a little token that says something about your organization, or take the group to lunch. It is important to recognize the efforts of all.

This publicity checklist is to be used as a working tool for your promotion committee.

For each activity, record the date completed and the person responsible.

Three Months Ahead
1. Determine your audience profile
2. Establish a mailing list
3. Contact media
 - introduce yourself and your organization
 - inquire re: deadline dates
4. Lay out promotion timepath for event (i.e., dates, times, who's responsible)
5. Send initial release to magazines, T.V., newsletters with long lead time, community calendars

Six Weeks Ahead
1. Prepare printed material
2. Confirm all details
3. Send press releases
4. Make contacts re: advertising, exhibits

Two Weeks Ahead
1. Send press releases to daily and weekly newspapers
2. Mail special invitations, complimentary tickets
3. Assemble press kits

One Week Before
1. Phone calls to remind media
2. Check details for any special coverage
3. Meet with organization for final briefing and any last-minute details
4. "Hold tight"

The Day of the Event
1. Arrive early
2. Greet media and distribute press kits
3. Answer any questions

After the Event
1. Evaluate within the group and outside the organization
2. Organize and file all materials used
3. Say "Thank you"

The term data refers to qualitative or quantitative attributes of a variable or set of variables. Data (plural of "datum") are typically the results of measurements and can be the basis of graphs, images, or observations of a set of variables. Data are often viewed as the lowest level of abstraction from which information and then knowledge are derived. Raw data, i.e. unprocessed data, refers to a collection of numbers, characters, images or other outputs from devices that collect information to convert physical quantities into symbols.
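The distinction between raw data and information can be illustrated with a short sketch (the sales figures below are invented for the example): unprocessed readings become information only once they are processed into a form that supports a decision.

```python
# Raw data: unprocessed readings from a point-of-sale device.
raw_sales = [120, 95, 143, 110, 98, 160, 130]  # units sold per day

# Information: the same data processed to support a decision.
total = sum(raw_sales)
average = total / len(raw_sales)
best_day = raw_sales.index(max(raw_sales)) + 1  # 1-based day number

print(f"Weekly total: {total} units")      # -> Weekly total: 856 units
print(f"Daily average: {average:.1f}")     # -> Daily average: 122.3
print(f"Best day: day {best_day}")         # -> Best day: day 6
```

The list of numbers alone tells a manager nothing; the derived totals and averages are the information an MIS is designed to produce.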

Information is any kind of event that affects the state of a dynamical system. In its most restricted technical sense, it is an ordered sequence of symbols. As a concept, however, information has many meanings. Moreover, the concept of information is closely related to notions of constraint, communication, control, data, form, instruction, knowledge, meaning, mental stimulus, pattern, perception, and representation.

Data model

Overview of data modeling context: A data model provides the details of information to be stored, and is of primary use when the final product is the generation of computer software code for an application or the preparation of a functional specification to aid a computer software make-or-buy decision. The figure is an example of the interaction between process and data models.[1]

A data model in software engineering is an abstract model that documents and organizes the business data for communication between team members and is used as a plan for developing applications, specifically how data is stored and accessed. According to Hoberman (2009), "A data model is a wayfinding tool for both business and IT professionals, which uses a set of symbols and text to precisely explain a subset of real information to improve communication within the organization and thereby lead to a more flexible and stable application environment."[2] A data model explicitly determines the structure of data or structured data. Typical applications of data models include database models, design of information systems, and enabling exchange of data. Usually data models are specified in a data modeling language.[3] Communication and precision are the two key benefits that make a data model important to applications that use and exchange data. A data model is the medium through which project team members from different backgrounds and with different levels of experience can communicate with one another. Precision means that the terms and rules on a data model can be interpreted in only one way and are not ambiguous. A data model can sometimes be referred to as a data structure, especially in the context of programming languages. Data models are often complemented by function models, especially in the context of enterprise models.
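As a minimal illustration of the idea (the Customer and Order entities below are hypothetical, not taken from the source), a data model can be sketched in code: entity types, their attributes, and the relationship between them are stated explicitly, so every team member reads them the same way.

```python
from dataclasses import dataclass

# A tiny data model for a hypothetical order-processing domain.
# Each entity type lists its attributes; the customer_id on Order
# records the relationship "every Order belongs to one Customer".

@dataclass
class Customer:
    customer_id: int
    name: str

@dataclass
class Order:
    order_id: int
    customer_id: int  # foreign key to Customer
    amount: float

alice = Customer(customer_id=1, name="Alice")
order = Order(order_id=100, customer_id=alice.customer_id, amount=250.0)
print(order.customer_id == alice.customer_id)  # -> True
```

In a real project the same structure would typically be drawn as an entity-relationship diagram rather than written as classes, but the content, i.e. entities, attributes, and relationships, is the same.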

Managing large quantities of structured and unstructured data is a primary function of information systems. Data models describe structured data for storage in data management systems such as relational databases. They typically do not describe unstructured data, such as word processing documents, email messages, pictures, digital audio, and video.

The role of data models

How data models deliver benefit. The main aim of data models is to support the development of information systems by providing the definition and format of data. According to West and Fowler (1999) "if this is done consistently across systems then compatibility of data can be achieved. If the same data structures are used to store and access data then different applications can share data. The results of this are indicated above. However, systems and interfaces often cost more than they should, to build, operate, and maintain. They may also constrain the business rather than support it. A major cause is that the quality of the data models implemented in systems and interfaces is poor".


"Business rules, specific to how things are done in a particular place, are often fixed in the structure of a data model. This means that small changes in the way business is conducted lead to large changes in computer systems and interfaces". "Entity types are often not identified, or incorrectly identified. This can lead to replication of data, data structure, and functionality, together with the attendant costs of that duplication in development and maintenance". "Data models for different systems are arbitrarily different. The result of this is that complex interfaces are required between systems that share data. These interfaces can account for between 25-70% of the cost of current systems". "Data cannot be shared electronically with customers and suppliers, because the structure and meaning of data has not been standardized. For example, engineering design data and drawings for process plant are still sometimes exchanged on paper". The reason for these problems is a lack of standards that will ensure that data models will both meet business needs and be consistent.

Three perspectives

The ANSI/SPARC three-level architecture shows that a data model can be an external model (or view), a conceptual model, or a physical model. This is not the only way to look at data models, but it is a useful way, particularly when comparing models. According to ANSI in 1975, a data model instance may be one of three kinds:

Conceptual schema : describes the semantics of a domain, being the scope of the model. For example, it may be a model of the interest area of an organization or industry. This consists of entity classes, representing kinds of things of significance in the domain, and relationship assertions about associations between pairs of entity classes. A conceptual schema specifies the kinds of facts or propositions that can be expressed using the model. In that sense, it defines the allowed expressions in an artificial 'language' with a scope that is limited by the scope of the model. The use of conceptual schema has evolved to become a powerful communication tool with business users. Often called a subject area model (SAM) or high-level data model (HDM), this model is used to communicate core data concepts, rules, and definitions to a business user as part of an overall application development or enterprise initiative. The number of objects should be very small and focused on key concepts. Try to limit this model to one page, although for extremely large organizations or complex projects, the model might span two or more pages.
Logical schema : describes the semantics, as represented by a particular data manipulation technology. This consists of descriptions of tables and columns, object oriented classes, and XML tags, among other things.

Physical schema : describes the physical means by which data are stored. This is concerned with partitions, CPUs, tablespaces, and the like.

The significance of this approach, according to ANSI, is that it allows the three perspectives to be relatively independent of each other. Storage technology can change without affecting either the logical or the conceptual model. The table/column structure can change without (necessarily) affecting the conceptual model. In each case, of course, the structures must remain consistent with the other model. The table/column structure may be different from a direct translation of the entity classes and attributes, but it must ultimately carry out the objectives of the conceptual entity class structure. Early phases of many software development projects emphasize the design of a conceptual data model. Such a design can be detailed into a logical data model. In later stages, this model may be translated into physical data model. However, it is also possible to implement a conceptual model directly.
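A small sketch of the three perspectives, using an in-memory SQLite database as the storage engine (the customer table is a hypothetical example): the conceptual entity becomes a logical table definition, while the physical storage details are handled by the engine and can change without touching the logical schema.

```python
import sqlite3

# Conceptual level: the domain contains a Customer entity with a name.
# Logical level: that entity is represented as a table with typed columns.
logical_schema = """
CREATE TABLE customer (
    customer_id INTEGER PRIMARY KEY,
    name        TEXT NOT NULL
)
"""

# Physical level: the storage engine decides pages, files, and indexes.
# Using ":memory:" instead of a disk file changes the physical schema
# without altering the logical schema above.
conn = sqlite3.connect(":memory:")
conn.execute(logical_schema)
conn.execute("INSERT INTO customer (customer_id, name) VALUES (1, 'Alice')")
row = conn.execute("SELECT name FROM customer WHERE customer_id = 1").fetchone()
print(row[0])  # -> Alice
conn.close()
```

This is the independence ANSI describes: the SELECT statement depends only on the logical table structure, not on where or how the rows are physically stored.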

Attribute may refer to:
• In research, a characteristic of an object (person, thing, etc.) - see attribute (research)
• In philosophy, property (philosophy), an abstraction of a characteristic of an entity or substance
• In art, an object that identifies a figure, most commonly referring to objects held by saints (earlier, by pagan gods) - see emblem
• In linguistics, a syntax unit, either a word, phrase or clause, that modifies a noun
• Attribute grammar, in formal computer languages
• Attribute (computing), a factor of an object or other kind of entity
• SMART attribute (Self-Monitoring, Analysis, and Reporting Technology)
• Attribute (network management), a property of a managed object that has a value
• A property inherent in a database entity or associated with that entity for database purposes; this definition is especially relevant for dimensional tables
• A parameter of an element in SGML-based markup languages such as HTML and XML. See HTML#Attributes
• Local colour-palette properties of a part of the screen, in some early 8-bit home computers; hence attribute clash

Attribute (computing)
In computing, an attribute is a specification that defines a property of an object, element, or file. It may also refer to or set the specific value for a given instance of such. In actual usage, however, the term attribute is often treated as equivalent to a property, depending on the technology being discussed. For clarity, attributes should more correctly be considered metadata. An attribute is frequently and generally a property of a property. A good example is the process of XML assigning values to properties (elements). Note that the element's value is found before the (separate) end tag, not in the element itself. The element itself may have a number of attributes set (NAME="IAMAPROPERTY").

If the element in question could be considered a property (CUSTOMER_NAME) of another entity (let's say CUSTOMER), the element can have zero or more attributes (properties) of its own (CUSTOMER_NAME is of TYPE="KINDOFTEXT"). An attribute of an object usually consists of a name and a value; of an element, a type or class name; of a file, a name and extension.
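The element-versus-attribute distinction above can be seen with Python's standard XML parser. The CUSTOMER fragment below reuses the names from the text; the value "Acme Ltd." is invented for illustration.

```python
import xml.etree.ElementTree as ET

# CUSTOMER_NAME is an element (a property of CUSTOMER); TYPE="KINDOFTEXT"
# is an attribute of that element (a property of a property).
doc = '<CUSTOMER><CUSTOMER_NAME TYPE="KINDOFTEXT">Acme Ltd.</CUSTOMER_NAME></CUSTOMER>'
root = ET.fromstring(doc)

name = root.find("CUSTOMER_NAME")
print(name.text)            # the element's value, found before the end tag
print(name.attrib["TYPE"])  # the attribute's value, set inside the start tag
```

Note how the element's value lives in its text content, while the attribute lives in the tag itself, exactly as the passage describes.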

Each named attribute has an associated set of rules called operations: one doesn't add characters to an integer array or process it as an image object, and one doesn't process text as floating point (decimal) numbers. It follows that an object definition can be extended by imposing data typing: a representation format, a default value, and legal operations (rules) and restrictions ("Division by zero is not to be tolerated!") are all potentially involved in defining an attribute, or conversely, may be spoken of as attributes of that object's type. A JPEG file is not decoded by the same operations (however similar they may be, as these are all graphics data formats) as a PNG or BMP file, nor is a floating point typed number operated upon by the rules applied to typed long integers.

For example, in computer graphics, line objects can have attributes such as thickness (with real values), color (with descriptive values such as brown or green or values defined in a certain color model, such as RGB), dashing attributes, etc. A circle object can be defined in similar attributes plus an origin and radius. Markup languages, such as HTML and XML, use attributes to describe data and the formatting of data.
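The line and circle attributes above can be sketched as small typed data structures. This is an illustrative sketch, not any particular graphics library's API; the class and field names are invented.

```python
from dataclasses import dataclass

@dataclass
class Line:
    thickness: float = 1.0    # real-valued attribute
    color: str = "black"      # descriptive value such as "brown" or "green"
    dashed: bool = False      # dashing attribute

@dataclass
class Circle(Line):           # a circle can be defined with similar attributes...
    origin: tuple = (0.0, 0.0)  # ...plus an origin
    radius: float = 1.0         # ...and a radius

c = Circle(thickness=2.5, color="green", radius=4.0)
print(c.color, c.radius)
```

Each attribute here is a name with a typed value and a default, mirroring the name/value pairs the text describes.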

Attributes in C#
In the C# programming language, attributes are metadata attached to a field or a block of code, equivalent to annotations in Java. Attributes are accessible both to the compiler and programmatically through reflection. Users of the language see many examples where attributes are used to address cross-cutting concerns and other mechanistic or platform uses, which creates the false impression that this is their sole intended purpose. Their specific use as metadata is left to the developer and can cover a wide range of information about any given application, its classes and members, that is not instance-specific. The decision to expose any given attribute as a property is also left to the developer, as is the decision to use attributes as part of a larger application framework. Attributes should be contrasted with XML documentation, which also defines metadata but is not included in the compiled assembly and therefore cannot be accessed programmatically.

Attributes in multi-valued databases
On many post-relational or multi-valued database systems, relative to SQL, tables are files, rows are items, and columns are attributes. In both the database and the code, attribute is synonymous with property and variable, although attributes can be further defined to contain values and sub-values.

Managers and Their Information Needs
Information is needed for decision making at all levels of management.

Managers at different organizational levels make different types of decisions, control different types of processes, and have different information needs. The three classical levels of management are strategic, tactical (middle), and operational. Titles carry different weight in different organizations; for example, a vice president at a financial organization may not even be a middle manager. Strategic managers operate in a highly unstructured environment and use EISs and DSSs.

Historically, the most common organizational structure was a generic pyramid-shaped hierarchy with a few leaders at the top and an increasing number of workers at each subsequent lower managerial and operational level. The pyramid is getting flatter: in 1993, in the U.S. alone, some 450,000 middle managers lost their jobs. Some small, knowledge-intensive companies have adopted a matrix pattern as their organizational structure, with no one leader and leadership distributed among many more people, varying by project, product, or discipline. Matrix management includes having multiple bosses.

Technology aside, the politics of information within an organization can undermine optimal business decision making if it is not taken into account when developing systems and deciding how people will support those systems. Sub-optimization is the optimization of an individual or a department at the expense of the larger organization.

In many organizations, clerical and shop-floor workers make up the largest group of workers. Operational managers are responsible for daily operations; they make decisions concerning a narrow time span about the deployment of small groups of clerical and/or shop-floor workers. Middle, or tactical, managers receive strategic decisions from above as general directives. Using those directives as guidelines, they develop tactics to meet the strategic directives; that is, they make decisions concerning how and when specific resources will be utilized.
Usually, a middle manager is responsible for several operational managers and for finding the best operational measures to accomplish their superiors' strategic decisions. While a tactical decision concentrates on how to do something, a strategic decision focuses on what to do. Strategic managers and directors make decisions that affect the entire organization, or large parts of it, and that leave an impact in the long run. People at different management levels have different information needs.

Most of the information that managers require is used to make decisions. The decision-making process of middle managers and above is less structured than that of operational managers; in general, strategic decisions have no proven methods for selecting a course of action that guarantees a predicted outcome.

Data characteristics determine where and how the data will be used. Data range refers to the amount of data from which information is extracted. Time span refers to how long a period of time the data covers. Level of detail is the degree to which the information generated is specific. Data may be internally or externally sourced. Structured data are numbers and facts that can be conveniently stored and retrieved in an orderly manner. Unstructured data are drawn from meeting discussions, private conversations, textual documents, graphics, graphical representations, and other non-uniform sources. The higher the manager, the less structured the decisions that the manager faces.

The Web: The Great Equalizer
Managers plan and control. Planning's main ingredients are scheduling, budgeting, and resource allocation. The budget is the most important part of business planning, and the plan is the basis for operations. Control comprises the activities that ensure operations proceed according to the plan. Both planning and control involve decision making; a decision is a commitment to act, and most of a manager's day is devoted to meetings that produce decisions. Managers control actual activities by comparing actual results to expected results. When discrepancies between planned and actual performance are found, managers determine the reason for the variance. Management by exception means that a manager reviews only those areas that have deviated from the expected.

Characteristics of Effective Information
Certain types of information can be grasped more quickly when presented graphically.
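Management by exception, as described above, can be sketched as a simple planned-versus-actual comparison in which only significant variances are surfaced. The figures and the 5% tolerance below are invented for illustration.

```python
# Planned and actual results for a period (invented figures).
planned = {"sales": 100_000, "costs": 60_000, "units": 5_000}
actual  = {"sales": 103_000, "costs": 71_000, "units": 4_100}

def exceptions(planned, actual, tolerance=0.05):
    """Return only the areas whose variance exceeds the tolerance."""
    out = {}
    for key, plan in planned.items():
        variance = (actual[key] - plan) / plan
        if abs(variance) > tolerance:   # within tolerance: not reported to the manager
            out[key] = round(variance, 3)
    return out

report = exceptions(planned, actual)
print(report)  # sales deviated only 3%, so only costs and units are flagged
```

The manager's attention goes only to the flagged areas, which is exactly the point of the technique.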

Many applications allow the user to select how the data will be presented; three-dimensional graphics are an option now as well. Dynamic representation usually includes moving images that represent either the speed or direction of what is happening in real time.

Types of Information Systems
POS and other TPSs serve clerical and shop-floor workers. TPSs are interfaced with applications that provide clerical workers and operational managers with up-to-date information, and they are also used by operational managers to generate ad hoc reports. Transaction processing systems support the operational level. Decision support and expert systems serve middle managers. Executive information systems (EIS) provide managers with timely and concise information about the performance of their organization. Online analytical processing (OLAP) applications are designed to let a user view a cube of tables showing relationships among several related variables.

Politics is the decision to act in the interest of the individual decision maker rather than in the interest of the organization as a whole. Political tactics include insisting on adding features that will afford the manager more control, trying to derail the development effort by not cooperating with the developers, and promoting alternatives to the system. Enterprise-wide systems are shared by many business units and managerial levels.
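The OLAP idea of viewing the same facts along several related dimensions can be sketched in a few lines: a "cube" is just an aggregation of a measure keyed by whichever dimensions the user chooses. The sales records below are invented.

```python
from collections import defaultdict

# Invented sales facts with three dimensions (region, product, quarter)
# and one measure (sales).
facts = [
    {"region": "East", "product": "A", "quarter": "Q1", "sales": 10},
    {"region": "East", "product": "B", "quarter": "Q1", "sales": 7},
    {"region": "West", "product": "A", "quarter": "Q1", "sales": 4},
    {"region": "West", "product": "A", "quarter": "Q2", "sales": 6},
]

def rollup(facts, *dims):
    """Aggregate the sales measure along the chosen dimensions."""
    cube = defaultdict(int)
    for f in facts:
        key = tuple(f[d] for d in dims)   # keys are dimension tuples, e.g. ('East',)
        cube[key] += f["sales"]
    return dict(cube)

print(rollup(facts, "region"))             # view by one dimension
print(rollup(facts, "region", "quarter"))  # "drill down" by adding a dimension
```

Real OLAP engines precompute and index such aggregates, but the relationship among the variables is the same.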

Decision theory
Decision theory in economics, psychology, philosophy, mathematics, and statistics is concerned with identifying the values, uncertainties and other issues relevant in a given decision, its rationality, and the resulting optimal decision. It is very closely related to the field of game theory.

Types Of Decision Making
Main types
There are many types of decision making and these can be easily categorised into the following four groups:
• Rational
• Intuitive
• Recognition primed decision making
• The ultimate decision making model

Let's consider these in more detail.


Rational decision making is the commonest of the types of decision making that is taught and learned when people want to improve their decision making. These are logical, sequential models where the emphasis is on listing many potential options and then working out which is best. Often the pros and cons of each option are also listed and scored in order of importance. The rational aspect indicates that considerable reasoning and thinking is done in order to select the optimum choice. Because we put such a heavy emphasis on thinking and getting it right in our society, there are many of these models and they are very popular. People like to know what the steps are, and many of these models have steps that are done in order. People would love to know what the future holds, which also makes these models popular: the rationale behind the various steps is that if you do x, then y should happen. However, most people have personal experience that the world usually doesn't operate that way!
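The list-and-score step of the rational model can be sketched directly: each option's pros and cons get importance scores, and the option with the best total wins. The options and scores below are invented for illustration.

```python
# Each option lists its pros (positive scores) and cons (negative scores),
# weighted by importance. All names and numbers are invented.
options = {
    "build in-house": {"full control": +8, "high cost": -6, "slow delivery": -4},
    "buy a package":  {"fast delivery": +7, "licence fees": -3, "poorer fit": -2},
}

def best_option(options):
    """Total each option's pro/con scores and pick the highest."""
    totals = {name: sum(scores.values()) for name, scores in options.items()}
    return max(totals, key=totals.get), totals

choice, totals = best_option(options)
print(choice, totals)
```

The weakness the passage points out is visible here too: the scores assume we can predict each outcome's value in advance.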

The second of the types of decision making are the intuitive models. The idea here is that there may be absolutely no reason or logic to the decision making process. Instead, there is an inner knowing, or intuition, or some kind of sense of what the right thing to do is. And there are probably as many intuitive types of decision making as there are people. People can feel it in their heart, or in their bones, or in their gut and so on. There are also a variety of ways for people to receive information, either in pictures or words or voices. People talk about extra sensory perception as well. However, they are still actually picking up the information through their five senses. Clairsentience is where people feel things, clairaudience is hearing things and clairvoyance is seeing things. And of course we have phrases such as 'I smell a rat', ' it smells fishy' and 'I can taste success ahead'. Other types of decision making in the intuitive category might include tossing a coin, throwing dice, tarot cards, astrology, and so on. Decision wheels are usually more humorous than intuitive but they do have a serious application.

Recognition primed...
Gary Klein has spent considerable time studying human decision making and his results are very interesting. He believes that we make 90 to 95% of our decisions in a pattern recognition way. He suggests that what we actually do is gather information from our environment in relation to the decision we want to make. We then pick an option that we think will work. We rehearse it mentally and if we still think it will work, we go ahead. If it does not work mentally, we choose another option and run that through in our head instead. If that seems to work, we go with that one. We pick scenarios one by one, mentally check them out, and as soon as we find one that works, we choose it. He also points out that as we get more experience, we can recognise more patterns, and we make better choices more quickly.
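The pattern Klein describes is a loop, not a comparison: options are mentally simulated one at a time, and the first workable one is chosen. A minimal sketch, with an invented option list and an invented "mental simulation" check:

```python
def first_workable(options, simulate):
    """Pick scenarios one by one and commit to the first that works mentally."""
    for option in options:       # options are considered in order, not all compared
        if simulate(option):     # mental rehearsal: would this work?
            return option        # first workable option wins
    return None                  # no option survived rehearsal

# Invented options and an invented simulation outcome for illustration.
options = ["frontal assault", "flanking move", "withdraw and regroup"]
works = lambda option: option == "flanking move"
print(first_workable(options, works))
```

Experience improves both parts: a richer pattern library orders the candidate options better, and a better mental model makes the rehearsal check more reliable.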

Of interest here is that the military in many countries have adapted his methods because they are considerably more effective than either of the types of decision making we've discussed already. In fact, you could say that his model is a combination of the above two types of decision making.

The ultimate...
In terms of making decisions for your own life, this last of the types of decision making is my favored model. It includes the ideas of the recognition primed decision making model and much more. Firstly, before you even make a decision, you establish how and who you want to be. You obviously want to be in a good state so that you can make good decisions. But you also want to be true to yourself, and that means knowing who 'yourself' is. Once you learn how to be solid and centred, then and only then, do you make decisions. And the decisions are always organised around staying true to yourself and doing things that are good for, and aligned with, who you are. Doing things that are on your own path, and that allow you to become even more solid and centred... The whole model is organised around having the kinds of experiences that you want to be having, and even when the world upsets your plans with its own, you learn how to use this and manipulate it so that you still get what you want anyway...

Types of Information Systems
Information systems differ according to business needs, and they also differ depending on the level of the organization at which they are used. The three major information systems are:
1. Transaction processing systems
2. Management information systems
3. Decision support systems
Figure 1.2 shows the relation of information systems to the levels of organization. The information needs are different at different organizational levels, and accordingly the information can be categorized as strategic information, managerial information and operational information. Strategic information is the information needed by top management for decision making. For example, trends in the revenues earned by the organization are required by top management for setting the policies of the organization. This information is not required at lower levels in the organization. The information systems that provide these kinds of information are known as decision support systems.


Figure 1.2 - Relation of information systems to levels of organization

The second category of information, required by middle management, is known as managerial information. The information required at this level is used for making short-term decisions and plans for the organization. Information such as sales analysis for the past quarter or yearly production details falls under this category. The management information system (MIS) caters to such information needs of the organization. Because of its capability to fulfil the managerial information needs of the organization, the MIS has become a necessity for all big organizations, and owing to its vastness, most big organizations have separate MIS departments to look into related issues and the proper functioning of the system. The third category of information relates to the daily or short-term information needs of the organization, such as attendance records of employees. This kind of information is required at the operational level for carrying out day-to-day operational activities. Because of its capability to provide information for processing the transactions of the organization, the information system is known as a transaction processing system or data processing system. Some examples of information provided by such systems are processing of orders, posting of entries in a bank, and evaluating overdue purchase orders.

Transaction Processing Systems
A TPS processes the business transactions of the organization. A transaction can be any activity of the organization, and transactions differ from organization to organization. For example, in a railway reservation system, booking and cancelling are transactions, and any query made to the system is also a transaction. Some transactions, however, are common to almost all organizations: adding a new employee, maintaining employees' leave status, maintaining employee accounts, and so on. A TPS provides high-speed and accurate processing for the record keeping of basic operational processes, including calculation, storage and retrieval. Transaction processing systems provide speed and accuracy, and can be programmed to follow the routine functions of the organization.
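The railway reservation example can be sketched as a tiny TPS: each booking or cancellation is a routine, programmed transaction that updates state and keeps a record. This is an illustrative toy, not a real reservation system; the seat count and names are invented.

```python
class ReservationTPS:
    """Minimal sketch of a transaction processing system: calculation,
    storage (the log), and retrieval for routine, well-structured transactions."""

    def __init__(self, seats):
        self.seats = seats
        self.log = []                        # basic record keeping

    def book(self, passenger):
        if self.seats == 0:
            return False                     # transaction rejected, nothing recorded
        self.seats -= 1
        self.log.append(("BOOK", passenger))
        return True

    def cancel(self, passenger):
        self.seats += 1
        self.log.append(("CANCEL", passenger))

tps = ReservationTPS(seats=2)
tps.book("Rao"); tps.book("Sen"); tps.cancel("Rao")
print(tps.seats, tps.log)
```

Every transaction follows the same fixed procedure, which is why a TPS can be fast, accurate, and fully programmed in advance.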


Management Information Systems
These systems assist middle management in problem solving and decision making. They use the results of transaction processing and some other information as well. An MIS is a set of information processing functions and should handle queries as quickly as they arrive. An important element of an MIS is the database. A database is a non-redundant collection of interrelated data items that can be processed through application programs and made available to many users.

Decision Support Systems
These systems assist higher management in making long-term decisions. Systems of this type handle unstructured or semi-structured decisions. A decision is considered unstructured if there are no clear procedures for making it and if not all the factors to be considered can be readily identified in advance. Such decisions are not of a recurring nature; some recur infrequently or occur only once. A decision support system must be very flexible: the user should be able to produce customized reports by supplying the particular data and format specific to a situation.
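The flexibility a DSS needs can be sketched as a report function in which the user, not the programmer, chooses the data slice and the fields. The records and field names below are invented for illustration.

```python
# Invented revenue records for illustration.
records = [
    {"year": 2022, "revenue": 120, "region": "North"},
    {"year": 2023, "revenue": 150, "region": "North"},
    {"year": 2023, "revenue": 90,  "region": "South"},
]

def custom_report(records, fields, where=lambda r: True):
    """Return only the requested fields from the rows matching the user's filter."""
    return [{f: r[f] for f in fields} for r in records if where(r)]

# The user supplies both the filter and the format of the output.
report = custom_report(records, ["year", "revenue"],
                       where=lambda r: r["region"] == "North")
print(report)
```

Because both the selection and the layout come in as parameters, each one-off, non-recurring question gets its own customized report without new programming.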

Summary of Information Systems
Category of information system: Transaction processing system
Characteristics: Substitutes computer-based processing for manual procedures. Deals with well-structured processes. Includes record-keeping applications.

Category of information system: Management information system
Characteristics: Provides input to be used in the managerial decision process. Deals with supporting well-structured decision situations. Typical information requirements can be anticipated.

Category of information system: Decision support system
Characteristics: Provides information to managers who must make judgments about particular situations. Supports decision-makers in situations that are not well structured.

Module-2 System Analysis and Development Methodologies

Systems analysis:
Systems analysis is the study of sets of interacting entities, including computer systems analysis. This field is closely related to requirements analysis or operations research. It is also "an explicit formal inquiry carried out to help someone (referred to as the decision maker) identify a better course of action and make a better decision than he might otherwise have made."


The Need For System Analysis
When you are asked to computerize a system, as a requirement of data processing or an information need, it is necessary to analyze the system from different angles. While satisfying such a need, the analysis of the system is a basic necessity for an efficient system design. The need for analysis stems from the following points of view.

System Objective: It is necessary to define the system objective(s). Many a time it is observed that systems have historically been in operation and have lost their main purpose of achieving their objectives, and the users of the system and the personnel involved are not in a position to define the objective(s). Since you are going to develop a computer-based system, it is necessary to redefine or reset the objective(s) as a reference point in the context of the current business requirement.

System Boundaries: It is necessary to establish the system boundaries which would define the scope and the coverage of the system. This helps to sort out and understand the functional boundaries of the system, the department boundaries in the system, and the people involved in the system. It also helps to identify the inputs and the outputs of the various sub-systems covering the entire system.

System Importance: It is necessary to understand the importance of the system in the organization. This would throw more light on its utility and would help the designer to decide the design features of the system. It would be possible then to position the system in relation to the other systems for deciding the design strategy and development.

Nature of the System: The analysis of the system will help the system designer to conclude whether the system is closed or open, and deterministic or probabilistic. Such an understanding of the system is necessary prior to designing the process, to ensure the necessary design architecture.

Role of the System as an Interface: The system, many a times, acts as an interface to the other systems. Hence through such an interface, it activates or promotes some changes in the other systems. It is necessary to understand the existing role of the system, as an interface, to safeguard the interests of the other systems. Any modifications or changes made should not affect the functioning or the objective of the other systems.

Participation of Users: The strategic purpose of the analysis of the system is to seek the acceptance of the people for a new development. The system analysis process provides a sense of participation to the people. This helps in breaking resistance to the new development and also ensures commitment to the new system.

Understanding of Resource Needs: The analysis of the system helps in defining the resource requirements in terms of hardware and software. If any additional resources are required, this means an investment, and management likes to evaluate the investment from the point of view of the return on it. If the return on the investment is not attractive, management may drop the project.

Assessment of Feasibility: The analysis of the system helps to establish the feasibility from different angles. The system should satisfy the technical, economic and operational feasibility.

Many times, systems are feasible from the technical and economic points of view, but they may be infeasible from the operational point of view. The assessment of feasibility saves the investment and the system designer's time; it also saves embarrassment to the system designer, who is viewed as the key figure in such projects. One can approach the system analysis and design exercise in a systematic manner, in steps, as shown in the table below:

Step: Need for information
Elaboration: Define the nature of information; also who wants it and who uses it.
Explanation: Identify the users and the application of the information for achieving the objectives.

Step: Define the system
Elaboration: Decide the nature and type of the system and its scope.
Explanation: Helps to determine the system ownership, its benefits and complexity.

Step: Feasibility - technical success
Explanation: Hardware and software availability and capability for implementation.

Step: Feasibility - economic viability
Explanation: Study the investment and benefits. Assess the improvement in the value of the information. Determine the return on investment.

Step: Feasibility - operational effectiveness
Explanation: Examine whether the system will perform as desired in terms of time and results. Are the users ready to use the system?

Step: Detailing the requirements
Elaboration: Identify in precise terms the strategic, functional and operational information needs. Establish I/O linkages.
Explanation: Study the sources of generating the information. Modify the existing system to satisfy the needs.

Step: Conceptual system
Elaboration: Determine the inputs, process and outputs, and design a conceptual model.
Explanation: Conceptualization is necessary to understand the system process.

Step: Detailing the system
Elaboration: Draw the document flow charts, the data-flow diagrams, the hierarchy diagrams, and the data and information versus its users mapping table.
Explanation: Helps in bringing clarity to the data flow. The responsibility centres and the process centres are identified.

Step: Structuring the system design
Elaboration: Break the system into its hierarchical structure.
Explanation: Helps in understanding the data flow from one level to the other and the processes carried out at each level.

Step: Conceptual model of the computer system
Elaboration: Define, step by step, the usage of files, processes and the interface. Define the data structures and the validation procedures.
Explanation: Helps to put down the data processing flow in the computerized system. Draw the computer system charts.

Step: Break the system into programme modules
Elaboration: Make a physical conversion of the system into the programme structures in a logical order.
Explanation: Modules will be data entry, data validation, data processing, reporting and storing.

Step: Develop the test data for checking the system's ability
Elaboration: Test the modules and the integrity of the system in terms of input versus output. Plan white box and black box testing.
Explanation: Confirms whether the system design is satisfactory. Suggests the modifications.

Step: Install the system
Elaboration: Install on the hardware.
Explanation: Install, test and run the system before the user is exposed to it in a live mode.

Step: Implementation
Elaboration: Train the personnel. Run the system in parallel. Prepare a system manual.
Explanation: Helps to identify the problems and provide solutions.

Step: Review and maintenance
Elaboration: Review the system through the audit trail and test data; also confirm whether the objective is fulfilled. Carry out the modifications, if any.
Explanation: Helps to maintain the system quality and the quality of information through modification, if necessary.

Stages in System Analysis
Stages of System Development Life Cycle (SDLC)
The System Development is the interactive process which consists of the following stages

Preliminary Investigation: One of the most tedious tasks is to recognize the real problem of a pre-installed system; the analyst may have to spend hours and days understanding the fault in the system. This fault could, however, have been avoided if a proper preliminary investigation had been done before installing the system. This is the first stage of the development of the system. In this stage the analyst makes a survey, gathering all the available information needed for the system elements and the allocation of requirements to the software.

Analysis of the Requirements: The analyst comes to understand the nature of the information and the functions of the software required for the system. The analyst makes a brief survey of the requirements and tries to analyze the performance of the system to be developed, and also makes sure that enough information and resources are available for building the appropriate system.

System Design: The analyst makes a number of designs of the system, on paper or on the computer, and checks whether the rough image of the system comprises all the requirements. Once this is done, the analyst selects and finalizes the design best suited to the development of the system.

System Coding: The analyst translates the code or the programs into machine-readable form. The coding step is very time consuming and leaves room for a number of errors.

System Testing: Once the analyst is through with the coding stage, he tests the system to check whether it is working as expected, and corrects any flaws found in it.

System Implementation: This is one of the most vital phases, as in this phase the analyst actually hands the system over to the customer and expects positive feedback.

System Maintenance: In the last stage of the SDLC, the analyst needs to maintain the system and see to it that it is working within the standards set, removing any defects or flaws that occur.

Structured Analysis and Design
Structured analysis is a set of techniques and graphical tools that allow the analyst to develop a new kind of system specification that is easily understandable to the user. Analysts work primarily with their wits, pencil and paper. Structured systems analysis and design separates process and data; the emphasis is on the procedural aspects of a system, and the design is top-down, modular and hierarchical. A separate model is built to represent each of these two views:
• A process model
• A data model


This is the fundamental difference between the Object Oriented and Structured paradigms.

History of Structured Analysis and Systems Design
• Developed in the late 1970s by DeMarco, Yourdon, and Constantine after the emergence of structured programming.
• IBM incorporated SASD into their development cycle in the late 1970s and early 1980s.
• Classical SASD was modified due to its inability to represent real-time systems.
• In 1989, Yourdon published "Modern Structured Analysis".
• The availability of CASE tools in the 1990s enabled analysts to develop and modify the graphical SASD models.

Philosophy of Structured analysis and design
• Analysts attempt to divide large, complex problems into smaller, more easily handled ones: "divide and conquer".
• Top-down approach (Classical SA), or middle-out (Modern SA).
• Functional view of the problem: "form follows function".
• Analysts use graphics to illustrate their ideas whenever possible.
• Analysts must keep a written record.
The purpose of structured analysis and design is to develop a useful, high-quality information system that will meet the needs of the end user.

Goals of Structured Analysis and Design
• Improve quality and reduce the risk of system failure
• Establish concrete requirements specifications and complete requirements documentation
• Focus on reliability, flexibility, and maintainability of the system

Characteristics of a Good analysis method
• Graphical with supporting text.
• Allows the system to be viewed in a top-down and partitioned fashion.
• Minimum redundancies.
• The reader should be able to predict system behavior.
• Easy for the user to understand.

Elements of Structured Analysis and Design
• Essential Model
• Environmental Model
• Behavioral Model
• Implementation Model

Essential Model
• A model of what the system must do
• Does not define how the system will accomplish its purpose
• Is a combination of the environmental and behavioral models

Environmental Model
• Defines the scope of the proposed system
• Defines the boundary and interaction between the system and the outside world
• Composed of: statement of purpose, context diagram, and event list

Behavioral Model
• Model of the internal behavior and data entities of the system
• Models the functional requirements
• Composed of: data dictionary, data flow diagram, entity relationship diagram, process specifications, and state transition diagram

Implementation Model
• Maps the functional requirements to the hardware and software
• Minimizes the cost of development and maintenance
• Determines which functions should be manual vs. automated
• Can be used to discuss the cost-benefits of functionality with the users/stakeholders
• Defines the human-computer interface
• Defines non-functional requirements
• Tools: structure charts

Statement of Purpose
• A clear and concise textual description of the purpose of the system
• It is deliberately vague
• It is intended for top-level management, user management, and others who are not directly involved in the system

Example of Statement of Purpose – The purpose of the credit card system is to provide a means for the company to extend credit to the customer. The system will handle details of credit application, credit management, billing, transaction capture, remittance, and management reporting. Information about transactions should be available to the corporate accounting system.

Analysis and Design Process

Context Diagram – Purpose
• Highlights the boundary between the system and the outside world
• Highlights the people, organizations, and outside systems that interact with the system under development
• A special case of the data flow diagram

Context Diagram – Notation
• Process – Represents the proposed system
• Terminator – Represents the external entities
• Flow – Represents the incoming and outgoing data flows


Context Diagram – Example

Event List – Purpose
• A list of the events/stimuli outside the system to which it must respond
• Similar to "use cases"

Event List – Types
• Flow-oriented event (process is triggered by incoming data)
• Temporal event (process is triggered by an internal clock)
• Control event (process is triggered by an external, unpredictable event)

Event List – Example
• Customer applies for a credit card
• Customer makes a transaction at a store
• Customer pays a bill
• Customer disputes charges
• Customer service changes credit terms

Data Flow Diagram – Purpose

• Provides a means for functional decomposition
• Primary tool in analysis to model data transformation in the system

Data Flow Diagram – Notation
• Process – Represents functions in the system
• Terminator – Represents the external entities
• Flow – Represents data flows
• Store – Represents data stores

Data Flow Diagram – Leveling

Data Flow Diagram – Example


Data Flow Diagram – Validation
• Black hole – A process with inputs but no outputs
• Spontaneous (miracle) – A process with outputs but no inputs
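The validation rules above can be sketched as a simple check over a diagram's processes. The dictionary representation of a diagram used below (process name mapped to its input and output flows) is an invented illustration, not part of any standard DFD tool:

```python
# Hypothetical sketch: detecting "black hole" and "miracle" (spontaneous)
# processes in a data flow diagram. The diagram representation is an
# assumption made for this example.

def validate_dfd(processes):
    """Return (process, problem) pairs for suspect processes."""
    problems = []
    for name, flows in processes.items():
        if flows["in"] and not flows["out"]:
            problems.append((name, "black hole: inputs but no outputs"))
        if flows["out"] and not flows["in"]:
            problems.append((name, "miracle: outputs but no inputs"))
    return problems

diagram = {
    "Apply Payment":   {"in": ["payment"], "out": ["updated account"]},
    "Archive Orders":  {"in": ["order"],   "out": []},          # black hole
    "Generate Report": {"in": [],          "out": ["report"]},  # miracle
}

for process, problem in validate_dfd(diagram):
    print(process, "-", problem)
```

A real CASE tool would perform this kind of check automatically when levelling diagrams; the sketch only shows the rule itself.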


Data Dictionary – Purpose
• Defines data elements to avoid different interpretations

Data Dictionary – Notation
• ‘=‘ Is composed of
• ‘+‘ And
• ‘( )‘ Element is optional
• ‘{ }‘ Iteration
• ‘[ ]‘ Select one of a list of elements
• ‘|‘ Separates choices of elements
• ‘**‘ Comment
• ‘@‘ Identifier for a store (unique id)

Data Dictionary – Examples
• Element Name = Card Number
• Definition = Uniquely identifies a card
• Alias = None
• Format = LD+LD+LD+LD+SP+…LD
• SP = " " (space)
• LD = {0-9} (legal digits)
• Range = 5191 0000 0000 0000 to 5191 9999 9999 9999
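As an illustration only (not from the original material), the card-number entry above could be enforced in code with a regular expression; the pattern and function name are assumptions, reading the format as four groups of four digits separated by spaces, with the range restricting the prefix to 5191:

```python
import re

# Hypothetical check derived from the data-dictionary entry:
# four legal-digit groups separated by spaces, prefix fixed at 5191.
CARD_NUMBER = re.compile(r"5191( \d{4}){3}")

def is_valid_card_number(value):
    return CARD_NUMBER.fullmatch(value) is not None

print(is_valid_card_number("5191 0000 0000 0000"))  # True
print(is_valid_card_number("4111 1111 1111 1111"))  # False: wrong prefix
```

This shows how a formal data-dictionary definition translates directly into a machine-checkable rule, which is the point of avoiding different interpretations.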

Entity Relationship Diagram (ERD) – Purpose
• A graphical representation of the data layout of a system at a high level of abstraction
• Defines data elements and their inter-relationships in the system

Entity Relationship Diagram – Notation
• Data element
• Relationship
• Associated object
• Cardinality – Exactly one
• Cardinality – Zero or one
• Cardinality – Mandatory many
• Cardinality – Optional many

Entity Relationship Diagram – Example

Structure Charts – Purpose
• Functional decomposition (divide and conquer)
• Information hiding
• Modularity
• Low coupling
• High internal cohesion



Structure Charts – Notation
• Modules
• Library modules
• Module call
• Data
• Flag

Structure Charts – Example


Structure Charts – Cohesion
• Functional – Elements are combined to complete one specific function
• Sequential – Elements are combined because data flows from one step to another
• Communicational – Elements are combined because they all act on one data store
• Procedural – Elements are combined because control flows from one step to another
• Temporal – Statements are together because they occur at the same time
• Logical – Elements are together because of their type of function, such as all edits
• Coincidental – Elements are grouped together randomly
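Two ends of the cohesion scale above can be illustrated in a few lines of code; the functions and data are invented for the example:

```python
# Functional cohesion (strongest): every statement contributes to one
# task, computing an order total.
def order_total(prices):
    return sum(prices)

# Coincidental cohesion (weakest): unrelated actions grouped in one
# module only by accident.
def misc_utilities(prices, name):
    total = sum(prices)          # billing concern
    greeting = "Hello, " + name  # UI concern, unrelated to billing
    return total, greeting

print(order_total([10.0, 2.5]))           # 12.5
print(misc_utilities([10.0, 2.5], "Ann"))
```

A module like `misc_utilities` is harder to name, test and reuse, which is why structured design pushes toward functional cohesion.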

Structure Charts – Coupling
• Indirect relationship – Modules are independent and have no way to communicate
• Data – Only necessary data is passed between two modules
• Stamp – A data structure is passed to a module, but the module needs only a portion of the data in the structure
• Control – Flags are passed between modules
• External – Two or more modules reference the same piece of external data; this is unavoidable in traditional batch processing
• Common – Modules access data through global variables
• Content – One module changes the data of another module
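Three of the coupling levels above can be contrasted in code; the function names and data are invented for the example:

```python
# Data coupling: only the values actually needed are passed.
def interest(balance, rate):
    return balance * rate

# Stamp coupling: a whole account record is passed, although the
# function uses only the "balance" field.
def interest_stamp(account):
    return account["balance"] * 0.05

# Control coupling: a flag passed in selects the callee's behavior.
def format_amount(amount, as_cents):
    if as_cents:
        return int(amount * 100)
    return round(amount, 2)

print(interest(100.0, 0.05))
print(interest_stamp({"balance": 100.0, "owner": "Ann"}))
print(format_amount(1.5, as_cents=True))
```

Data coupling is the goal: `interest` can be tested and reused without knowing anything about account records or caller-supplied flags.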

Process Specifications – Purpose
• Shows process details, which are implied but not shown in a data flow diagram
• Specifies the input, output, and algorithm of a module in the DFD
• Normally written using pseudo-code


Process Specification – Example
Apply Payment
• For all payments:
  o If the payment is to be applied today or earlier and has not yet been applied:
     Read account
     Read amount
     Add amount to the account's open to buy
     Add amount to the account's balance
     Update the payment as applied
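The pseudo-code above can be rendered as a small executable sketch. The record layouts and field names here are assumptions for illustration only; they follow the pseudo-code step by step, including its "add amount to balance" wording:

```python
from datetime import date

def apply_payments(payments, accounts, today):
    """Executable rendering of the 'Apply Payment' process specification."""
    for p in payments:
        if p["due"] <= today and not p["applied"]:
            acct = accounts[p["account"]]   # Read account
            amount = p["amount"]            # Read amount
            acct["open_to_buy"] += amount   # Add amount to open to buy
            acct["balance"] += amount       # Add amount to balance
            p["applied"] = True             # Update payment as applied

accounts = {"A1": {"open_to_buy": 500.0, "balance": -200.0}}
payments = [{"account": "A1", "amount": 50.0,
             "due": date(2024, 1, 1), "applied": False}]

apply_payments(payments, accounts, today=date(2024, 1, 2))
print(accounts["A1"])  # open_to_buy 550.0, balance -150.0
```

Note that a process specification deliberately stays at this level of detail: it fixes the algorithm without prescribing file formats, screens or storage.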

State Transition Diagram – Purpose
• Shows the time ordering between processes

State Transition Diagram – Notation
• States
• Transitions

State Transition Diagram – Example

Pros of Structure Analysis and Design

• It has distinct milestones, which allows for easier project management tracking
• Very visual – easier for users/programmers to understand
• Makes good use of graphical tools
• Well known in industry
• A mature technique
• Process-oriented approach is a natural way of thinking
• Flexible
• Provides a means of requirements validation
• Relatively simple and easy to read

Context Diagram
Provides a black box overview of the system and the environment

Event list
• Provides guidance for functionality
• Provides a list of system inputs and outputs
• A means of requirements summarization
• Can be used to define test cases

Data Flow Diagram
• Ability to represent data flows
• Functional decomposition – divide and conquer

Data Dictionary
• Simplifies data requirements
• Used at high or low levels of analysis

Entity Relationship Diagram
• Commonly used; well understood
• Graphical tool; easy for analysts to read
• Data objects and relationships are portrayed independently from the processes
• Can be used to design the database architecture
• Effective tool for communicating with DBAs

Process specifications
Express the process specifications in a form that can be verified

State transition diagrams
Models real-time behavior

Structure charts
• Modularity improves system maintainability
• Provides a means for transition from analysis to design
• Provides a synchronous hierarchy of modules


Cons of Structure Analysis and Design
• It ignores non-functional requirements
• Minimal management involvement
• Non-iterative; limited user-analyst interaction
• Doesn't provide a communication process with users
• Hard to decide when to stop decomposing
• Doesn't address stakeholders' needs
• Doesn't work well with object-oriented programming languages

Context Diagram
Does not provide a specific means to determine the scope of the system

Event List
• Does not define all functionality (for example, edits)
• Does not define a specific mechanism for interaction

Data Flow Diagram
• Weak display of input/output detail
• Users find it confusing initially
• Does not represent time
• No implied sequencing
• Data stores are assigned early in the analysis with little deliberation

Data Dictionary
• No functional details
• Formal language is confusing to users

Entity Relationship Diagram
• May be confusing for users; formal notation
• Complex in large systems

Structure charts
• Does not work well for asynchronous processes such as networks
• Could be too large to be effectively understood for large programs

Process specifications
• May be too technical for the users
• Difficult to stay away from describing the current "how"

State Transition Diagrams
Explains what action causes a state change but not when or how often


Where to use Structure Analysis and Design
• In well-known problem domains
• In contract projects where an SRS is specified
• In both real-time systems and transaction processing systems
• Not appropriate when time to market is short

Structure Analysis and Design (SAD) vs. Object Oriented Analysis Design (OOAD)

Similarities:
• Both SAD and OOAD started off from programming techniques
• Both techniques use graphical design and graphical tools to analyze and model the requirements
• Both techniques provide a systematic step-by-step process for developers
• Both techniques focus on documentation of the requirements

Differences:
• SAD is process-oriented
• OOAD is data-oriented
• OOAD encapsulates as much as possible of the system's data and processes into objects

Data flow diagram
A data flow diagram (DFD) is a graphical representation of the "flow" of data through an information system. DFDs can also be used for the visualization of data processing (structured design). On a DFD, data items flow from an external data source or an internal data store to an internal data store or an external data sink, via an internal process. A DFD provides no information about the timing of processes, or about whether processes will operate in sequence or in parallel. It is therefore quite different from a flowchart, which shows the flow of control through an algorithm, allowing a reader to determine what operations will be performed, in what order, and under what circumstances, but not what kinds of data will be input to and output from the system, nor where the data will come from and go to, nor where the data will be stored (all of which are shown on a DFD).


Developing a data flow diagram
Event partitioning approach

[Figure: a context-level data flow diagram created using Select SSADM.] This level shows the overall context of the system and its operating environment, representing the whole system as just one process. It does not usually show data stores, unless they are "owned" by external systems, e.g. accessed by but not maintained by this system; however, these are often shown as external entities.[6]

Level 1 (high level diagram)
This level (level 1) shows all processes at the first level of numbering, the data stores, the external entities and the data flows between them. The purpose of this level is to show the major high-level processes of the system and how they interrelate. A process model will have one, and only one, level-1 diagram. A level-1 diagram must be balanced with its parent context diagram, i.e. there must be the same external entities and the same data flows; these can be broken down into more detail at level 1. For example, the "enquiry" data flow could be split into "enquiry request" and "enquiry results" and still be valid.
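The balancing rule can be sketched as a simple set comparison; the flow names are taken from the enquiry example above, and the representation of a diagram as a set of external flow names is an assumption for illustration:

```python
# Hedged sketch of DFD "balancing": a level-1 diagram is balanced with
# its parent context diagram when both show the same external data flows.

def is_balanced(context_flows, level1_external_flows):
    return set(context_flows) == set(level1_external_flows)

context = {"enquiry", "enquiry results"}
level1  = {"enquiry", "enquiry results"}
print(is_balanced(context, level1))  # True: same external flows
```

Splitting "enquiry" into "enquiry request" and "enquiry results" at level 1 would still be considered valid in practice, so a real balancing check also tracks such documented decompositions rather than demanding literal name equality.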


Level 2 (low level diagram)

[Figure: a level-2 data flow diagram showing the "Process Enquiry" process for the same system.] This level is a decomposition of a process shown in a level-1 diagram; as such, there should be a level-2 diagram for each and every process shown in a level-1 diagram. In this example, processes 1.1, 1.2 and 1.3 are all children of process 1. Together they wholly and completely describe process 1, and combined they must perform the full capacity of this parent process. As before, a level-2 diagram must be balanced with its parent level-1 diagram.

Structure Diagram:
A Data Structure Diagram (DSD) is a data model used to describe conceptual data models by providing graphical notations which document entities, their relationships, and the constraints that bind them. The basic graphic elements of DSDs are boxes, representing entities, and arrows, representing relationships. Data structure diagrams are most useful for documenting complex data entities.


Data Structure Diagram is a diagram type that is used to depict the structure of data elements in the data dictionary. The data structure diagram is a graphical alternative to the composition specifications within such data dictionary entries. Data structure diagrams are an extension of the entity-relationship model (E-R model). In DSDs, attributes are specified inside the entity boxes rather than outside of them, while relationships are drawn as boxes composed of attributes which specify the constraints that bind entities together. The E-R model, while robust, doesn't provide a way to specify the constraints between relationships, and becomes visually cumbersome when representing entities with several attributes. DSDs differ from the E-R model in that the E-R model focuses on the relationships between different entities, whereas DSDs focus on the relationships of the elements within an entity and enable users to fully see the links and relationships between each entity.


Waterfall model
The waterfall model is a sequential design process, often used in software development processes, in which progress is seen as flowing steadily downwards (like a waterfall) through the phases of Conception, Initiation, Analysis, Design, Construction, Testing and Maintenance.

The waterfall development model originates in the manufacturing and construction industries: highly structured physical environments in which after-the-fact changes are prohibitively costly, if not impossible. Since no formal software development methodologies existed at the time, this hardware-oriented model was simply adapted for software development.


In Royce's original waterfall model, the following phases are followed in order:
1. Requirements specification
2. Design
3. Construction (also known as implementation or coding)
4. Integration
5. Testing and debugging (also known as validation)
6. Installation
7. Maintenance

The waterfall model proceeds from one phase to the next in a sequential manner. For example, one first completes the requirements specification, which after sign-off is considered "set in stone." When the requirements are complete, one proceeds to design. The software in question is designed, and a blueprint is drawn for implementers (coders) to follow; this design should be a plan for implementing the requirements given. When the design is complete, an implementation of that design is made by coders. Towards the later stages of this implementation phase, the separate software components produced are combined to introduce new functionality and to reduce risk through the removal of errors.

Prototyping Software Life Cycle Model
The goal of prototyping-based development is to counter the first two limitations of the waterfall model discussed earlier. The basic idea is that instead of freezing the requirements before design or coding can proceed, a throwaway prototype is built to understand the requirements. This prototype is developed based on the currently known requirements. Development of the prototype obviously undergoes design, coding and testing, but each of these phases is not done very formally or thoroughly. By using this prototype, the client can get an "actual feel" of the system, since interactions with the prototype can enable the client to better understand the requirements of the desired system. Prototyping is an attractive idea for complicated and large systems for which there is no manual process or existing system to help determine the requirements. In such situations, letting the client "play" with the prototype provides invaluable and intangible inputs which help in determining the requirements for the system. It is also an effective method to demonstrate the feasibility of a certain approach. This might be needed for novel systems where it is not clear that constraints can be met or that algorithms can be developed to implement the requirements. The process model of the prototyping approach is shown in the figure below.

Prototyping Model

The basic reason prototyping is not more commonly used is the cost involved in this build-it-twice approach. However, some argue that prototyping need not be very costly and can actually reduce the overall development cost. Prototypes are usually not complete systems, and many of the details are not built into the prototype; the goal is to provide a system with overall functionality. In addition, the costs of testing and of writing detailed documents are reduced. These factors help to reduce the cost of developing the prototype. On the other hand, the experience of developing the prototype will be very useful for developers when developing the final system. This experience helps to reduce the cost of development of the final system and results in a more reliable and better-designed system.

Advantages of Prototyping
1. Users are actively involved in the development.
2. It provides a better system to users, as users have a natural tendency to change their minds when specifying requirements, and this method of developing systems supports that tendency.
3. Since a working model of the system is provided, the users get a better understanding of the system being developed.
4. Errors can be detected much earlier, as the system is made side by side.
5. Quicker user feedback is available, leading to better solutions.

Disadvantages of Prototyping
1. Leads to an implement-and-then-repair way of building systems.
2. Practically, this methodology may increase the complexity of the system, as the scope of the system may expand beyond original plans.

Characteristics and limitations of prototypes
Engineers and prototyping specialists seek to understand the limitations of prototypes to exactly simulate the characteristics of their intended design. A degree of skill and experience is necessary to effectively use prototyping as a design verification tool. It is important to realize that by their very definition, prototypes will represent some compromise from the final production design. Due to differences in materials, processes and design fidelity, it is possible that a prototype may fail to perform acceptably whereas the production design may have been sound. A counter-intuitive idea is that prototypes may actually perform acceptably whereas the production design may be flawed since prototyping materials and processes may occasionally outperform their production counterparts. In general, it can be expected that individual prototype costs will be substantially greater than the final production costs due to inefficiencies in materials and processes. Prototypes are also used to revise the design for the purposes of reducing costs through optimization and refinement. It is possible to use prototype testing to reduce the risk that a design may not perform acceptably, however prototypes generally cannot eliminate all risk. There are pragmatic and practical limitations to the ability of a prototype to match the intended final performance of the product and some allowances and engineering judgment are often required before moving forward with a production design. Building the full design is often expensive and can be time-consuming, especially when repeated several times— building the full design, figuring out what the problems are and how to solve them, then building another full design. As an alternative, "rapid-prototyping" or "rapid application development" techniques are used for the initial prototypes, which implement part, but not all, of the complete design. 
This allows designers and manufacturers to rapidly and inexpensively test the parts of the design that are most likely to have problems, solve those problems, and then build the full design.


Spiral model
The spiral model is a software development process combining elements of both design and prototyping-in-stages, in an effort to combine the advantages of top-down and bottom-up concepts. Also known as the spiral lifecycle model (or spiral development), it is a systems development method (SDM) used in information technology (IT). This model combines the features of the prototyping model and the waterfall model. The spiral model is intended for large, expensive and complicated projects.

The steps in a spiral model iteration can be generalized as follows:
1. The system requirements are defined in as much detail as possible. This usually involves interviewing a number of users representing all the external or internal users and other aspects of the existing system.
2. A preliminary design is created for the new system. This phase is the most important part of the spiral model. In this phase all possible (and available) alternatives which can help in developing a cost-effective project are analyzed, and strategies for using them are decided. This phase was added specifically to identify and resolve all the possible risks in the project development. If risks indicate any uncertainty in requirements, prototyping may be used to proceed with the available data and find a possible solution to deal with the potential changes in the requirements.
3. A first prototype of the new system is constructed from the preliminary design. This is usually a scaled-down system, and represents an approximation of the characteristics of the final product.
4. A second prototype is evolved by a fourfold procedure:
   1. evaluating the first prototype in terms of its strengths, weaknesses, and risks;
   2. defining the requirements of the second prototype;
   3. planning and designing the second prototype;
   4. constructing and testing the second prototype.

The spiral model is mostly used in large projects. For smaller projects, the concept of agile software development is becoming a viable alternative. The US military adopted the spiral model for its Future Combat Systems (FCS) program, which had a two-year iteration (spiral) and should have resulted in three consecutive prototypes (one prototype per spiral, every two years); the project was canceled in May 2009, after six years (2003-2009). The spiral model thus may suit small (up to $3 million) software applications rather than a complicated ($3 billion) distributed, interoperable system of systems. It is also reasonable to use the spiral model in projects where business goals are unstable but the architecture must be realized well enough to support high loading and stress. For example, Spiral Architecture Driven Development is a spiral-based SDLC that shows a possible way to reduce the risk of an ineffective architecture with the help of the spiral model in conjunction with best practices from other models.

Project Roles and Responsibilities
Projects of different sizes have different requirements for how people are organized. In a small project, little organizational structure is needed: there might be a primary sponsor, a project manager and a project team. However, for large projects, there are more and


more people involved, and it is important that people understand what they are expected to do and what role they are expected to fill. This section identifies some of the common (and not so common) project roles that may be required for your project.

Analyst. The analyst is responsible for ensuring that the requirements of the business clients are captured and documented correctly before a solution is developed and implemented. In some companies, this person might be called a Business Analyst, Business Systems Analyst, Systems Analyst or Requirements Analyst. For more information on this role see 407.2 The Role of an Analyst.

Change Control Board. The Change Control Board is usually made up of a group of decision makers authorized to accept changes to the project's requirements, budget, and timelines. This organization is helpful if the project directly impacts a number of functional areas and the sponsor wants to share the scope-change authority with this broader group. The details of the Change Control Board and the processes they follow are defined in the project management processes.

Client. These are the people (or groups) that are the direct beneficiaries of a project or service: the people for whom the project is being undertaken. (Indirect beneficiaries are probably stakeholders.) These might also be called "customers", but if they are internal to the company, Lifecycle Step refers to them generically as clients. If they are outside your company, they would be referred to as "customers".

Client Project Manager. If the project is large enough, the client may have a primary contact who is designated as a comparable project manager. As an example, if this were an IT project, the IT project manager would have overall responsibility for the IT solution. However, there may also be projects on the client side that are needed to support the initiative, and the client project manager would be responsible for those.
The IT project manager and the client project manager would be peers who work together to build and implement the complete solution.

Designer. The designer is responsible for understanding the business requirements and designing a solution that will meet the business needs. There are many potential solutions that will meet the client's needs; the designer determines the best approach. A designer typically needs to understand how technology can be used to create this optimum solution for the client. The designer determines the overall model and framework for the solution, down to the level of designing screens, reports, programs and other components. They also determine the data needs. The work of the designer is then handed off to the programmers and other people who will construct the solution based on the design specifications.

Project Manager. This is the person with authority to manage a project. This includes leading the planning and the development of all project deliverables. The project manager is responsible for managing the budget and schedule and all project management procedures (scope management, issues management, risk management, etc.).

Project Team. The project team consists of the full-time and part-time resources assigned to work on the deliverables of the project. This includes the analysts, designers, programmers, etc. They are responsible for:
• Understanding the work to be completed
• Planning the assigned activities in more detail if needed
• Completing assigned work within the budget, timeline and quality expectations
• Informing the project manager of issues, scope changes, and risk and quality concerns


• Proactively communicating status and managing expectations

The project team can consist of human resources within one functional organization, or it can consist of members from many different functional organizations. A cross-functional team has members from multiple organizations; having a cross-functional team is usually a sign that your organization is utilizing matrix management.

Sponsor (Executive Sponsor and Project Sponsor). This is the person who has ultimate authority over the project. The Executive Sponsor provides project funding, resolves issues and scope changes, approves major deliverables and provides high-level direction. They also champion the project within their organization. Depending on the project and the organizational level of the Executive Sponsor, they may delegate day-to-day tactical management to a Project Sponsor. If assigned, the Project Sponsor represents the Executive Sponsor on a day-to-day basis and makes most of the decisions requiring sponsor approval. If a decision is large enough, the Project Sponsor will take it to the Executive Sponsor for resolution.

Stakeholder. These are the specific people or groups who have a stake, or an interest, in the outcome of the project. Normally stakeholders are from within the company, and could include internal clients, management, employees, administrators, etc. A project may also have external stakeholders, including suppliers, investors, community groups and government organizations.

Steering Committee. A Steering Committee is a group of high-level stakeholders who are responsible for providing guidance on overall strategic direction. They do not take the place of a Sponsor, but help to spread the strategic input and buy-in to a larger portion of the organization. The Steering Committee is usually made up of organizational peers, and is a combination of direct clients and indirect stakeholders.
The members of the Steering Committee may also sit on the Change Control Board, although in many cases the Change Control Board is made up of representatives of the Steering Committee.

Suppliers / Vendors. Although some companies may have internal suppliers, in the Lifecycle Step process these terms always refer to third-party companies, or specific people who work for third parties. They may be subcontractors working under your direction, or they may be supplying material, equipment, hardware, software or supplies to your project. Depending on their role, they may need to be identified on your organization chart. For instance, if you are partnering with a supplier to develop your requirements, you probably want them on your organization chart. On the other hand, if they are a vendor supplying a common piece of hardware, you probably would not consider them part of the team.

Users. These are the people who will actually use the deliverables of the project. Users are often involved heavily in the project, in activities such as defining business requirements; in other cases, they may not get involved until the testing process. Sometimes you want to specifically identify the user organization or the specific users of the solution and assign a formal set of responsibilities to them, such as developing use cases or user scenarios based on the needs of the business requirements.

Responsibility Matrix
In a large project, there may be many people who have some role in the creation and approval of project deliverables. Sometimes this is pretty straightforward, such as one person writing a document and one person approving it. In other cases, there may be many people who have a hand in the creation, and others that need to have varying levels of approval. The Responsibility Matrix is a technique used to define the general responsibilities for each role on a project. The matrix can then be used to communicate the roles to the appropriate people associated with the team. This helps set expectations, and ensures people know what is expected from them.


On the matrix, the different people, or roles, appear as columns, with the specific deliverables in question listed as rows. Then, use the intersecting points to describe each person's responsibility for each deliverable. A simple example matrix follows:

Deliverable                       | Project Sponsor | Project Manager | Client Managers | Project Team | Analysts
Requirements Management Plan      | A               | C               | A               | R            | R
Requirements Report               | I, A            | R               | I, A            | R            | C
Process Model                     | R               | R               | I, A            | R            | C
Data Model                        | R               | R               | I, A            | R            | C
Requirements Traceability Matrix  | R               | R               | R               | R            | C

• A – Approves the deliverable
• R – Reviews the deliverable (and provides feedback)
• C – Creates the deliverable (could be C(1) for primary, C(2) for backup); usually there is only one person responsible for creating a deliverable, although many people may provide input
• I – Provides input
• N – Is notified when a deliverable is complete
• M – Manages the deliverable (such as a librarian, or the person responsible for the document repository)

In the table above, the Requirements Management Plan is created by the project manager, approved by the sponsor and client managers, and reviewed by the project team and analysts. The purpose of the matrix is to gain clarity and agreement on who does what, so you can define the columns with as much detail as makes sense. For instance, in the above example, the 'Project Team' could have been broken into specific people, or the person responsible for creating the Data Model could have been broken out into a separate column. After the matrix is completed, it should be circulated for approval. If it is done in the Project Charter process, it can be an addendum to the Project Charter. If it is created as part of the initial Analysis Phase, it should be circulated as a separate document. The responsibility codes shown above are examples; your project may define different codes, as long as you explain what they mean so that people know what is expected of them.
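As an illustration only, a responsibility matrix can also be captured as a data structure so that assignments can be queried programmatically; this sketch uses two rows from the sample matrix and is not part of the original methodology:

```python
# Two deliverables from the sample responsibility matrix, keyed by role.
matrix = {
    "Requirements Management Plan": {
        "Project Sponsor": "A", "Project Manager": "C",
        "Client Managers": "A", "Project Team": "R", "Analysts": "R",
    },
    "Data Model": {
        "Project Sponsor": "R", "Project Manager": "R",
        "Client Managers": "I, A", "Project Team": "R", "Analysts": "C",
    },
}

def creators(deliverable):
    """Return the roles tagged 'C' (creates) for a deliverable."""
    return [role for role, code in matrix[deliverable].items()
            if "C" in code]

print(creators("Requirements Management Plan"))  # ['Project Manager']
print(creators("Data Model"))                    # ['Analysts']
```

A query like this makes the matrix's main check easy to automate: every deliverable should have exactly one creating role.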

Database Administrator Roles and Responsibilities
A Database Administrator, Database Analyst or Database Developer is the person responsible for managing the information within an organization. As most companies continue to experience inevitable growth of their databases, these positions are probably among the most solid within the IT industry. In most cases, it is not an area that is targeted for layoffs or downsizing. On the downside, however, most database departments are often understaffed, requiring administrators to perform a multitude of tasks.

Depending on the company and the department, this role can either be highly specialized or incredibly diverse. The primary role of the Database Administrator is to administer, develop, maintain and implement the policies and procedures necessary to ensure the security and integrity of the corporate database. Sub-roles within the Database Administrator classification may include security, architecture, warehousing and/or business analysis. Other primary roles will include:
• Implementation of data models
• Database design
• Database accessibility
• Performance issues
• Capacity issues
• Data replication
• Table maintenance

Database Administrators are often on-call and required to work as needed. This position carries an enormous amount of responsibility.

Potential opportunities for advancement:
Typically, Database Administrators either move into management or more specialized roles such as data warehousing, security or analysis.

Educational Requirements:
Normally, Database Administrator positions require a college degree, 1-3 years of technical experience and 1-3 months of formal training. Potential candidates should have excellent mathematical skills, and will be required to work independently. Database Administrators will be required to work within the framework of the organization, making teamwork a must. Candidates should also have the ability to change directions, priorities and focus quickly without losing productivity. Most positions require expertise in one or several of the following systems:
• Windows NT
• Oracle 8 Enterprise Server
• TMS
• PL/SQL
• UNIX
• Informix
• DB2

It is an excellent idea to focus on one specialty area. Security, for example, is a very hot area today. A specialization will not only increase your value but will also enhance your marketability. A Bachelor's degree in computer science or M.I.S. is preferable to a general degree and will add weight to your resume, salary negotiations and advancement potential. It is also extremely important to stay current regarding the IT industry and potential changes. A strong background in business is a definite plus, as is any previous experience in data modeling, fulfillment, mail order or distribution.

Industry Certifications:
It is highly recommended that any potential Database Administrator take the time to obtain certifications. Microsoft and Oracle seem to be the favored ones here. Oracle DBAs are currently earning some of the highest six-figure salaries on the West Coast. An added plus is to diversify training and certifications within design, programming and data warehousing. The most recommended certifications include:
• Master CIW Enterprise Developer
• MCDBA
• Oracle8i DBA OCP
• Oracle9i DBA OCA
• Oracle9i DBA OCP
• Oracle9i OCM
• Security+

Database design
Database design is the process of producing a detailed data model of a database. This data model contains all the logical and physical design choices and physical storage parameters needed to generate a design in a Data Definition Language, which can then be used to create a database. A fully attributed data model contains detailed attributes for each entity.

The term database design can be used to describe many different parts of the design of an overall database system. Principally, and most correctly, it can be thought of as the logical design of the base data structures used to store the data. In the relational model these are the tables and views. In an object database the entities and relationships map directly to object classes and named relationships. However, the term database design could also be used to apply to the overall process of designing not just the base data structures, but also the forms and queries used as part of the overall database application within the database management system (DBMS). The process of doing database design generally consists of a number of steps which will be carried out by the database designer. Usually, the designer must:
• Determine the relationships between the different data elements.
• Superimpose a logical structure upon the data on the basis of these relationships.


ER Diagram (Entity-relationship model)

Database designs also include ER (entity-relationship) diagrams. An ER diagram is a diagram that helps to design databases in an efficient way. Attributes in ER diagrams are usually modeled as an oval with the name of the attribute, linked to the entity or relationship that contains the attribute. Within the relational model the final step can generally be broken down into two further steps: determining the grouping of information within the system (generally determining what the basic objects are about which information is being stored), and then determining the relationships between these groups of information, or objects. This step is not necessary with an object database.
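The step from an ER model to a Data Definition Language mentioned above can be sketched in code. The following is a minimal, illustrative Python sketch: the entity and attribute names are invented for the example, and real CASE tools generate far richer DDL.

```python
# Minimal sketch: turning entity definitions (as captured from an ER
# diagram) into Data Definition Language statements.
# Entity and attribute names here are illustrative only.
entities = {
    "Product": {"product_id": "INTEGER PRIMARY KEY", "name": "TEXT"},
    "Order":   {"order_id": "INTEGER PRIMARY KEY", "product_id": "INTEGER"},
}

def to_ddl(entity, attrs):
    """Render one entity as a CREATE TABLE statement."""
    cols = ", ".join(f"{col} {typ}" for col, typ in attrs.items())
    return f'CREATE TABLE "{entity}" ({cols});'

for name, attrs in entities.items():
    print(to_ddl(name, attrs))
```

Each attribute of the entity becomes a column, and the entity itself becomes a table, which is exactly the mapping the relational model prescribes.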

The Design Process
The design process consists of the following steps:
1. Determine the purpose of your database - This helps prepare you for the remaining steps.
2. Find and organize the information required - Gather all of the types of information you might want to record in the database, such as product name and order number.
3. Divide the information into tables - Divide your information items into major entities or subjects, such as Products or Orders. Each subject then becomes a table.
4. Turn information items into columns - Decide what information you want to store in each table. Each item becomes a field, and is displayed as a column in the table. For example, an Employees table might include fields such as Last Name and Hire Date.
5. Specify primary keys - Choose each table's primary key. The primary key is a column that is used to uniquely identify each row. An example might be Product ID or Order ID.
6. Set up the table relationships - Look at each table and decide how the data in one table is related to the data in other tables. Add fields to tables or create new tables to clarify the relationships, as necessary.
7. Refine your design - Analyze your design for errors. Create the tables and add a few records of sample data. See if you can get the results you want from your tables. Make adjustments to the design, as needed.
8. Apply the normalization rules - Apply the data normalization rules to see if your tables are structured correctly. Make adjustments to the tables.
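Steps 3 through 6 above can be sketched concretely with SQLite (via Python's standard sqlite3 module), using the Products/Orders example from the text; the column names and sample data are illustrative.

```python
import sqlite3

# Steps 3-6 of the design process, sketched with SQLite:
# tables (step 3), columns (step 4), primary keys (step 5),
# and a table relationship via a foreign-key column (step 6).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Products (ProductID INTEGER PRIMARY KEY, Name TEXT)")
con.execute("""CREATE TABLE Orders (
    OrderID INTEGER PRIMARY KEY,
    ProductID INTEGER REFERENCES Products(ProductID))""")
con.execute("INSERT INTO Products VALUES (1, 'Widget')")
con.execute("INSERT INTO Orders VALUES (100, 1)")

# The relationship lets us recover which product each order refers to.
row = con.execute("""SELECT o.OrderID, p.Name FROM Orders o
                     JOIN Products p ON p.ProductID = o.ProductID""").fetchone()
print(row)  # (100, 'Widget')
```

Step 7 (refining the design with sample data, as done here) and step 8 (normalization) would then be applied on top of this skeleton.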

Determining data to be stored
In a majority of cases, a person who is doing the design of a database is a person with expertise in the area of database design, rather than expertise in the domain from which the data to be stored is drawn e.g. financial information, biological information etc. Therefore the data to be stored in the database must be determined in cooperation with a person who does have expertise in that domain, and who is aware of what data must be stored within the system. This process is one which is generally considered part of requirements analysis, and requires skill on the part of the database designer to elicit the needed information from those with the domain knowledge. This is because those with the necessary domain knowledge frequently cannot express clearly what their system requirements for the database are as they are unaccustomed to thinking in terms of the discrete data elements which must be stored. Data to be stored can be determined by Requirement Specification.

Normalization
In the field of relational database design, normalization is a systematic way of ensuring that a database structure is suitable for general-purpose querying and free of certain undesirable characteristics—insertion, update, and deletion anomalies—that could lead to a loss of data integrity. A standard piece of database design guidance is that the designer should create a fully normalized design; selective denormalization can subsequently be performed, but only for performance reasons. However, some modeling disciplines, such as the dimensional modeling approach to data warehouse design, explicitly recommend non-normalized designs, i.e. designs that in large part do not adhere to 3NF.
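The update anomaly that normalization removes can be shown in a few lines. The following sketch uses invented parts/suppliers data: in the denormalized form a supplier's city is repeated on every row, so a partial update corrupts the data, while the normalized split stores the city exactly once.

```python
# Denormalized: the supplier's city is repeated on every (part, supplier) row.
denormalized = [
    ("bolt", "Acme", "Leeds"),
    ("nut",  "Acme", "Leeds"),
]
denormalized[0] = ("bolt", "Acme", "York")   # updated one row, missed the other
inconsistent = len({city for _, _, city in denormalized}) > 1
print(inconsistent)  # True: the same supplier now has two different cities

# Normalized (a 3NF-style split): the city is stored once, keyed by supplier.
parts = [("bolt", "Acme"), ("nut", "Acme")]
suppliers = {"Acme": "Leeds"}
suppliers["Acme"] = "York"                   # one update, no anomaly possible
print(suppliers["Acme"])  # York
```

Insertion and deletion anomalies follow the same pattern: in the denormalized table you cannot record a supplier with no parts, and deleting a supplier's last part silently deletes the supplier's city as well.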

Types of Database design
Conceptual schema
Once a database designer is aware of the data which is to be stored within the database, they must then determine where dependency exists within the data. Sometimes when data is changed you can be changing other data that is not visible. For example, in a list of names and addresses, assuming a situation where multiple people can have the same address, but one person cannot have more than one address, the address is dependent upon the name: given a name there is exactly one address, although a given address may be shared by several names. The other way around is different: one attribute can change without the other changing. (NOTE: A common misconception is that the relational model is so called because of the stating of relationships between data elements therein. This is not true. The relational model is so named because it is based upon the mathematical structures known as relations.)
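The functional dependency described above (name determines address) can be checked mechanically. This is a minimal sketch with invented sample rows; the helper function is hypothetical.

```python
# Checking the dependency from the example: each name determines exactly
# one address (name -> address), while one address may serve many names.
rows = [  # illustrative (name, address) pairs
    ("Alice", "1 High St"),
    ("Bob",   "1 High St"),   # same address, different people: allowed
    ("Alice", "1 High St"),   # duplicate row: still one address per name
]

def determines(rows):
    """True if the first column functionally determines the second."""
    seen = {}
    for name, addr in rows:
        if seen.setdefault(name, addr) != addr:
            return False
    return True

print(determines(rows))  # True
```

If Alice appeared with two different addresses, the check would return False, signalling that name does not determine address in that data set.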

Logically structuring data
Once the relationships and dependencies amongst the various pieces of information have been determined, it is possible to arrange the data into a logical structure which can then be mapped into the storage objects supported by the database management system. In the case of relational databases the storage objects are tables which store data in rows and columns. Each table may represent an implementation of either a logical object or a relationship joining one or more instances of one or more logical objects. Relationships between tables may then be stored as links connecting child tables with parents. Since complex logical relationships are themselves tables they will probably have links to more than one parent.

In an Object database the storage objects correspond directly to the objects used by the Object-oriented programming language used to write the applications that will manage and access the data. The relationships may be defined as attributes of the object classes involved or as methods that operate on the object classes.

Physical database design
The physical design of the database specifies the physical configuration of the database on the storage media. This includes detailed specification of data elements, data types, indexing options and other parameters residing in the DBMS data dictionary. It is the detailed design of a system that includes the modules and the hardware and software specifications of the system.
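Indexing, mentioned above as a physical design choice, can be illustrated with SQLite: the logical table is unchanged, but the index changes how the engine physically finds rows. Table, column, and index names here are illustrative.

```python
import sqlite3

# Physical-design sketch: the same logical table, plus an index added as
# a physical design choice to speed lookups by last name.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Employees (EmpID INTEGER PRIMARY KEY, LastName TEXT)")
con.execute("CREATE INDEX idx_lastname ON Employees(LastName)")
con.executemany("INSERT INTO Employees VALUES (?, ?)",
                [(1, "Smith"), (2, "Jones")])

# EXPLAIN QUERY PLAN reveals whether SQLite chose the index for a query.
plan = con.execute("EXPLAIN QUERY PLAN SELECT * FROM Employees "
                   "WHERE LastName = 'Smith'").fetchall()
print(plan)
```

The plan output mentions idx_lastname, showing that the query is answered via the index rather than a full table scan; dropping the index would change the plan but not the query's results, which is the essence of the logical/physical split.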

Systems Development Life Cycle
The Systems Development Life Cycle (SDLC), or Software Development Life Cycle in systems engineering, information systems and software engineering, is the process of creating or altering systems, and the models and methodologies that people use to develop these systems. The concept generally refers to computer or information systems. In software engineering the SDLC concept underpins many kinds of software development methodologies. These methodologies form the framework for planning and controlling the creation of an information system.

System development phases

The System Development Life Cycle framework provides a sequence of activities for system designers and developers to follow. It consists of a set of steps or phases in which each phase of the SDLC uses the results of the previous one. A Systems Development Life Cycle (SDLC) adheres to important phases that are essential for developers, such as planning, analysis, design, and implementation, and are explained in the section below. A number of system development life cycle (SDLC) models have been created: waterfall, fountain, spiral, build and fix, rapid prototyping, incremental, and synchronize and stabilize. The oldest of these, and the best known, is the waterfall model: a sequence of stages in which the output of each stage becomes the input for the next. These stages can be characterized and divided up in different ways, including the following:

• Project planning, feasibility study: Establishes a high-level view of the intended project and determines its goals.
• Systems analysis, requirements definition: Refines project goals into defined functions and operation of the intended application. Analyzes end-user information needs.
• Systems design: Describes desired features and operations in detail, including screen layouts, business rules, process diagrams, pseudocode and other documentation.
• Implementation: The real code is written here.
• Integration and testing: Brings all the pieces together into a special testing environment, then checks for errors, bugs and interoperability.
• Acceptance, installation, deployment: The final stage of initial development, where the software is put into production and runs actual business.
• Maintenance: What happens during the rest of the software's life: changes, correction, additions, moves to a different computing platform and more. This, the least glamorous and perhaps most important step of all, goes on seemingly forever.
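The defining property of the waterfall model (the output of each stage becomes the input of the next) can be sketched as a simple pipeline. The stage functions below are toy placeholders, not a real development process.

```python
# The waterfall model as a pipeline: each stage consumes the artifact
# produced by the previous one. Stage functions are illustrative stubs.
def planning(idea):        return f"goals({idea})"
def analysis(goals):       return f"requirements({goals})"
def design(requirements):  return f"spec({requirements})"
def implementation(spec):  return f"code({spec})"

stages = [planning, analysis, design, implementation]
artifact = "project idea"
for stage in stages:
    artifact = stage(artifact)   # output of one stage feeds the next
print(artifact)  # code(spec(requirements(goals(project idea))))
```

The strict nesting of the final artifact makes the model's weakness visible too: a change to the requirements forces every downstream artifact to be rebuilt, which is why iterative models were later proposed.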


These stages of the Systems Development Life Cycle can also be divided into ten steps, from definition through the creation and modification of IT work products:

The tenth phase occurs when the system is disposed of and the task performed is either eliminated or transferred to other systems. The tasks and work products for each phase are described in subsequent chapters.[7] Not every project will require that the phases be sequentially executed. However, the phases are interdependent. Depending upon the size and complexity of the project, phases may be combined or may overlap.[7]

System analysis
The goal of system analysis is to determine where the problem is, in an attempt to fix the system. This step involves breaking the system down into different pieces to analyze the situation: analyzing project goals, breaking down what needs to be created, and attempting to engage users so that definite requirements can be defined. Requirements analysis sometimes requires individuals or teams from both the client and service provider sides to obtain detailed and accurate requirements; often there has to be a great deal of communication back and forth to understand these requirements. Requirement gathering is the most crucial aspect, as communication gaps frequently arise in this phase, and these lead to validation errors and bugs in the software program.

Systems design
In systems design the design functions and operations are described in detail, including screen layouts, business rules, process diagrams and other documentation. The output of this stage will describe the new system as a collection of modules or subsystems. The design stage takes as its initial input the requirements identified in the approved requirements document. For each requirement, a set of one or more design elements will be produced as a result of interviews, workshops, and/or prototype efforts. Design elements describe the desired software features in detail, and generally include functional hierarchy diagrams, screen layout diagrams, tables of business rules, business process diagrams, pseudocode, and a complete entity-relationship diagram with a full data dictionary. These design elements are intended to describe the software in sufficient detail that skilled programmers may develop the software with minimal additional input.

Implementation
Modular and subsystem programming code will be accomplished during this stage. Unit testing and module testing are done in this stage by the developers. This stage is intermingled with the next, in that individual modules will need testing before integration into the main project.

Testing
The code is tested at various levels in software testing. Unit, system and user acceptance testing are often performed. This is a grey area, as many different opinions exist as to what the stages of testing are and how much, if any, iteration occurs. Iteration is not generally part of the waterfall model, but some usually occurs at this stage. In testing, the whole system is tested one part at a time. The following are types of testing:
• Defect testing
• Path testing
• Data set testing
• Unit testing
• System testing
• Integration testing
• Black box testing
• White box testing
• Regression testing
• Automation testing
• User acceptance testing
• Performance testing
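Unit testing, the first level in the list above, can be illustrated with Python's standard unittest module. The function under test is a hypothetical business rule invented for the example.

```python
import unittest

def apply_discount(price, percent):
    """Return price reduced by percent (a hypothetical business rule)."""
    if not 0 <= percent <= 100:
        raise ValueError("percent out of range")
    return round(price * (100 - percent) / 100, 2)

class TestDiscount(unittest.TestCase):
    def test_basic(self):
        # Normal case: 10% off 200.0 is 180.0.
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_invalid(self):
        # Invalid input must be rejected, not silently computed.
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

suite = unittest.TestLoader().loadTestsFromTestCase(TestDiscount)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```

System, integration and acceptance testing then exercise the same behaviour at progressively larger scopes, per the list above, rather than one function at a time.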

Operations and maintenance
This phase covers changes and enhancements to the deployed system, up to the decommissioning or sunset of the system. Maintaining the system is an important aspect of the SDLC. As key personnel change positions in the organization, new changes will be implemented, which will require system updates.

Systems Analysis and Design
Systems Analysis and Design (SAD) is the process of developing Information Systems (IS) that effectively use hardware, software, data, processes, and people to support the company's business objectives.

Systems development life cycle topics
Management and control

The Systems Development Life Cycle (SDLC) phases serve as a programmatic guide to project activity and provide a flexible but consistent way to conduct projects to a depth matching the scope of the project. Each of the SDLC phase objectives is described in this section with key deliverables, a description of recommended tasks, and a summary of related control objectives for effective management. It is critical for the project manager to establish and monitor control objectives during each SDLC phase while executing projects. Control objectives help to provide a clear statement of the desired result or purpose and should be used throughout the entire SDLC process. Control objectives can be grouped into major categories (domains) that relate to the SDLC phases. To manage and control any SDLC initiative, each project will be required to establish some degree of a Work Breakdown Structure (WBS) to capture and schedule the work necessary to complete the project. The WBS and all programmatic material should be kept in the "Project Description" section of the project notebook. The WBS format is mostly left to the project manager to establish in a way that best describes the project work, but there are some key areas that must be defined in the WBS as part of the SDLC policy.

Work breakdown structured organization

The upper section of the Work Breakdown Structure (WBS) should identify the major phases and milestones of the project in a summary fashion. In addition, the upper section should provide an overview of the full scope and timeline of the project and will be part of the initial project description effort leading to project approval. The middle section of the WBS is based on the seven Systems Development Life Cycle (SDLC) phases as a guide for WBS task development. The WBS elements should consist of milestones and "tasks" as opposed to "activities", and have a definitive period (usually two weeks or more). Each task must have a measurable output (e.g. a document, decision, or analysis). A WBS task may rely on one or more activities (e.g. software engineering, systems engineering) and may require close coordination with other tasks, either internal or external to the project. Any part of the project needing support from contractors should have a Statement of Work (SOW) written to include the appropriate tasks from the SDLC phases. The development of a SOW does not occur during a specific phase of the SDLC but is developed to include the work from the SDLC process that may be conducted by external resources such as contractors.
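The shape of a WBS (summary phases at the top, measurable tasks beneath) can be sketched as a small tree structure. The phase names, task names, and durations below are invented for illustration.

```python
# Sketch of a Work Breakdown Structure: phases map to tasks, and each
# task has a measurable output and a duration in weeks (illustrative).
wbs = {
    "Planning": {"Feasibility study": 2, "Project charter": 1},
    "Analysis": {"Requirements document": 4},
    "Design":   {"Data model": 3, "Screen layouts": 2},
}

def total_weeks(wbs):
    """Roll task durations up to a project total, as a scheduler would."""
    return sum(sum(tasks.values()) for tasks in wbs.values())

print(total_weeks(wbs))  # 12
```

Rolling durations and costs up from tasks to phases to the project total is the basic computation behind the schedule and budget views that project managers build on a WBS.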

Baselines in the SDLC
Baselines are an important part of the Systems Development Life Cycle (SDLC). These baselines are established after four of the five phases of the SDLC and are critical to the iterative nature of the model. Each baseline is considered a milestone in the SDLC.
• Functional Baseline: established after the conceptual design phase.
• Allocated Baseline: established after the preliminary design phase.
• Product Baseline: established after the detail design and development phase.
• Updated Product Baseline: established after the production construction phase.

Complementary to SDLC
Complementary software development methods to the Systems Development Life Cycle (SDLC) are:
• Software Prototyping
• Joint Applications Design (JAD)
• Rapid Application Development (RAD)
• Extreme Programming (XP); an extension of earlier work in Prototyping and RAD
• Open Source Development
• End-user development
• Object Oriented Programming


Computer-aided software engineering
Computer-aided software engineering (CASE) is the scientific application of a set of tools and methods to a software system which is meant to result in high-quality, defect-free, and maintainable software products. It also refers to methods for the development of information systems together with automated tools that can be used in the software development process.

Supporting software
Alfonso Fuggetta classified CASE into three categories:
1. Tools support only specific tasks in the software process.
2. Workbenches support only one or a few activities.
3. Environments support (a large part of) the software process.
Workbenches and environments are generally built as collections of tools. Tools can therefore be either stand-alone products or components of workbenches and environments.

CASE tools automate many of the activities involved in various life cycle phases. For example, when establishing the functional requirements of a proposed application, prototyping tools can be used to develop graphic models of application screens to assist end users to visualize how an application will look after development. Subsequently, system designers can use automated design tools to transform the prototyped functional requirements into detailed design documents. Programmers can then use automated code generators to convert the design documents into code. Automated tools can be used collectively, as mentioned, or individually. For example, prototyping tools could be used to define application requirements that get passed to design technicians who convert the requirements into detailed designs in a traditional manner using flowcharts and narrative documents, without the assistance of automated design software. Existing CASE tools can be classified along four different dimensions:
1. Life-cycle support
2. Integration dimension
3. Construction dimension
4. Knowledge-based CASE dimension

Let us take the meaning of these dimensions along with their examples one by one:

Life-Cycle Based CASE Tools
This dimension classifies CASE tools on the basis of the activities they support in the information systems life cycle. They can be classified as Upper or Lower CASE tools.
• Upper CASE Tools support strategic planning and construction of concept-level products and ignore the design aspect. They support traditional diagrammatic languages such as ER diagrams, data flow diagrams, structure charts, decision trees, decision tables, etc.
• Lower CASE Tools concentrate on the back-end activities of the software life cycle, such as physical design, debugging, construction, testing, component integration, maintenance, reengineering and reverse engineering.

Integration dimension
Three main CASE integration dimensions have been proposed:
1. CASE Framework
2. ICASE Tools
3. Integrated Project Support Environment (IPSE)

Workbenches integrate several CASE tools into one application to support specific software-process activities. Hence they achieve:
• a homogeneous and consistent interface (presentation integration),
• easy invocation of tools and tool chains (control integration), and
• access to a common data set managed in a centralized way (data integration).

CASE workbenches can be further classified into the following eight classes:
1. Business planning and modeling
2. Analysis and design
3. User-interface development
4. Programming
5. Verification and validation
6. Maintenance and reverse engineering
7. Configuration management
8. Project management

An environment is a collection of CASE tools and workbenches that supports the software process. CASE environments are classified based on the focus/basis of integration:
1. Toolkits
2. Language-centered
3. Integrated
4. Fourth generation
5. Process-centered

Toolkits
Toolkits are loosely integrated collections of products easily extended by aggregating different tools and workbenches. Typically, the support provided by a toolkit is limited to programming, configuration management and project management. The toolkit itself is an environment extended from a basic set of operating system tools, for example, the Unix Programmer's Work Bench and the VMS VAX Set. In addition, toolkits' loose integration requires users to activate tools by explicit invocation or simple control mechanisms. The resulting files are unstructured and could be in different formats, so accessing a file from different tools may require explicit file format conversion. However, since the only constraint for adding a new component is the format of the files, toolkits can be easily and incrementally extended.

Language-centered
The environment itself is written in the programming language for which it was developed, thus enabling users to reuse, customize and extend the environment. Integration of code in different languages is a major issue for language-centered environments. Lack of process and data integration is also a problem. The strengths of these environments include a good level of presentation and control integration. Interlisp, Smalltalk, Rational, and KEE are examples of language-centered environments.

Integrated
These environments achieve presentation integration by providing uniform, consistent, and coherent tool and workbench interfaces. Data integration is achieved through the repository concept: they have a specialized database managing all information produced and accessed in the environment. Examples of integrated environments are IBM AD/Cycle and DEC Cohesion.

Fourth-generation
Fourth-generation environments were the first integrated environments. They are sets of tools and workbenches supporting the development of a specific class of program: electronic data processing and business-oriented applications. In general, they include programming tools, simple configuration management tools, document handling facilities and, sometimes, a code generator to produce code in lower-level languages. Informix 4GL and Focus fall into this category.

Process-centered
Environments in this category focus on process integration, with other integration dimensions as starting points. A process-centered environment operates by interpreting a process model created by specialized tools. They usually consist of tools handling two functions:
• Process-model execution
• Process-model production

Examples are East, Enterprise II, Process Wise, Process Weaver, and Arcadia.

All aspects of the software development life cycle can be supported by software tools, and so the use of tools from across the spectrum can, arguably, be described as CASE; from project management software through tools for business and functional analysis, system design, code storage, compilers, translation tools, test software, and so on. However, tools that are concerned with analysis and design, and with using design information to create parts (or all) of the software product, are most frequently thought of as CASE tools. CASE applied, for instance, to a database software product, might normally involve:

• Modeling business / real-world processes and data flow
• Development of data models in the form of entity-relationship diagrams
• Development of process and function descriptions

Risks and associated controls
Common CASE risks and associated controls include:

• Inadequate standardization: Linking CASE tools from different vendors (design tool from Company X, programming tool from Company Y) may be difficult if the products do not use standardized code structures and data classifications. File formats can be converted, but usually not economically. Controls include using tools from the same vendor, or using tools based on standard protocols and insisting on demonstrated compatibility. Additionally, if organizations obtain tools for only a portion of the development process, they should consider acquiring them from a vendor that has a full line of products to ensure future compatibility if they add more tools.
• Unrealistic expectations: Organizations often implement CASE technologies to reduce development costs. Implementing CASE strategies usually involves high start-up costs. Generally, management must be willing to accept a long-term payback period. Controls include requiring senior managers to define their purpose and strategies for implementing CASE technologies.
• Slow implementation: Implementing CASE technologies can involve a significant change from traditional development environments. Typically, organizations should not use CASE tools the first time on critical projects or projects with short deadlines because of the lengthy training process. Additionally, organizations should consider using the tools on smaller, less complex projects and gradually implementing the tools to allow more training time.
• Weak repository controls: Failure to adequately control access to CASE repositories may result in security breaches or damage to the work documents, system designs, or code modules stored in the repository. Controls include protecting the repositories with appropriate access, version, and backup controls.

Information technology audit
An information technology audit, or information systems audit, is an examination of the controls within an Information technology (IT) infrastructure. The evaluation of obtained evidence determines if the information systems are safeguarding assets, maintaining data integrity, and operating effectively to achieve the organization's goals or objectives. These reviews may be performed in conjunction with a financial statement audit, internal audit, or other form of attestation engagement. IT audits are also known as automated data processing (ADP) audits and computer audits. They were formerly called electronic data processing (EDP) audits.

Types of IT audits
Various authorities have created differing taxonomies to distinguish the various types of IT audits. Goodman & Lawless state that there are three specific systematic approaches to carry out an IT audit [1]:
• Technological innovation process audit: This audit constructs a risk profile for existing and new projects. The audit will assess the length and depth of the company's experience in its chosen technologies, as well as its presence in relevant markets, the organization of each project, and the structure of the portion of the industry that deals with this project or product, organization and industry structure.
• Innovative comparison audit: This audit is an analysis of the innovative abilities of the company being audited, in comparison to its competitors. This requires examination of the company's research and development facilities, as well as its track record in actually producing new products.
• Technological position audit: This audit reviews the technologies that the business currently has and that it needs to add. Technologies are characterized as being either "base", "key", "pacing", or "emerging".
Others describe the spectrum of IT audits with five categories of audits:
• Systems and Applications: An audit to verify that systems and applications are appropriate, are efficient, and are adequately controlled to ensure valid, reliable, timely, and secure input, processing, and output at all levels of a system's activity.
• Information Processing Facilities: An audit to verify that the processing facility is controlled to ensure timely, accurate, and efficient processing of applications under normal and potentially disruptive conditions.
• Systems Development: An audit to verify that the systems under development meet the objectives of the organization, and to ensure that the systems are developed in accordance with generally accepted standards for systems development.
• Management of IT and Enterprise Architecture: An audit to verify that IT management has developed an organizational structure and procedures to ensure a controlled and efficient environment for information processing.
• Client/Server, Telecommunications, Intranets, and Extranets: An audit to verify that controls are in place on the client (the computer receiving services), the server, and on the network connecting the clients and servers.

And some lump all IT audits as being one of only two type: "general control review" audits or "application control review" audits. A number of IT Audit professionals from the Information Assurance realm consider there to be three fundamental types of controls regardless of the type of audit to be performed, especially in the IT realm. Many frameworks and standards try to break controls into different disciplines or arenas, terming them “Security Controls“, ”Access Controls“, “IA Controls” in an effort to define the types of controls involved. At a more fundamental level, these controls can be shown to consist of three types of fundamental controls: Protective/Preventative Controls, Detective Controls and Reactive/Corrective Controls.

IT Audit Process
The following are basic steps in performing the Information Technology Audit Process: 1. 2. 3. 4. 5. Planning Studying and Evaluating Controls Testing and Evaluating Controls Reporting Follow-up

Auditing information security is a vital part of any IT audit and is often understood to be the primary purpose of an IT Audit. The broad scope of auditing information security includes such topics as data centers (the physical 75

security of data centers and the logical security of databases, servers and network infrastructure components), networks and application security. Like most technical realms, these topics are always evolving; IT auditors must constantly continue to expand their knowledge and understanding of the systems and environment& pursuit in system company. A number of training and certification organizations have evolved. Currently, the major certifying bodies in the field are the Institute of Internal Auditors (IIA), the SANS Institute (specifically, the audit specific branch of SANS and GIAC) and ISACA. While CPAs and other traditional auditors can be engaged for IT Audits, organizations are well advised to require that individuals with some type of IT specific audit certification are employed when validating the controls surrounding IT systems.

History of IT Auditing
The concept of IT auditing was formed in the mid-1960s. Since that time, IT auditing has gone through numerous changes, largely due to advances in technology and the incorporation of technology into business.

Law regarding IT auditing
Several information technology audit related laws and regulations have been introduced in the United States since 1977. These include the Gramm-Leach-Bliley Act, the Sarbanes-Oxley Act, the Health Insurance Portability and Accountability Act, Part 11, the London Stock Exchange Combined Code, King II, and the Foreign Corrupt Practices Act.

Audit Personnel
The CISM and CAP credentials are the two newest security auditing credentials, offered by the ISACA and ISC2, respectively. Strictly speaking, only the CISA or GSNA title would sufficiently demonstrate competences regarding both information technology and audit aspects with the CISA being more audit focused and the GSNA being more information technology focused. Outside of the US, various credentials exist. For example, the Netherlands has the RE credential (as granted by the NOREA(Dutch site) IT-auditors' association), which among others requires a post-graduate IT-audit education from an accredited university, subscription to a Code of Ethics, and adherence to strict continuous education requirements.


Enterprise Systems
Enterprise systems (ES) are large-scale, integrated application-software packages that use the computational, data storage, and data transmission power of modern information technology to support business processes, information flows, reporting, and data analytics within and between complex organizations. In short, ES are packaged enterprise application software (PEAS) systems. Some people[1] have equated the terms "enterprise system" and "Enterprise resource planning (ERP) system," but the term "ERP" now has a reasonably clear meaningso it is convenient to use the term "enterprise system" to refer to the larger set of all large organization-wide packaged applications with a process orientation. Enterprise systems are built on, though do not include, software platforms such as SAP’s NetWeaver and Oracle's Fusion and, usually, a relational database. Although data warehousing or business intelligence systems are enterprise-wide packaged application software often sold by ES vendors, since they do not directly support execution of business processes, it is often convenient to exclude them from the definition of ES.

Enterprise Resource Planning (ERP)
Enterprise Resource Planning or ERP is actually a process or approach which attempts to consolidate all of a company's departments and functions into a single computer system that services each department's specific needs. It is, in a sense, a convergence of people, hardware and software into an efficient production, service and delivery system that creates profit for the company. ERP is defines as an integrated software package which integrate all the department of an organization. Several department of an organization are marketing, sales, finance, production etc. Since an ERP package integrates these entire departments, thus the performance of an organization will be improved. It is used to manage the important part of business including product planning, purchasing, maintaining inventories, customer service etc. Benefits of an ERP System or reasons for growth of ERP
           

Effective utilization of resources Improvement in business performance Reduction of inventory due to JIT approach Integration of information i.e. integration of all the department of an organization. Sharing of common data and information Global Adaptation Improvement in the quality of the product at the same price ERP targeted all types of business organization whether large business or small business organization Lowers the total cost in supply chain management through JIT approach. Eliminate limitation in legacy system i.e. traditional system Order fulfillment improvement Improvement in customer service.

Evaluation of ERP Inventory control or Re-order point (1960): In 1960’s most of the ERP system was concentrate on the inventory control ability also called re-order point system. Historical data were used to forecast future inventory demand. When an item falls below the predetermined level additional inventory is ordered. Material requirement planning (1970): In 1970’s this system was introduced and it focused on demand based approach for planning and manufacturing of product and ordering inventory.


Manufacturing resource planning (1980): In 1980’s this approach was introduced. It was used for adding tool for sales promotion, customer satisfaction, customer order processing, production plan and focus on quality and reducing overhead cost and detailed reporting. Manufacturing resource planning-II with manufacturing executive system (MES) (1990): In 1990 this system was introduced. It provides ability to adapt production schedules to meet customer needs. Main focused was on ability and adopt new products and services to meet customer needs. ERP in late 1990 Hidden Cost of ERP System:
     

Training expenses Customization: Core of ERP system is actual customization of ERP system itself. Integration and testing Replacing the staff Implementation team can never stop Wait for ROI

Integrated System Approach ERP package integrate the different department of an organization with the help of integrated system approach. Integrated system approach requires successful implementation of re-engineering process for better result of an ERP system. Business re-engineering revolves around the IT and continues change. It is a fundamental rethinking of business process to improve the quality and output of the product or service. Designing Phase: The fundamental decision in the designing phase is whether re-engineering the business process or customizes the ERP package. When the business process is re-engineered then team select commercial ERP package from the shelf and install it. The process is re-engineered in such a manner that it will:
 

Reduce the cycle time There is no unnecessary to and fro of information

In customizing team select a commercial ERP package and customize it according to their unique requirements. However poor customization is one of the reasons of the failure of ERP system while re-engineering of the business process will give the full benefits of the ERP system. Customization has to take the following parameters:
  

End user requirement Vision of the top management Technical requirements of the products

The higher the degree of customization lower will be the benefit of ERP system because packaged software purchased and installed mainly to refine existing business process and to improve overall performance of the organization. Or we can say that higher the customization will not give the full benefits of ERP system. Implementation phase and steps of implementation 78

Implementation includes addressing and configuration as it involves the migration the data from old system to new system, building interface, order processing, implementing reports and testing of package. Many companies take the help of expert from software supplier to assist implementation. The unwritten rule of implementation is synchronizing the existing company package with ERP package rather than changing the source code and customizing the ERP system to suit the company. Features that are to be considered for configuring ERP system: Data ownership: It includes that, which will be responsible for data integrity. Whether it is centralized responsibility or local responsibility. Distribution of procedures: It includes that which is to be centralized. Data management: Will ERP support centralized data management or local data management. Steps in implementation of an ERP package
   

   

Establishing securities Migrating the data from old system to new system and ensure that data to be migrated is accurate and authentic. Building interfaces to other system like office system. User training: Training of the user can be started at the tine of test run and user of different department are to be trained in there respective areas. It includes: logging in and out, getting to know the system, trying sample transaction in the entire department. Parallel run: Under this business transaction are carried on both new system as well as old system and the implementation team take care of any problem or errors which comes to the light of parallel run. User documentation: It is different from general documentation. It includes how to carry out transaction. Post implementation: It generally involves queries from user and minor changes can be possible in the formats. System monitoring and fine tuning: IT peoples monitors system closely to see performance aspects so that end user can get the full benefits of ERP system. ERP & Related Technology BPR: BPR stand for Business Process Re-engineering. It can be define as a management approach aiming at improvement by means of elevating efficiency and effectiveness of process that exits within and across the organization. Its main focus is on better business processes. It is a radical transition that a company must make to keep pace with today’s ever changing global market and to achieve dramatic improvement in critic contemporary measures of performance such as quality, cost, service etc. One of the main tools for making this change is IT. IT and BPR goes together and merger of these two concepts is known as business engineering. ERP system helps in integrating various business processes with the help of modern development in IT and with a good ERP system an organization will be capability of greater achievement in improvement in cost, quality, and service. 
The main requirement of BPR decentralize decision making to decision maker to be responsive to customer needs and also use of IT, facilitating newly re-engineering process. Principles of BPR:

It assumes that current process is irrelevant and does not work. 79

  

To break away from outdated rules. Linking of parallel activities. Capture information once and at source.

Elements of BPR:
     

Business Process Integration of business process Technology to redesign business process Cross functional coordination Timing improvement process continuously Complement market driven strategies designed to provide a competitive edge

OLAP: OLAP stands for Online Analytical Processing. According to business intelligence ltd. OLAP can be define in five words i.e. fast analysis of shared multidimensional information. Fast: fast mans that system is targeted to deliver most fast response to user within a few seconds. Analysis: IT means that system can cope with any business logic and statistical analysis that is relevant for application and user. Shared: It means that system implements all security requirements for confidentiality i.e. system is prevent from unauthorized person and competitors. Multidimensional: It means that system must provide multidimensional conceptual view of data. Information: It means that data is refined i.e. data is accurate, timely and relevant to user. OLAP can be used in variety of business area including sales, marketing, financial reporting, profitability analysis budgeting and planning and many others. Data Warehouse: Since operational data can’t be kept in the database of ERP system because as time passes volume of data will increase and this will affect the performance of ERP system, thus need arise to save this data. The primary concept of data warehouse is that data stored for business analysis can be accessed most effectively by separating it from the data in operational system. Thus Data warehouse provides the analytical tools. It combines data from sales, marketing, finance, and other departments. In other words it can be define as copy of business transaction data specially structured for query and analysis. If operational data is kept in database then it will create a lot of problem by affecting the performance of ERP system. So it is better to archive operational data once its use is over. ‘Use is over’ does not mean that archived data is useless rather it is one of the most valuable resources of organization. However once the operational use of data is over it should be removed from operational database. 
Once the data is imported into Data warehouse it becomes non-volatile i.e. no modification can be made afterward once data has been imported in data warehouse. Data warehouse is a blending of technologies including multidimensional and relational databases because it is integrated data from different sources and is transformed into a single data format which make it possible to analyze data in a consistent way. A well implemented data warehouse is key for understanding business decisions so care must be taken in entering data into data warehouse otherwise invalid data will produce the wrong results. Advantages of Data warehouse:
  

Increased quality and flexibility of enterprise analysis as multidimensional database is used. More cost effective decision making. Enhanced customer service by maintaining better customer relationship. 80

Better enterprise intelligence.

Data Mining: Data mining can be defines as extraction of hidden predictive information from large databases and is a new powerful technology with great potential to help company’s focus on most important information in data warehouse. It uses advance algorithms, multiprocessor computations, massive database etc. Today, for a business organization quality decision making is very important for its success and decision making is depend upon availability of information. Data mining is the process of identifying valid, novel, potentially useful and ultimately comprehensible information from database that is used to make crucial business decision. Advantages of data warehouse:
  

Automated prediction of tends and behavior: It automate process of finding predictive information in large databases. It has a much lower cost than hiring highly trained professional statistician. It can be incorporated with DSS (Decision Support System) which helps the manager to take wise and quality business decision.

Definition of ERP
An Enterprise Resource planning system is a packaged business software system that allows a company to: • Automate and integrate the majority of its business processes • Share common data and practices across the entire enterprise • Produce and access information in a real-time environment
ERP definition: “Enterprise Resource Planning software is complete integrated business management software, which captures data in chronological order, and is used to link businesses processes automatically, and give real time information, to authorized user”. Enterprise Resource Planning System is a multi-user, multi-location, and multi-company, software solution.”

Basic ERP Features COMPARING midmarket ERP packages is not exactly an apples-to-apples type of exercise. Each vendor wraps its midmarket offering with different functionality, tailored to the needs of the kinds of companies the solution is intended for and based on the vendor's particular areas of expertise. However, almost every midmarket ERP suite shares several common modules: BI, CRM, financial management, HCM, manufacturing operations and SCM. The differences among solutions tend to be quite granular within these modules. Also, even if different packages offer the same feature - say, sales-order management - it might not be bundled in the same module; some vendors include sales-order management in their CRM suites while others package it in their SCM suites. Key to an ERP package is tight integration between modules, so that all of the core business modules are related. For instance, manufacturing operations are integrated with customer service, logistics and delivery. Business Intelligence One of the newer components of most modern midmarket ERP packages, BI shines a bright light into the heart of a company's performance. In general, an ERP suite's analytics or BI tools allow users to share and analyze the data that the ERP applications collect from across the enterprise from a unified repository. The end result is more 81

informed decision making by everyone from executives to line managers to human-resources professionals to accountants. A variety of automated reporting and analysis tools can help streamline operations, as well as improve an organization's business performance. With greater control and visibility of data across the enterprise, business leaders can better align the company's operations with its overarching strategic goals. CRM (Customer Relationship Management) CRM has long been a core component of any ERP offering, giving manufacturers a way to improve customer service by pulling together tools to fulfill customers' orders, respond to customers' service needs, and often, create marketing campaigns to reach customers. Most vendors include sales tools to provide customers with sales quotes, process their orders and offer flexible pricing on their products. Another important CRM component is service management, which may arm customerservice agents with scripts for talking to customers, as well as allow them to authorize product returns and search a knowledge base of support information. The third main component is usually marketing, which may include tools to manage campaigns, create sales literature and develop a library of marketing collateral. Additionally, CRM often has tools for account management, SFA, and opportunity or lead management, as well as self-service tools for customers and an e-commerce storefront builder. Financial Management Of all the ERP modules, the financials applications tend to be the most frequently utilized. Across the board, these include general ledger, accounts receivable and accounts payable, billing, and fixed asset management. Because many mid market companies deploy ERP to support efforts at breaking into global markets, it is imperative that their ERP packages support multiple currencies and languages. 
The financial-management applications may also include tools for creating and adhering to budgets, cash-flow management, expense management, risk management and tax management. HCM (Human Capital Management) For the most part, the HCM module includes tools for human-resources management, performance management, payroll, and time and labor tracking. Some vendors also provide functionality for administering benefits, managing compensation, dealing with salary taxes, recruiting new employees and planning workforce needs. Some also include self-service tools for managers and employees. Even though HCM is generally considered core ERP functionality, some vendors offer it as an add-on module. Manufacturing Operations The manufacturing module is where much product differentiation happens, including industry-specific functionality. In general, these applications are intended to make manufacturing operations more efficient and simple. Most vendors support different modes of manufacturing, include configurable product capabilities, perform different types of job costing and offer a BOM (bill of materials) tool. Applications often include PDM (Product Data Management), CRP (Capacity Requirements Planning), MRP (Materials Requirements Planning), forecasting, MPS (Master Production Scheduling), work-order management and shop-floor control. SCM (Supply Chain Management) Of all the ERP modules, SCM has the greatest variability between vendors: It is vast and varied, yet often adapted to the needs of specific industries. In general, SCM improves the flow of materials through an organization's supply chain by "managing planning, scheduling, procurement, and fulfillment for optimum service levels and maximum profitability," according to Lawson Software. Some vendors segment their SCM into smaller modules. Oracle's JD 82

Edwards, for instance, breaks it down into Supply Chain Planning, Supply Chain Execution (Logistics) and Supply Management (Procurement). SCM features tend to include also production scheduling, demand management, distribution management, inventory management, warehouse management, procurement, sourcing and order management. ERP Selection Criteria An ERP system is the information backbone of an organization and reaches into all areas of the business and value-chain. Thus, long-term business strategy of the organization will form the basis of the selection criteria of an ERP system. The selection of the most appropriate solution is a semi-structured decision problem because only a part of it can be handled by a definite or accepted procedure such as standard investment calculations and on the other hand the decision maker needs to judge and evaluate all relevant business impact aspects. There is no agreed-upon and formal procedure for this important task (Laudon and Laudon, 1998; Hecht, 1997). The modules that an ERP offers, are the most important selection reasons; varying according to the needs of the organization. In this paper, it is assumed that the decision-maker has gone through the module selection process, has found very similar applications on modular design, and thus eliminated modules according to preference. And, there remain the following criteria, which are listed in order of priority; • Customization: Since different organizations need different software, they need to adapt the available software in the market for their own use. But, customizations shouldn’t cause difficulties in updating to future software releases. • Implement ability: Different ERPs have different requirements, thus it is important to choose animplementable one. If the organization ventures infrastructural change, the feasibility problem this change may cause shouldn’t be disregarded. 
• Maintenance: The software should support multi-company, multi-division and multi-currency environments. There shouldn’t be any restrictions to this type of environment so that when ever an add-on procedure or a patch is available, it can be updated immediately. • Real Time Changes: The modules should work in real time with online and batch-processing capabilities, so that no errors would occur because of the system being not up-to-date and information available to a department wouldn’t be different than the other departments. • Flexibility: Flexibility denotes the capability of the system to support the needs of the business over its lifetime.1 As the business requirements of the organization change, it should be able toad extra modules. The ERP should be flexible in order to suit the organizational culture and business strategy. • User Friendliness: Most of the time, the end-users of an ERP system are not computer experts ,thus their opinions about the software are highly valuable. The product shouldn’t be too complex or sophisticated for an average user since the efficiency of end users directly affects the efficiency or the organization. • Cost: Cost is an important issue since the implementing organization may be a small or medium sized enterprise (SME) that may not act as comfortable as a large, multi-national organization.ERPs are generally complex systems involving high cost, so the software should be among the edges of the foreseen budget. • Systems Requirements: Technology determines the longetivity of the product.2 It is important to choose an ERP that is independent of hardware, operating system and database systems. Advantages of ERP Industry wise advantages • Manufacturing Sector--------------------Speeding up the whole process. 83

• • •

Distribution and retail Stores-----------Accessing the status of the goods Transport Sector---------------------------Transmit commodities through online transactions. Project Service industry-----------------Fastens the compilation of reports.

The advantage and disadvantage of ERP is best understood by studying them under different categories. Hence the next paragraph presents information on corporates as a whole because the advantage of ERP systems in a company is different when compared industry wise. Advantages in a corporate entity The accounts department personnel can act independently. They don't have to be behind the technical persons every time to record the financial transactions. Ensures quicker processing of information and reduces the burden of paperwork. Serving the customers efficiently by way of prompt response and follow up. Disposing queries immediately and facilitating the payments from customers with ease and well ahead of the stipulated deadline. It helps in having a say over your competitor and adapting to the whims and fancies of the market and business fluctuations. The swift movement of goods to rural areas and in lesser known places has now become a reality with the use of ERP. The database not only becomes user friendly but also helps to do away with unwanted ambiguity. ERP is suitable for global operations as it encompasses all the domestic jargons, currency conversions, diverse accounting standards, and multilingual facilities .In short it is the perfect commercial and scientific epitome of the verse "Think Local. Act Global". ERP helps to control and data and facilitates the necessary contacts to acquire the same. Disadvantage Inspite of rendering marvelous services ERP is not free from its own limitations. ERP calls for a voluminous and exorbitant investment of time and money. The amount of cash required would even be looming on the management given the fact that such an outlay is not a guarantee to the said benefits but subject to proper implementation, training and use. In the ever expanding era of information theft ERP is no exception. It is alarming to note the time taken to implement the system in the organization. 
These means large amounts of workers have to shun their regular labor and undertake training. This not only disturbs the regular functioning of the organization but also runs the organization in the huge risk of losing potential business in that particular period. There are great benefits rendered by the system. On the other hand when one thinks of this information reach in the hands of undeserving persons who could do more than misuse ,it is evident that there is no way of ensuring secrecy of information and larger chances of risk will be generated as long as they are in the public domain.

Supply chain management

Supply chain management (SCM) is the management of a network of interconnected businesses involved in the ultimate provision of product and service packages required by end customers (Harland, 1996).[1] Supply chain management spans all movement and storage of raw materials, work-in-process inventory, and finished goods from point of origin to point of consumption (supply chain).

More common and accepted definitions of supply chain management are:

Supply chain management is the systemic, strategic coordination of the traditional business functions and the tactics across these business functions within a particular company and across businesses within the supply chain, for the purposes of improving the long-term performance of the individual companies and the supply chain as a whole (Mentzer et al., 2001). A customer focused definition is given by Hines (2004:p76) "Supply chain strategies require a total systems view of the linkages in the chain that work together efficiently to create customer satisfaction at the end point of delivery to the consumer. As a consequence costs must be lowered throughout the chain by driving out unnecessary costs and focusing attention on adding value. Throughput efficiency must be increased, bottlenecks removed and performance measurement must focus on total systems efficiency and equitable reward distribution to those in the supply chain adding value. The supply chain system must be responsive to customer requirements." Global supply chain forum - supply chain management is the integration of key business processes across the supply chain for the purpose of creating value for customers and stakeholders (Lambert, 2008). According to the Council of Supply Chain Management Professionals (CSCMP), supply chain management encompasses the planning and management of all activities involved in sourcing, procurement, conversion, and logistics management. It also includes the crucial components of coordination and collaboration with channel partners, which can be suppliers, intermediaries, third-party service providers, and customers. In essence, supply chain management integrates supply and demand management within and across companies. More recently, the loosely coupled, self-organizing network of businesses that cooperate to provide product and service offerings has been called the Extended Enterprise.

A supply chain, as opposed to supply chain management, is a set of organizations directly linked by one or more of the upstream and downstream flows of products, services, finances, and information from a source to a customer. Managing a supply chain is 'supply chain management' (Mentzer et al., 2001). Supply chain management software includes tools or modules used to execute supply chain transactions, manage supplier relationships and control associated business processes. Supply chain event management (abbreviated as SCEM) is a consideration of all possible events and factors that can disrupt a supply chain. With SCEM possible scenarios can be created and solutions devised.

Problems addressed by supply chain management

Supply chain management must address the following problems:


• •

• • •

Distribution Network Configuration: number, location and network missions of suppliers, production facilities, distribution centers, warehouses, cross-docks and customers. Distribution Strategy: questions of operating control (centralized, decentralized or shared); delivery scheme, e.g., direct shipment, pool point shipping, cross docking, DSD (direct store delivery), closed loop shipping; mode of transportation, e.g., motor carrier, including truckload, LTL, parcel; railroad; intermodal transport, including TOFC (trailer on flatcar) and COFC (container on flatcar); ocean freight; airfreight; replenishment strategy (e.g., pull, push or hybrid); and transportation control (e.g., owner-operated, private carrier, common carrier, contract carrier, or 3PL). Trade-Offs in Logistical Activities: The above activities must be well coordinated in order to achieve the lowest total logistics cost. Trade-offs may increase the total cost if only one of the activities is optimized. For example, full truckload (FTL) rates are more economical on a cost per pallet basis than less than truckload (LTL) shipments. If, however, a full truckload of a product is ordered to reduce transportation costs, there will be an increase in inventory holding costs which may increase total logistics costs. It is therefore imperative to take a systems approach when planning logistical activities. This trade-offs are key to developing the most efficient and effective Logistics and SCM strategy. Information: Integration of processes through the supply chain to share valuable information, including demand signals, forecasts, inventory, transportation, potential collaboration, etc. Inventory Management: Quantity and location of inventory, including raw materials, work-in-progress (WIP) and finished goods. Cash-Flow: Arranging the payment terms and methodologies for exchanging funds across entities within the supply chain.

Supply chain execution means managing and coordinating the movement of materials, information and funds across the supply chain. The flow is bi-directional.

Supply chain management is a cross-functional approach that includes managing the movement of raw materials into an organization, certain aspects of the internal processing of materials into finished goods, and the movement of finished goods out of the organization and toward the end-consumer. As organizations strive to focus on core competencies and become more flexible, they reduce their ownership of raw materials sources and distribution channels. These functions are increasingly being outsourced to other entities that can perform the activities better or more cost effectively. The effect is to increase the number of organizations involved in satisfying customer demand, while reducing management control of daily logistics operations. Less control and more supply chain partners led to the creation of supply chain management concepts. The purpose of supply chain management is to improve trust and collaboration among supply chain partners, thus improving inventory visibility and the velocity of inventory movement. Several models have been proposed for understanding the activities required to manage material movements across organizational and functional boundaries. SCOR is a supply chain management model promoted by the Supply Chain Council. Another model is the SCM Model proposed by the Global Supply Chain Forum (GSCF). Supply chain activities can be grouped into strategic, tactical, and operational levels. The CSCMP has adopted the American Productivity & Quality Center (APQC) Process Classification Framework, a high-level, industry-neutral enterprise process model that allows organizations to see their business processes from a cross-industry viewpoint.

Strategic level

• Strategic network optimization, including the number, location, and size of warehousing, distribution centers, and facilities.
• Strategic partnerships with suppliers, distributors, and customers, creating communication channels for critical information and operational improvements such as cross docking, direct shipping, and third-party logistics.
• Product life cycle management, so that new and existing products can be optimally integrated into the supply chain and capacity management activities.
• Information technology chain operations.
• Where-to-make and make-buy decisions.
• Aligning overall organizational strategy with supply strategy.
Strategic decisions are long term and require resource commitment.
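Strategic network optimization decisions such as the number and location of facilities are often introduced in operations textbooks via the center-of-gravity heuristic, which locates a single distribution center at the demand-weighted centroid of its customers. This method, and the coordinates and demand weights below, are illustrative assumptions — the text above does not prescribe any particular technique:

```python
# Center-of-gravity sketch for locating a single distribution center.
# Coordinates and demand weights are invented; the heuristic is a standard
# textbook approach, not one prescribed by the material above.

def center_of_gravity(points):
    """points: list of (x, y, demand_weight) tuples. Returns the weighted centroid."""
    total_weight = sum(w for _, _, w in points)
    x = sum(px * w for px, _, w in points) / total_weight
    y = sum(py * w for _, py, w in points) / total_weight
    return x, y

# Three customer locations on a map grid, with annual demand as the weight.
customers = [(0, 0, 100), (10, 0, 300), (0, 10, 100)]
print(center_of_gravity(customers))
```

The heavily weighted customer pulls the suggested site toward itself; in practice this result is only a starting point, refined against real constraints such as land cost, labor availability and transport links.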

Tactical level
• Sourcing contracts and other purchasing decisions.
• Production decisions, including contracting, scheduling, and planning process definition.
• Inventory decisions, including quantity, location, and quality of inventory.
• Transportation strategy, including frequency, routes, and contracting.
• Benchmarking of all operations against competitors and implementation of best practices throughout the enterprise.
• Milestone payments.
• Focus on customer demand and habits.

Operational level
• Daily production and distribution planning, including all nodes in the supply chain.
• Production scheduling for each manufacturing facility in the supply chain (minute by minute).
• Demand planning and forecasting, coordinating the demand forecast of all customers and sharing the forecast with all suppliers.
• Sourcing planning, including current inventory and forecast demand, in collaboration with all suppliers.
• Inbound operations, including transportation from suppliers and receiving inventory.
• Production operations, including the consumption of materials and flow of finished goods.
• Outbound operations, including all fulfillment activities, warehousing and transportation to customers.
• Order promising, accounting for all constraints in the supply chain, including all suppliers, manufacturing facilities, distribution centers, and other customers.
• Tracking transit damage cases from the production level to the customer level and arranging settlement, recovering the company's losses through the insurance company.
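Demand planning and forecasting at the operational level is often supported by simple statistical methods before the forecast is shared with suppliers. As a minimal sketch (the weekly demand history below is invented for illustration), a moving-average forecast might look like:

```python
# Minimal sketch of operational demand forecasting via a simple moving average.
# The demand history is invented for illustration.

def moving_average_forecast(history, window=3):
    """Forecast next-period demand as the mean of the last `window` periods."""
    if len(history) < window:
        raise ValueError("not enough history for the chosen window")
    return sum(history[-window:]) / window

weekly_demand = [120, 135, 128, 140, 150, 145]  # units per week (hypothetical)
forecast = moving_average_forecast(weekly_demand, window=3)
print(f"Next week's forecast: {forecast:.1f} units")
```

Real demand-planning systems layer seasonality, promotions and customer-level adjustments on top of such baselines, but the core idea — a shared numerical forecast derived from history — is the same.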

Importance of supply chain management
Organizations increasingly find that they must rely on effective supply chains, or networks, to compete in the global market and networked economy. In Peter Drucker's (1998) new management paradigms, this concept of business relationships extends beyond traditional enterprise boundaries and seeks to organize entire business processes throughout a value chain of multiple companies. During the past decades, globalization, outsourcing and information technology have enabled many organizations, such as Dell and Hewlett Packard, to successfully operate solid collaborative supply networks in which each specialized business partner focuses on only a few key strategic activities (Scott, 1993). This inter-organizational supply network can be acknowledged as a new form of organization. However, with the complicated interactions among the players, the network structure fits neither "market" nor "hierarchy" categories (Powell, 1990). It is not clear what kind of performance impacts different supply network structures could have on firms, and little is known about the coordination conditions and trade-offs that may exist among the players. From a systems perspective, a complex network structure can be decomposed into individual component firms (Zhang and Dilts, 2004).

Traditionally, companies in a supply network concentrate on the inputs and outputs of the processes, with little concern for the internal management working of other individual players. Therefore, the choice of an internal management control structure is known to impact local firm performance (Mintzberg, 1979). In the 21st century, changes in the business environment have contributed to the development of supply chain networks. First, as an outcome of globalization and the proliferation of multinational companies, joint ventures, strategic alliances and business partnerships, significant success factors were identified, complementing the earlier "Just-In-Time", "Lean Manufacturing" and "Agile Manufacturing" practices. Second, technological changes, particularly the dramatic fall in information communication costs, which are a significant component of transaction costs, have led to changes in coordination among the members of the supply chain network (Coase, 1998). Many researchers have recognized these kinds of supply network structures as a new organization form, using terms such as "Keiretsu", "Extended Enterprise", "Virtual Corporation", "Global Production Network", and "Next Generation Manufacturing System". In general, such a structure can be defined as "a group of semi-independent organizations, each with their capabilities, which collaborate in ever-changing constellations to serve one or more markets in order to achieve some business goal specific to that collaboration" (Akkermans, 2001). The security management system for supply chains is described in ISO/IEC 28000 and ISO/IEC 28001 and related standards published jointly by ISO and IEC.

Historical developments in supply chain management
Six major movements can be observed in the evolution of supply chain management studies: Creation, Integration, and Globalization (Movahedi et al., 2009), Specialization Phases One and Two, and SCM 2.0.

1. Creation era
The term supply chain management was first coined by a U.S. industry consultant in the early 1980s. However, the concept of a supply chain in management was of great importance long before, in the early 20th century, especially with the creation of the assembly line. The characteristics of this era of supply chain management include the need for large-scale changes, re-engineering, downsizing driven by cost-reduction programs, and widespread attention to Japanese management practices.

2. Integration era
This era of supply chain management studies was highlighted by the development of Electronic Data Interchange (EDI) systems in the 1960s and developed through the 1990s with the introduction of Enterprise Resource Planning (ERP) systems. It has continued to develop into the 21st century with the expansion of internet-based collaborative systems. This era of supply chain evolution is characterized by both increasing value-adding and cost reduction through integration. A supply chain can be classified as a Stage 1, 2 or 3 network. In a Stage 1 supply chain, systems such as make, storage, distribution and material control are not linked and are independent of each other. In a Stage 2 supply chain, these are integrated under one plan, enabled by ERP. A Stage 3 supply chain is one in which vertical integration with suppliers in the upstream direction and customers in the downstream direction is achieved. Tesco is an example of this kind of supply chain.

3. Globalization era
The third movement of supply chain management development, the globalization era, can be characterized by the attention given to global systems of supplier relationships and the expansion of supply chains over national boundaries and into other continents. Although the use of global sources in organizations' supply chains can be traced back several decades (e.g., in the oil industry), it was not until the late 1980s that a considerable number of organizations started to integrate global sources into their core business. This era is characterized by the globalization of supply chain management in organizations with the goal of increasing competitive advantage, adding value, and reducing costs through global sourcing.

4. Specialization era, phase one: outsourced manufacturing and distribution
In the 1990s, industries began to focus on "core competencies" and adopted a specialization model. Companies abandoned vertical integration, sold off non-core operations, and outsourced those functions to other companies. This changed management requirements by extending the supply chain well beyond company walls and distributing management across specialized supply chain partnerships. This transition also re-focused the fundamental perspectives of each respective organization. OEMs became brand owners that needed deep visibility into their supply base. They had to control the entire supply chain from above instead of from within. Contract manufacturers had to manage bills of material with different part-numbering schemes from multiple OEMs and support customer requests for work-in-process visibility and vendor-managed inventory (VMI). The specialization model creates manufacturing and distribution networks composed of multiple, individual supply chains specific to products, suppliers, and customers who work together to design, manufacture, distribute, market, sell, and service a product. The set of partners may change according to a given market, region, or channel, resulting in a proliferation of trading-partner environments, each with its own unique characteristics and demands.

5. Specialization era, phase two: supply chain management as a service
Specialization within the supply chain began in the 1980s with the inception of transportation brokerages, warehouse management, and non-asset-based carriers, and has matured beyond transportation and logistics into aspects of supply planning, collaboration, execution and performance management. At any given moment, market forces could demand changes from suppliers, logistics providers, locations and customers, and from any number of these specialized participants as components of supply chain networks. This variability has significant effects on the supply chain infrastructure, from the foundation layers of establishing and managing electronic communication between trading partners to more complex requirements such as the configuration of the processes and workflows that are essential to the management of the network itself. Supply chain specialization enables companies to improve their overall competencies in the same way that outsourced manufacturing and distribution has done; it allows them to focus on their core competencies and assemble networks of specific, best-in-class partners to contribute to the overall value chain, thereby increasing overall performance and efficiency. The ability to quickly obtain and deploy this domain-specific supply chain expertise without developing and maintaining an entirely unique and complex competency in house is the leading reason why supply chain specialization is gaining popularity. Outsourced technology hosting for supply chain solutions debuted in the late 1990s and has taken root primarily in the transportation and collaboration categories. It has progressed from the Application Service Provider (ASP) model (approximately 1998 through 2003) to the on-demand model (approximately 2003 to 2006) to the Software as a Service (SaaS) model in focus today.

6. Supply chain management 2.0 (SCM 2.0)


Building on globalization and specialization, the term SCM 2.0 has been coined to describe both the changes within the supply chain itself as well as the evolution of the processes, methods and tools that manage it in this new "era". Web 2.0 is defined as a trend in the use of the World Wide Web that is meant to increase creativity, information sharing, and collaboration among users. At its core, the common attribute that Web 2.0 brings is to help navigate the vast amount of information available on the Web in order to find what is being sought. It is the notion of a usable pathway. SCM 2.0 follows this notion into supply chain operations. It is the pathway to SCM results, a combination of the processes, methodologies, tools and delivery options to guide companies to their results quickly as the complexity and speed of the supply chain increase due to the effects of global competition, rapid price fluctuations, surging oil prices, short product life cycles, expanded specialization, near-/far- and off-shoring, and talent scarcity.

SCM 2.0 leverages proven solutions designed to rapidly deliver results with the agility to quickly manage future change for continuous flexibility, value and success. This is delivered through competency networks composed of best-of-breed supply chain domain expertise to understand which elements, both operationally and organizationally, are the critical few that deliver the results as well as through intimate understanding of how to manage these elements to achieve desired results. Finally, the solutions are delivered in a variety of options, such as no-touch via business process outsourcing, mid-touch via managed services and software as a service (SaaS), or high touch in the traditional software deployment model.

Supply chain business process integration
Successful SCM requires a change from managing individual functions to integrating activities into key supply chain processes. An example scenario: the purchasing department places orders as requirements become known. The marketing department, responding to customer demand, communicates with several distributors and retailers as it attempts to determine ways to satisfy this demand. Information shared between supply chain partners can only be fully leveraged through process integration. Supply chain business process integration involves collaborative work between buyers and suppliers, joint product development, common systems and shared information. According to Lambert and Cooper (2000), operating an integrated supply chain requires a continuous information flow. However, in many companies, management has reached the conclusion that optimizing the product flows cannot be accomplished without implementing a process approach to the business. The key supply chain processes stated by Lambert (2004) are:
• Customer relationship management
• Customer service management
• Demand management
• Order fulfillment
• Manufacturing flow management
• Supplier relationship management
• Product development and commercialization
• Returns management

Much has been written about demand management. Best-in-class companies have similar characteristics, which include the following:
a) Internal and external collaboration
b) Lead-time reduction initiatives
c) Tighter feedback from customer and market demand
d) Customer-level forecasting
One could suggest other key critical supply business processes which combine these processes stated by Lambert, such as:

a. Customer service management
b. Procurement
c. Product development and commercialization
d. Manufacturing flow management/support
e. Physical distribution
f. Outsourcing/partnerships
g. Performance measurement

a) Customer service management process
Customer Relationship Management concerns the relationship between the organization and its customers. Customer service is the source of customer information. It also provides the customer with real-time information on scheduling and product availability through interfaces with the company's production and distribution operations. Successful organizations use the following steps to build customer relationships:
• determine mutually satisfying goals for organization and customers
• establish and maintain customer rapport
• produce positive feelings in the organization and the customers

b) Procurement process
Strategic plans are drawn up with suppliers to support the manufacturing flow management process and the development of new products. In firms whose operations extend globally, sourcing should be managed on a global basis. The desired outcome is a win-win relationship where both parties benefit, and a reduction in the time required for the design cycle and product development. The purchasing function also develops rapid communication systems, such as electronic data interchange (EDI) and Internet linkage, to convey possible requirements more rapidly. Activities related to obtaining products and materials from outside suppliers involve resource planning, supply sourcing, negotiation, order placement, inbound transportation, storage, handling and quality assurance, many of which include the responsibility to coordinate with suppliers on matters of scheduling, supply continuity, hedging, and research into new sources or programs.
c) Product development and commercialization
Here, customers and suppliers must be integrated into the product development process in order to reduce time to market. As product life cycles shorten, the appropriate products must be developed and successfully launched on ever-shorter schedules to remain competitive. According to Lambert and Cooper (2000), managers of the product development and commercialization process must:
1. coordinate with customer relationship management to identify customer-articulated needs;
2. select materials and suppliers in conjunction with procurement; and
3. develop production technology in manufacturing flow to manufacture and integrate into the best supply chain flow for the product/market combination.
d) Manufacturing flow management process
The manufacturing process produces and supplies products to the distribution channels based on past forecasts. Manufacturing processes must be flexible to respond to market changes and must accommodate mass customization. Orders are processed on a just-in-time (JIT) basis in minimum lot sizes. Changes in the manufacturing flow process also lead to shorter cycle times, meaning improved responsiveness and efficiency in meeting customer demand. This process includes activities related to planning, scheduling and supporting manufacturing operations, such as work-in-process storage, handling, transportation, time-phasing of components, inventory at manufacturing sites, and maximum flexibility in coordinating geographic and final-assembly postponement of physical distribution operations.
e) Physical distribution
This concerns the movement of a finished product/service to customers. In physical distribution, the customer is the final destination of a marketing channel, and the availability of the product/service is a vital part of each channel participant's marketing effort. It is also through the physical distribution process that the time and space of customer service become an integral part of marketing; thus it links a marketing channel with its customers (e.g., links manufacturers, wholesalers, retailers).
f) Outsourcing/partnerships
This is not just outsourcing the procurement of materials and components, but also outsourcing of services that traditionally have been provided in-house. The logic of this trend is that the company will increasingly focus on those activities in the value chain where it has a distinctive advantage, and outsource everything else. This movement has been particularly evident in logistics, where the provision of transport, warehousing and inventory control is increasingly subcontracted to specialists or logistics partners. Managing and controlling this network of partners and suppliers requires a blend of both central and local involvement: strategic decisions need to be taken centrally, while the monitoring and control of supplier performance and day-to-day liaison with logistics partners are best managed at a local level.
g) Performance measurement
Experts found a strong relationship from the largest arcs of supplier and customer integration to market share and profitability. Taking advantage of supplier capabilities and emphasizing a long-term supply chain perspective in customer relationships can both be correlated with firm performance.
As logistics competency becomes a more critical factor in creating and maintaining competitive advantage, logistics measurement becomes increasingly important because the difference between profitable and unprofitable operations becomes narrower. A.T. Kearney Consultants (1985) noted that firms engaging in comprehensive performance measurement realized improvements in overall productivity. According to experts, internal measures are generally collected and analyzed by the firm, including:
1. Cost
2. Customer service
3. Productivity measures
4. Asset measurement, and
5. Quality.

External performance is examined through 1) customer perception measurement and 2) "best practice" benchmarking.
h) Warehousing management
Warehousing management plays a valuable role in reducing company costs and expenses. Effective warehousing provides proper storage and office facilities at the company level, reduces manpower costs, supports dispatch with on-time delivery, and provides adequate areas for loading and unloading, service stations, stock management systems, etc.
Components of supply chain management are as follows:
1. Standardization
2. Postponement
3. Customization

Customer relationship management
Customer relationship management (CRM) is a widely-implemented strategy for managing a company’s interactions with customers, clients and sales prospects. It involves using technology to organize, automate, and synchronize business processes—principally sales activities, but also those for marketing, customer service, and technical support. The overall goals are to find, attract, and win new clients, nurture and retain those the company already has, entice former clients back into the fold, and reduce the costs of marketing and client service. Customer relationship management describes a company-wide business strategy including customer-interface departments as well as other departments.

The three phases in which CRM support the relationship between a business and its customers are to:
• Acquire: CRM can help a business acquire new customers through contact management, selling, and fulfillment.
• Enhance: web-enabled CRM combined with customer service tools offers customers service from a team of sales and service specialists, giving customers the convenience of one-stop shopping.
• Retain: CRM software and databases enable a business to identify and reward its loyal customers and further develop its targeted marketing and relationship marketing initiatives.

Benefits of CRM
The use of a CRM system will confer several advantages to a company:
• Quality and efficiency
• Decreased costs
• Decision support
• Enterprise agility

Tools and workflows can be complex, especially for large businesses. Previously these tools were generally limited to contact management: monitoring and recording interactions and communications. Software solutions then expanded to embrace deal tracking, territories, opportunities, and the sales pipeline itself. Next came the advent of tools for other client-interface business functions, as described below. These tools have been, and still are, offered as on-premises software that companies purchase and run on their own IT infrastructure. Often, implementations are fragmented: isolated initiatives by individual departments to address their own needs. Systems that start disunited usually stay that way: siloed thinking and decision processes frequently lead to separate and incompatible systems and dysfunctional processes. Business reputation has become a growing challenge. The outcome of internal fragmentation that is observed and commented upon by customers is now visible to the rest of the world in the era of the social customer; in the past, only employees or partners were aware of it. Addressing the fragmentation requires a shift in philosophy and mindset within an organization so that everyone considers the impact to the customer of policy, decisions and actions. Human response at all levels of the organization can affect the customer experience, for good or ill. Even one unhappy customer can deliver a body blow to a business.

Sales force automation
Sales force automation (SFA) involves using software to streamline all phases of the sales process, minimizing the time that sales representatives need to spend on each phase. This allows sales representatives to pursue more clients in a shorter amount of time than would otherwise be possible. At the heart of SFA is a contact management system for tracking and recording every stage in the sales process for each prospective client, from initial contact to final disposition. Many SFA applications also include insights into opportunities, territories, sales forecasts and workflow automation, quote generation, and product knowledge. Modules for Web 2.0 e-commerce and pricing are new, emerging interests in SFA.
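The contact-management system at the heart of SFA can be sketched as a small data structure that records every stage in the sales process for each prospective client. The stage names and fields below are illustrative assumptions, not the schema of any particular SFA product:

```python
# Minimal sketch of SFA-style contact management: recording every stage
# in the sales process for each prospective client, from initial contact
# to final disposition. Stage names and fields are illustrative only.

from dataclasses import dataclass, field
from datetime import date

STAGES = ["initial contact", "qualification", "proposal", "negotiation", "closed"]

@dataclass
class Prospect:
    name: str
    history: list = field(default_factory=list)  # list of (stage, date) records

    def record_stage(self, stage: str, on: date):
        """Append a dated stage entry, rejecting stages outside the known pipeline."""
        if stage not in STAGES:
            raise ValueError(f"unknown stage: {stage}")
        self.history.append((stage, on))

    @property
    def current_stage(self):
        return self.history[-1][0] if self.history else None

p = Prospect("Acme Ltd")  # hypothetical prospective client
p.record_stage("initial contact", date(2011, 3, 1))
p.record_stage("proposal", date(2011, 3, 15))
print(p.current_stage)  # -> proposal
```

Commercial SFA applications wrap exactly this kind of per-prospect stage history with forecasting, territory and workflow features, which is what lets representatives pursue more clients in less time.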

CRM systems for marketing help the enterprise identify and target potential clients and generate leads for the sales team. A key marketing capability is tracking and measuring multichannel campaigns, including email, search, social media, telephone and direct mail. Metrics monitored include clicks, responses, leads, deals, and revenue. Alternatively, Prospect Relationship Management (PRM) solutions offer to track customer behaviour and nurture prospects from first contact to sale, often cutting out the active sales process altogether. In a web-focused marketing CRM solution, organizations create and track specific web activities that help develop the client relationship, such as free downloads, online video content, and online web presentations.
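The campaign metrics mentioned above (clicks, responses, leads, deals) are typically rolled up into stage-to-stage conversion rates so that marketers can see where a campaign funnel leaks. A minimal sketch, with invented campaign figures:

```python
# Computing simple funnel conversion rates from multichannel campaign metrics.
# The campaign figures are invented for illustration.

def conversion_rates(funnel):
    """funnel: ordered dict of stage -> count. Returns stage-to-stage conversion rates."""
    stages = list(funnel.items())
    rates = {}
    for (prev_name, prev_count), (name, count) in zip(stages, stages[1:]):
        rates[f"{prev_name} -> {name}"] = count / prev_count
    return rates

campaign = {"clicks": 10000, "responses": 800, "leads": 200, "deals": 25}
for step, rate in conversion_rates(campaign).items():
    print(f"{step}: {rate:.1%}")
```

A sharp drop at one step (say, leads to deals) tells the marketing team which part of the campaign to rework, which is the practical point of tracking these metrics in a CRM.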

Customer service and support
Recognizing that service is an important factor in attracting and retaining customers, organizations are increasingly turning to technology to help them improve their clients' experience while aiming to increase efficiency and minimize costs. Even so, a 2009 study revealed that only 39% of corporate executives believe their employees have the right tools and authority to solve client problems.

Relevant analytics capabilities are often interwoven into applications for sales, marketing, and service. These features can be complemented and augmented with links to separate, purpose-built applications for analytics and business intelligence. Sales analytics let companies monitor and understand client actions and preferences through sales forecasting and data quality. Marketing applications generally come with predictive analytics to improve segmentation and targeting, and features for measuring the effectiveness of online, offline, and search marketing campaigns. Web analytics have evolved significantly from their starting point of merely tracking mouse clicks on Web sites. By evaluating "buy signals," marketers can see which prospects are most likely to transact and also identify those who are bogged down in a sales process and need assistance. Marketing and finance personnel also use analytics to assess the value of multi-faceted programs as a whole. These types of analytics are increasing in popularity as companies demand greater visibility into the performance of call centers and other service and support channels, in order to correct problems before they affect satisfaction levels. Support-focused applications typically include dashboards similar to those for sales, plus capabilities to measure and analyze response times, service quality, agent performance, and the frequency of various issues.

Departments within enterprises — especially large enterprises — tend to function with little collaboration. More recently, the development and adoption of these tools and services have fostered greater fluidity and cooperation among sales, service, and marketing. This finds expression in the concept of collaborative systems which uses technology to build bridges between departments. For example, feedback from a technical support center can enlighten marketers about specific services and product features clients are asking for. Reps, in their turn, want to be able to pursue these opportunities without the burden of re-entering records and contact data into a separate SFA system.

Small business
For small business, basic client service can be accomplished by a contact manager system: an integrated solution that lets organizations and individuals efficiently track and record interactions, including emails, documents, jobs, faxes, scheduling, and more. These tools usually focus on accounts rather than on individual contacts. They also generally include opportunity insight for tracking sales pipelines plus added functionality for marketing and service. As with larger enterprises, small businesses are finding value in online solutions, especially for mobile and telecommuting workers.

Social media
Social media sites like Twitter, LinkedIn and Facebook are amplifying the voice of people in the marketplace and are having profound and far-reaching effects on the ways in which people buy. Customers can now research companies online and then ask for recommendations through social media channels, making their buying decision without contacting the company. People also use social media to share opinions and experiences on companies, products and services. As social media is not as widely moderated or censored as mainstream media, individuals can say anything they want about a company or brand, positive or negative. Increasingly, companies are looking to gain access to these conversations and take part in the dialogue. More than a few systems are now integrating with social networking sites. Social media promoters cite a number of business advantages, such as using online communities as a source of high-quality leads and a vehicle for crowdsourcing solutions to client-support problems. Companies can also leverage clients' stated habits and preferences to "hypertarget" their sales and marketing communications.[9] Some analysts take the view that business-to-business marketers should proceed cautiously when weaving social media into their business processes. These observers recommend careful market research to determine if and where the phenomenon can provide measurable benefits for client interactions, sales and support.[10] Some observers note that people feel their interactions are peer-to-peer between them and their contacts, and resent company involvement, sometimes responding with negatives about that company.

Non-profit and membership-based
Systems for non-profit and membership-based organizations help track constituents and their involvement in the organization. Capabilities typically include tracking the following: fund-raising, demographics, membership levels, membership directories, volunteering and communications with individuals.

Many include tools for identifying potential donors based on previous donations and participation. In light of the growth of social networking tools, there may be some overlap between social/community driven tools and nonprofit/membership tools.

For larger-scale enterprises, a complete and detailed plan is required to obtain the funding, resources, and companywide support that can make the initiative of choosing and implementing a system successful. Benefits must be defined, risks assessed, and costs quantified in three general areas:

Processes: Though these systems have many technological components, business processes lie at their core. They can be seen as a more client-centric way of doing business, enabled by technology that consolidates and intelligently distributes pertinent information about clients, sales, marketing effectiveness, responsiveness, and market trends. Therefore, a company must analyze its business workflows and processes before choosing a technology platform; some will likely need re-engineering to better serve the overall goal of winning and satisfying clients. Moreover, planners need to determine the types of client information that are most relevant, and how best to employ them.

People: For an initiative to be effective, an organization must convince its staff that the new technology and workflows will benefit employees as well as clients. Senior executives need to be strong and visible advocates who can clearly state and support the case for change. Collaboration, teamwork, and two-way communication should be encouraged across hierarchical boundaries, especially with respect to process improvement.

Technology: In evaluating technology, key factors include alignment with the company’s business process strategy and goals, the ability to deliver the right data to the right employees, and sufficient ease of adoption and use. Platform selection is best undertaken by a carefully chosen group of executives who understand the business processes to be automated as well as the software issues. Depending upon the size of the company and the breadth of data, choosing an application can take anywhere from a few weeks to a year or more.

Strategic management
Strategic management is a field that deals with the major intended and emergent initiatives taken by general managers on behalf of owners, involving utilization of resources, to enhance the performance of firms in their external environments.[1] It entails specifying the organization's mission, vision and objectives; developing policies and plans, often in terms of projects and programs, which are designed to achieve these objectives; and then allocating resources to implement the policies and plans, projects and programs. A balanced scorecard is often used to evaluate the overall performance of the business and its progress towards objectives. Recent studies and leading management theorists have advocated that strategy needs to start with stakeholders' expectations and use a modified balanced scorecard which includes all stakeholders. Strategic management is a level of managerial activity below setting goals and above tactics. Strategic management provides overall direction to the enterprise and is closely related to the field of Organization Studies. In the field of business administration it is useful to talk about "strategic alignment" between the organization and its environment, or "strategic consistency." According to Arieu (2007), "there is strategic consistency when the actions of an organization are consistent with the expectations of management, and these in turn are with the market and the context." Strategic management includes not only the management team but can also include the Board of Directors and other stakeholders of the organization; it depends on the organizational structure.

“Strategic management is an ongoing process that evaluates and controls the business and the industries in which the company is involved; assesses its competitors and sets goals and strategies to meet all existing and potential competitors; and then reassesses each strategy annually or quarterly [i.e. regularly] to determine how it has been implemented and whether it has succeeded or needs replacement by a new strategy to meet changed circumstances, new technology, new competitors, a new economic environment, or a new social, financial, or political environment.”

Strategy formation
Strategy formation is a combination of three main processes:

• Performing a situation analysis, self-evaluation and competitor analysis: both internal and external; both micro-environmental and macro-environmental.
• Concurrent with this assessment, objectives are set. These objectives should run parallel to a time-line; some are short-term and others long-term. This involves crafting vision statements (a long-term view of a possible future), mission statements (the role that the organization gives itself in society), overall corporate objectives (both financial and strategic), strategic business unit objectives (both financial and strategic), and tactical objectives.
• These objectives should, in the light of the situation analysis, suggest a strategic plan. The plan provides the details of how to achieve the objectives.

Strategy evaluation

In measuring the effectiveness of an organizational strategy, it is extremely important to conduct a SWOT analysis to identify the strengths, weaknesses, opportunities and threats (both internal and external) of the entity in business. This may require taking certain precautionary measures or even changing the entire strategy.
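As a sketch only, a SWOT can be captured as a simple data structure, and a common follow-on step pairs internal strengths with external opportunities to suggest candidate strategies (the so-called TOWS matching). All the entries below are invented for illustration:

```python
from itertools import product

# Hypothetical SWOT entries for an unnamed firm; all items are invented.
swot = {
    "strengths":     ["strong brand", "low-cost supply chain"],
    "weaknesses":    ["aging product line"],
    "opportunities": ["emerging overseas market"],
    "threats":       ["new low-price entrant"],
}

# Pair each internal strength with each external opportunity to generate
# candidate "SO" strategies.
so_options = [f"use '{s}' to pursue '{o}'"
              for s, o in product(swot["strengths"], swot["opportunities"])]

for option in so_options:
    print(option)
```

The same pairing can be repeated for the other quadrants (weakness-opportunity, strength-threat, weakness-threat) to cover the full matrix.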

In corporate strategy, Johnson, Scholes and Whittington present a model in which strategic options are evaluated against three key success criteria:
• Suitability (would it work?)
• Feasibility (can it be made to work?)
• Acceptability (will they work it?)

Suitability deals with the overall rationale of the strategy. The key point to consider is whether the strategy would address the key strategic issues underlined by the organisation's strategic position.
• Does it make economic sense?
• Would the organization obtain economies of scale or economies of scope?
• Would it be suitable in terms of environment and capabilities?

Tools that can be used to evaluate suitability include:
• Ranking strategic options
• Decision trees
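Ranking strategic options is typically a weighted-scoring exercise: each option is scored against each evaluation criterion, and scores are combined using criterion weights. The sketch below is illustrative only; the options, weights, and raw scores are all invented:

```python
# Illustrative weighted scoring of strategic options against the three
# Johnson/Scholes/Whittington criteria; all figures are invented.
criteria_weights = {"suitability": 0.5, "feasibility": 0.3, "acceptability": 0.2}

options = {
    "enter new market": {"suitability": 8, "feasibility": 5, "acceptability": 6},
    "cut costs":        {"suitability": 6, "feasibility": 9, "acceptability": 7},
    "merge":            {"suitability": 7, "feasibility": 4, "acceptability": 3},
}

def weighted_score(scores, weights):
    """Sum of criterion scores multiplied by criterion weights."""
    return sum(scores[c] * w for c, w in weights.items())

ranked = sorted(options,
                key=lambda o: weighted_score(options[o], criteria_weights),
                reverse=True)
print(ranked)
```

With these made-up numbers, the feasible cost-cutting option outranks the more suitable but harder-to-execute alternatives, which is exactly the trade-off the three criteria are meant to surface.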


Feasibility is concerned with whether the resources required to implement the strategy are available, can be developed or obtained. Resources include funding, people, time and information. Tools that can be used to evaluate feasibility include:
• Cash flow analysis and forecasting
• Break-even analysis
• Resource deployment analysis
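Break-even analysis, for instance, reduces to simple arithmetic: the break-even volume is fixed costs divided by the unit contribution margin (price minus variable cost per unit). A minimal sketch, with all figures hypothetical:

```python
def breakeven_units(fixed_costs, unit_price, unit_variable_cost):
    """Units that must be sold before total contribution covers fixed costs."""
    contribution_margin = unit_price - unit_variable_cost
    if contribution_margin <= 0:
        raise ValueError("price must exceed variable cost per unit")
    return fixed_costs / contribution_margin

# Hypothetical figures: 500,000 in fixed costs, unit price 120, variable cost 70.
units = breakeven_units(500_000, 120, 70)
print(units)  # 10000.0
```

If the strategy's realistic sales forecast falls below this volume, the option fails the feasibility test on this criterion.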

Acceptability is concerned with the expectations of the identified stakeholders (mainly shareholders, employees and customers) regarding the expected performance outcomes, which can be return, risk and stakeholder reactions.

• Return deals with the benefits expected by the stakeholders (financial and non-financial). For example, shareholders would expect an increase in their wealth, employees would expect improvement in their careers and customers would expect better value for money.
• Risk deals with the probability and consequences of failure of a strategy (financial and non-financial).
• Stakeholder reactions deal with anticipating the likely reaction of stakeholders. Shareholders could oppose the issuing of new shares, employees and unions could oppose outsourcing for fear of losing their jobs, and customers could have concerns over a merger with regard to quality and support.

Tools that can be used to evaluate acceptability include:
• What-if analysis
• Stakeholder mapping
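A what-if analysis holds the model fixed and varies one assumption at a time to see how the expected return moves. The sketch below varies an assumed sales-growth rate across three scenarios; the base revenue, margin, and growth figures are all invented:

```python
# What-if sketch: vary one assumption (sales growth) across scenarios and
# observe the projected profit. All figures are hypothetical.
def projected_profit(base_revenue, growth, margin):
    return base_revenue * (1 + growth) * margin

scenarios = {"pessimistic": -0.05, "expected": 0.05, "optimistic": 0.15}
outcomes = {name: projected_profit(1_000_000, g, 0.12)
            for name, g in scenarios.items()}
print(outcomes)
```

If the spread between the pessimistic and optimistic outcomes is wider than stakeholders will tolerate, the strategy fails the acceptability test on risk even when the expected return is attractive.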

General approaches
In general terms, there are two main approaches to strategic management, which are opposite but complement each other in some ways:

• The Industrial Organizational Approach
  o based on economic theory — deals with issues like competitive rivalry, resource allocation, economies of scale
  o assumptions — rationality, self-discipline behaviour, profit maximization
• The Sociological Approach
  o deals primarily with human interactions
  o assumptions — bounded rationality, satisficing behaviour, profit sub-optimality; an example of a company that currently operates this way is Google

The stakeholder-focused approach is an example of this modern approach to strategy.

Strategic management techniques can be viewed as bottom-up, top-down, or collaborative processes. In the bottom-up approach, employees submit proposals to their managers who, in turn, funnel the best ideas further up the organization. This is often accomplished by a capital budgeting process. Proposals are assessed using financial criteria such as return on investment or cost-benefit analysis. Cost underestimation and benefit overestimation are major sources of error. The proposals that are approved form the substance of a new strategy, all of which is done without a grand strategic design or a strategic architect. The top-down approach is the most common by far. In it, the CEO, possibly with the assistance of a strategic planning team, decides on the overall direction the company

should take. Some organizations are starting to experiment with collaborative strategic planning techniques that recognize the emergent nature of strategic decisions. Strategic decisions should focus on Outcome, Time remaining, and current Value/priority. The outcome comprises both the desired ending goal and the plan designed to reach that goal. Managing strategically requires paying attention to the time remaining to reach a particular level or goal and adjusting the pace and options accordingly. Value/priority relates to the shifting, relative concept of value-add. Strategic decisions should be based on the understanding that the value-add of whatever you are managing is a constantly changing reference point. An objective that begins with a high level of value-add may change due to the influence of internal and external factors. Strategic management, by definition, is managing with a heads-up approach to outcome, time and relative value, and actively making course corrections as needed.
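The financial screening step in the bottom-up approach can be sketched as a simple return-on-investment ranking. The proposals and figures below are invented for illustration:

```python
# Bottom-up screening sketch: rank hypothetical capital proposals by simple
# ROI, i.e. (expected gain - cost) / cost. All figures are invented.
def roi(expected_gain, cost):
    return (expected_gain - cost) / cost

proposals = {
    "upgrade plant":    {"gain": 130_000, "cost": 100_000},
    "new product line": {"gain": 300_000, "cost": 200_000},
    "open branch":      {"gain": 220_000, "cost": 180_000},
}

ranked = sorted(proposals,
                key=lambda p: roi(proposals[p]["gain"], proposals[p]["cost"]),
                reverse=True)
print(ranked)
```

Note that this simple screen is exactly where the cost-underestimation and benefit-overestimation errors mentioned above enter: the ranking is only as good as the gain and cost estimates fed into it.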

The strategy hierarchy
In most (large) corporations there are several levels of management. Strategic management is the highest of these levels in the sense that it is the broadest, applying to all parts of the firm, while also incorporating the longest time horizon. It gives direction to corporate values, corporate culture, corporate goals, and corporate missions. Under this broad corporate strategy there are typically business-level competitive strategies and functional unit strategies.

Corporate strategy refers to the overarching strategy of the diversified firm. Such a corporate strategy answers the questions of "which businesses should we be in?" and "how does being in these businesses create synergy and/or add to the competitive advantage of the corporation as a whole?"

Business strategy refers to the aggregated strategies of a single business firm or a strategic business unit (SBU) in a diversified corporation. According to Michael Porter, a firm must formulate a business strategy that incorporates either cost leadership, differentiation, or focus to achieve a sustainable competitive advantage and long-term success. Alternatively, according to W. Chan Kim and Renée Mauborgne, an organization can achieve high growth and profits by creating a Blue Ocean Strategy that breaks the previous value-cost trade-off by simultaneously pursuing both differentiation and low cost.

Functional strategies include marketing strategies, new product development strategies, human resource strategies, financial strategies, legal strategies, supply-chain strategies, and information technology management strategies. The emphasis is on short- and medium-term plans and is limited to the domain of each department’s functional responsibility. Each functional department attempts to do its part in meeting overall corporate objectives, and hence to some extent their strategies are derived from broader corporate strategies.
Many companies feel that a functional organizational structure is not an efficient way to organize activities so they have reengineered according to processes or SBUs. A strategic business unit is a semi-autonomous unit that is usually responsible for its own budgeting, new product decisions, hiring decisions, and price setting. An SBU is treated as an internal profit centre by corporate headquarters. A technology strategy, for example, although it is focused on technology as a means of achieving an organization's overall objective(s), may include dimensions that are beyond the scope of a single business unit, engineering organization or IT department. An additional level of strategy called operational strategy was encouraged by Peter Drucker in his theory of management by objectives (MBO). It is very narrow in focus and deals with day-to-day operational activities such as scheduling criteria. It must operate within a budget but is not at liberty to adjust or create that budget. Operational level strategies are informed by business level strategies which, in turn, are informed by corporate level strategies. Since the turn of the millennium, some firms have reverted to a simpler strategic structure driven by advances in information technology. It is felt that knowledge management systems should be used to share information and create common goals. Strategic divisions are thought to hamper this process. This notion of strategy has been captured under the rubric of dynamic strategy, popularized by Carpenter and Sanders's textbook. This work builds

on that of Brown and Eisenhart as well as Christensen and portrays firm strategy, both business and corporate, as necessarily embracing ongoing strategic change, and the seamless integration of strategy formulation and implementation. Such change and implementation are usually built into the strategy through the staging and pacing facets.

Historical development of strategic management
Birth of strategic management
Strategic management as a discipline originated in the 1950s and 60s. Although there were numerous early contributors to the literature, the most influential pioneers were Alfred D. Chandler, Philip Selznick, Igor Ansoff, and Peter Drucker.

Alfred Chandler recognized the importance of coordinating the various aspects of management under one all-encompassing strategy. Prior to this time the various functions of management were separate with little overall coordination or strategy. Interactions between functions or between departments were typically handled by a boundary position, that is, one or two managers who relayed information back and forth between two departments. Chandler also stressed the importance of taking a long-term perspective when looking to the future. In his 1962 groundbreaking work Strategy and Structure, Chandler showed that a long-term coordinated strategy was necessary to give a company structure, direction, and focus. He says it concisely: “structure follows strategy.”

In 1957, Philip Selznick introduced the idea of matching the organization's internal factors with external environmental circumstances.[5] This core idea was developed into what we now call SWOT analysis by Learned, Andrews, and others at the Harvard Business School General Management Group. Strengths and weaknesses of the firm are assessed in light of the opportunities and threats from the business environment.

Igor Ansoff built on Chandler's work by adding a range of strategic concepts and inventing a whole new vocabulary. He developed a strategy grid that compared market penetration strategies, product development strategies, market development strategies and horizontal and vertical integration and diversification strategies. He felt that management could use these strategies to systematically prepare for future opportunities and challenges.
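Ansoff's grid is mechanical enough to sketch in a few lines: the recommended growth strategy depends only on whether the product and the market are existing or new. The labels below follow the conventional matrix:

```python
# Ansoff-style growth grid: the strategy label depends on whether the product
# and the market are existing or new.
def ansoff_strategy(new_product: bool, new_market: bool) -> str:
    if not new_product and not new_market:
        return "market penetration"
    if not new_product and new_market:
        return "market development"
    if new_product and not new_market:
        return "product development"
    return "diversification"

print(ansoff_strategy(new_product=True, new_market=True))  # diversification
```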
In his 1965 classic Corporate Strategy, he developed the gap analysis still used today, in which we must understand the gap between where we are currently and where we would like to be, then develop what he called “gap reducing actions”.

Peter Drucker was a prolific strategy theorist, author of dozens of management books, with a career spanning five decades. His contributions to strategic management were many but two are most important. Firstly, he stressed the importance of objectives. An organization without clear objectives is like a ship without a rudder. As early as 1954 he was developing a theory of management based on objectives. This evolved into his theory of management by objectives (MBO). According to Drucker, the procedure of setting objectives and monitoring your progress towards them should permeate the entire organization, top to bottom. His other seminal contribution was in predicting the importance of what today we would call intellectual capital. He predicted the rise of what he called the “knowledge worker” and explained the consequences of this for management. He said that knowledge work is non-hierarchical. Work would be carried out in teams with the person most knowledgeable in the task at hand being the temporary leader.

In 1985, Ellen-Earle Chaffee summarized what she thought were the main elements of strategic management theory by the 1970s:
• Strategic management involves adapting the organization to its business environment.
• Strategic management is fluid and complex. Change creates novel combinations of circumstances requiring unstructured non-repetitive responses.
• Strategic management affects the entire organization by providing direction.
• Strategic management involves both strategy formation (she called it content) and also strategy implementation (she called it process).
• Strategic management is partially planned and partially unplanned.
• Strategic management is done at several levels: overall corporate strategy, and individual business strategies.
• Strategic management involves both conceptual and analytical thought processes.

Growth and portfolio theory
In the 1970s much of strategic management dealt with size, growth, and portfolio theory. The PIMS study was a long-term study, started in the 1960s and lasting 19 years, that attempted to understand the Profit Impact of Marketing Strategies (PIMS), particularly the effect of market share. Started at General Electric, it moved to Harvard in the early 1970s and then to the Strategic Planning Institute in the late 1970s; it now contains decades of information on the relationship between profitability and strategy. Its initial conclusion was unambiguous: the greater a company's market share, the greater will be its rate of profit. High market share provides volume and economies of scale. It also provides experience and learning curve advantages. The combined effect is increased profits.[9] The study's conclusions continue to be drawn on by academics and companies today: "PIMS provides compelling quantitative evidence as to which business strategies work and don't work" - Tom Peters.

The benefits of high market share naturally lead to an interest in growth strategies. The relative advantages of horizontal integration, vertical integration, diversification, franchises, mergers and acquisitions, joint ventures, and organic growth were discussed. The most appropriate market dominance strategies were assessed given the competitive and regulatory environment. There was also research that indicated that a low market share strategy could also be very profitable. Schumacher (1973),[10] Woo and Cooper (1982),[11] Levenson (1984),[12] and later Traverso (2002)[13] showed how smaller niche players obtained very high returns. By the early 1980s the paradoxical conclusion was that high market share and low market share companies were often very profitable but most of the companies in between were not. This was sometimes called the “hole in the middle” problem. This anomaly would be explained by Michael Porter in the 1980s.
The management of diversified organizations required new techniques and new ways of thinking. The first CEO to address the problem of a multi-divisional company was Alfred Sloan at General Motors. GM was decentralized into semi-autonomous “strategic business units” (SBUs), but with centralized support functions.

One of the most valuable concepts in the strategic management of multi-divisional companies was portfolio theory. In the previous decade Harry Markowitz and other financial theorists developed the theory of portfolio analysis. It was concluded that a broad portfolio of financial assets could reduce specific risk. In the 1970s marketers extended the theory to product portfolio decisions and managerial strategists extended it to operating division portfolios. Each of a company’s operating divisions was seen as an element in the corporate portfolio. Each operating division (also called a strategic business unit) was treated as a semi-independent profit center with its own revenues, costs, objectives, and strategies. Several techniques were developed to analyze the relationships between elements in a portfolio. B.C.G. Analysis, for example, was developed by the Boston Consulting Group in the early 1970s. This was the theory that gave us the wonderful image of a CEO sitting on a stool milking a cash cow. Shortly after that the G.E. multi-factor model was developed by General Electric. Companies continued to diversify until the 1980s when it was realized that in many cases a portfolio of operating divisions was worth more as separate completely independent companies.
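The BCG classification itself is a simple two-way split on market growth and relative market share. The sketch below uses the conventional cut-offs (10% market growth, relative share of 1.0), but both the cut-offs and the SBU figures are illustrative assumptions:

```python
# BCG-style portfolio classification. The 10% growth and 1.0 relative-share
# cut-offs are the conventional textbook values; the SBU data are invented.
def bcg_quadrant(market_growth, relative_share,
                 growth_cutoff=0.10, share_cutoff=1.0):
    high_growth = market_growth >= growth_cutoff
    high_share = relative_share >= share_cutoff
    if high_growth and high_share:
        return "star"
    if high_growth:
        return "question mark"
    if high_share:
        return "cash cow"
    return "dog"

portfolio = {
    "division A": (0.15, 1.8),   # fast-growing market, leading share
    "division B": (0.03, 2.2),   # mature market, leading share
    "division C": (0.18, 0.4),   # fast-growing market, trailing share
    "division D": (0.02, 0.3),   # mature market, trailing share
}
labels = {name: bcg_quadrant(g, s) for name, (g, s) in portfolio.items()}
print(labels)
```

The managerial reading is then the familiar one: cash cows fund stars and selected question marks, and dogs are candidates for divestment.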


The marketing revolution
The 1970s also saw the rise of the marketing-oriented firm. From the beginnings of capitalism it was assumed that the key requirement of business success was a product of high technical quality. If you produced a product that worked well and was durable, it was assumed you would have no difficulty selling it at a profit. This was called the production orientation and it was generally true that good products could be sold without effort, encapsulated in the saying "Build a better mousetrap and the world will beat a path to your door." This was largely due to the growing numbers of affluent and middle class people that capitalism had created. But after the untapped demand caused by the second world war was saturated in the 1950s it became obvious that products were not selling as easily as they had been. The answer was to concentrate on selling. The 1950s and 1960s are known as the sales era and the guiding philosophy of business of the time is today called the sales orientation. In the early 1970s Theodore Levitt and others at Harvard argued that the sales orientation had things backward. They claimed that instead of producing products then trying to sell them to the customer, businesses should start with the customer, find out what they wanted, and then produce it for them. The customer became the driving force behind all strategic business decisions. This marketing orientation, in the decades since its introduction, has been reformulated and repackaged under numerous names including customer orientation, marketing philosophy, customer intimacy, customer focus, customer driven, and market focused.

The Japanese challenge
By the late 70s, Americans had started to notice how successful Japanese industry had become. In industry after industry, including steel, watches, ship building, cameras, autos, and electronics, the Japanese were surpassing American and European companies. Westerners wanted to know why. Numerous theories purported to explain the Japanese success including:
• Higher employee morale, dedication, and loyalty;
• Lower cost structure, including wages;
• Effective government industrial policy;
• Modernization after WWII leading to high capital intensity and productivity;
• Economies of scale associated with increased exporting;
• Relatively low value of the Yen leading to low interest rates and capital costs, low dividend expectations, and inexpensive exports;
• Superior quality control techniques such as Total Quality Management and other systems introduced by W. Edwards Deming in the 1950s and 60s.[14]

Although there was some truth to all these potential explanations, there was clearly something missing. In fact, by 1980 the Japanese cost structure was higher than the American, and post-WWII reconstruction was nearly 40 years in the past. The first management theorist to suggest an explanation was Richard Pascale. In 1981, Richard Pascale and Anthony Athos in The Art of Japanese Management claimed that the main reason for Japanese success was their superior management techniques.[15] They divided management into 7 aspects (also known as the McKinsey 7S Framework): Strategy, Structure, Systems, Skills, Staff, Style, and Supraordinate goals (which we would now call shared values). The first three of the 7 S's were called hard factors, and this is where American companies excelled. The remaining four factors (skills, staff, style, and shared values) were called soft factors and were not well understood by American businesses of the time (for details on the role of soft and hard factors see Wickens P.D. 1995). Americans did not yet place great value on corporate culture, shared values and beliefs, and social cohesion in the workplace. In Japan the task of management was seen as managing the whole complex of human needs: economic, social, psychological, and spiritual. In America work was seen as something separate from the rest of one's life. It was quite common for Americans to exhibit a very different personality at work compared to the rest of their lives. Pascale also highlighted the difference between decision-making styles: hierarchical in America, and consensus in Japan. He also claimed that American business lacked long-term vision, preferring instead to apply management fads and theories in a piecemeal fashion.

One year later, The Mind of the Strategist was released in America by Kenichi Ohmae, the head of McKinsey & Co.'s Tokyo office.[16] (It was originally published in Japan in 1975.) He claimed that strategy in America was too analytical. Strategy should be a creative art: it is a frame of mind that requires intuition and intellectual flexibility. He claimed that Americans constrained their strategic options by thinking in terms of analytical techniques, rote formula, and step-by-step processes. He compared the culture of Japan, in which vagueness, ambiguity, and tentative decisions were acceptable, to American culture, which valued fast decisions.

Also in 1982, Tom Peters and Robert Waterman released a study that would respond to the Japanese challenge head on.[17] Peters and Waterman, who had several years earlier collaborated with Pascale and Athos at McKinsey & Co., asked “What makes an excellent company?” They looked at 62 companies that they thought were fairly successful. Each was subjected to six performance criteria. To be classified as an excellent company, it had to be above the 50th percentile in 4 of the 6 performance metrics for 20 consecutive years. Forty-three companies passed the test. They then studied these successful companies and interviewed key executives. They concluded in In Search of Excellence that there were 8 keys to excellence that were shared by all 43 firms. They are:
• A bias for action — Do it. Try it. Don’t waste time studying it with multiple reports and committees.
• Customer focus — Get close to the customer. Know your customer.
• Entrepreneurship — Even big companies act and think small by giving people the authority to take initiatives.
• Productivity through people — Treat your people with respect and they will reward you with productivity.
• Value-oriented CEOs — The CEO should actively propagate corporate values throughout the organization.
• Stick to the knitting — Do what you know well.
• Keep things simple and lean — Complexity encourages waste and confusion.
• Simultaneously centralized and decentralized — Have tight centralized control while also allowing maximum individual autonomy.
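The Peters and Waterman screen for "excellent" companies is itself mechanical: above the 50th percentile on at least 4 of 6 performance metrics, in every year examined. A minimal sketch, with entirely invented percentile data:

```python
# Sketch of the excellence screen: a firm qualifies in a year if it beats the
# 50th percentile on at least 4 of 6 metrics, and must qualify every year.
# The two firms' percentile series below are invented.
def is_excellent(yearly_percentiles, metrics_needed=4, cutoff=50):
    return all(sum(p > cutoff for p in year) >= metrics_needed
               for year in yearly_percentiles)

firm_a = [[60, 70, 55, 52, 40, 45], [65, 72, 58, 51, 42, 44]]  # 4 of 6 each year
firm_b = [[60, 70, 55, 48, 40, 45], [65, 72, 58, 51, 42, 44]]  # only 3 in year 1
print(is_excellent(firm_a), is_excellent(firm_b))
```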

The basic blueprint on how to compete against the Japanese had been drawn. But as J.E. Rehfeld (1994) explains it is not a straightforward task due to differences in culture.[18] A certain type of alchemy was required to transform knowledge from various cultures into a management style that allows a specific company to compete in a globally diverse world. He says, for example, that Japanese-style kaizen (continuous improvement) techniques, although suitable for people socialized in Japanese culture, have not been successful when implemented in the U.S. unless they are modified significantly. In 2009, industry consultants Mark Blaxill and Ralph Eckardt suggested that much of the Japanese business dominance that began in the mid 1970s was the direct result of competition enforcement efforts by the Federal Trade Commission (FTC) and U.S. Department of Justice (DOJ). In 1975 the FTC reached a settlement with Xerox Corporation in its anti-trust lawsuit. (At the time, the FTC was under the direction of Frederic M. Scherer). The 1975 Xerox consent decree forced the licensing of the company’s entire patent portfolio, mainly to Japanese competitors. (See "compulsory license.") This action marked the start of an activist approach to managing competition by the FTC and DOJ, which resulted in the compulsory licensing of tens of thousands of patents from some of America's leading companies, including IBM, AT&T, DuPont, Bausch & Lomb, and Eastman Kodak.

Within four years of the consent decree, Xerox's share of the U.S. copier market dropped from nearly 100% to less than 14%. Between 1950 and 1980 Japanese companies consummated more than 35,000 foreign licensing agreements, mostly with U.S. companies, for free or low-cost licenses made possible by the FTC and DOJ. The post-1975 era of anti-trust initiatives by Washington D.C. economists at the FTC corresponded directly with the

rapid, unprecedented rise in Japanese competitiveness and a simultaneous stalling of the U.S. manufacturing economy.[19]

Competitive advantage
The Japanese challenge shook the confidence of the western business elite, but detailed comparisons of the two management styles and examinations of successful businesses convinced westerners that they could overcome the challenge. The 1980s and early 1990s saw a plethora of theories explaining exactly how this could be done. They cannot all be detailed here, but some of the more important strategic advances of the decade are explained below.

Gary Hamel and C. K. Prahalad declared that strategy needs to be more active and interactive; less “arm-chair planning” was needed. They introduced terms like strategic intent and strategic architecture.[20][21] Their most well known advance was the idea of core competency. They showed how important it was to know the one or two key things that your company does better than the competition.[22]

Active strategic management required active information gathering and active problem solving. In the early days of Hewlett-Packard (HP), Dave Packard and Bill Hewlett devised an active management style that they called management by walking around (MBWA). Senior HP managers were seldom at their desks. They spent most of their days visiting employees, customers, and suppliers. This direct contact with key people provided them with a solid grounding from which viable strategies could be crafted. The MBWA concept was popularized in 1985 by a book by Tom Peters and Nancy Austin.[23] Japanese managers employ a similar system, which originated at Honda, and is sometimes called the 3 G's (Genba, Genbutsu, and Genjitsu, which translate into “actual place”, “actual thing”, and “actual situation”).

Probably the most influential strategist of the decade was Michael Porter. He introduced many new concepts including: 5 forces analysis, generic strategies, the value chain, strategic groups, and clusters. In 5 forces analysis he identifies the forces that shape a firm's strategic environment. It is like a SWOT analysis with structure and purpose.
It shows how a firm can use these forces to obtain a sustainable competitive advantage. Porter modified Chandler's dictum about structure following strategy by introducing a second level of structure: organizational structure follows strategy, which in turn follows industry structure. Porter's generic strategies detail the interaction between cost minimization strategies, product differentiation strategies, and market focus strategies. Although he did not introduce these terms, he showed the importance of choosing one of them rather than trying to position your company between them. He also challenged managers to see their industry in terms of a value chain. A firm will be successful only to the extent that it contributes to the industry's value chain. This forced management to look at its operations from the customer's point of view. Every operation should be examined in terms of what value it adds in the eyes of the final customer.

In 1993, John Kay took the idea of the value chain to a financial level, claiming “adding value is the central purpose of business activity”, where adding value is defined as the difference between the market value of outputs and the cost of inputs (including capital), all divided by the firm's net output. Borrowing from Gary Hamel and Michael Porter, Kay claims that the role of strategic management is to identify your core competencies and then assemble a collection of assets that will increase value added and provide a competitive advantage. He claims that there are three types of capabilities that can do this: innovation, reputation, and organizational structure.

The 1980s also saw the widespread acceptance of positioning theory. Although the theory originated with Jack Trout in 1969, it did not gain wide acceptance until Al Ries and Jack Trout wrote their classic book “Positioning: The Battle For Your Mind” (1979).
The basic premise is that a strategy should not be judged by internal company factors but by the way customers see it relative to the competition. Crafting and implementing a strategy involves creating a position in the mind of the collective consumer. Several techniques were applied to positioning theory, some newly invented but most borrowed from other disciplines. Perceptual mapping, for example, creates visual displays of the relationships between positions. Multidimensional scaling, discriminant analysis, factor analysis, and conjoint analysis are mathematical techniques used to determine the most relevant characteristics (called dimensions or factors) upon which positions should be based. Preference regression can be used to determine vectors of ideal positions, and cluster analysis can identify clusters of positions.

Others felt that internal company resources were the key. In 1992, Jay Barney, for example, saw strategy as assembling the optimum mix of resources, including human, technology, and suppliers, and then configuring them in unique and sustainable ways.[24]

Michael Hammer and James Champy felt that these resources needed to be restructured.[25] This process, which they labeled reengineering, involved organizing a firm's assets around whole processes rather than tasks. In this way a team of people saw a project through, from inception to completion. This avoided functional silos where isolated departments seldom talked to each other. It also eliminated waste due to functional overlap and cumbersome interdepartmental communications.

In 1989, Richard Lester and the researchers at the MIT Industrial Performance Center identified seven best practices and concluded that firms must accelerate the shift away from the mass production of low-cost standardized products. The seven areas of best practice were:[26]
• Simultaneous continuous improvement in cost, quality, service, and product innovation
• Breaking down organizational barriers between departments
• Eliminating layers of management, creating flatter organizational hierarchies
• Closer relationships with customers and suppliers
• Intelligent use of new technology
• Global focus
• Improving human resource skills

The search for “best practices” is also called benchmarking.[27] This involves determining where you need to improve, finding an organization that is exceptional in this area, then studying the company and applying its best practices in your firm.

A large group of theorists felt the area where Western business was most lacking was product quality. People like W. Edwards Deming,[28] Joseph M. Juran,[29] A. Kearney,[30] Philip Crosby,[31] and Armand Feigenbaum[32] suggested quality improvement techniques like total quality management (TQM), continuous improvement (kaizen), lean manufacturing, Six Sigma, and return on quality (ROQ).

An equally large group of theorists felt that poor customer service was the problem. People like James Heskett (1988),[33] Earl Sasser (1995), William Davidow,[34] Len Schlesinger,[35] A. Parasuraman (1988), Len Berry,[36] Jane Kingman-Brundage,[37] Christopher Hart, and Christopher Lovelock (1994) gave us fishbone diagramming, service charting, Total Customer Service (TCS), the service profit chain, service gaps analysis, the service encounter, strategic service vision, service mapping, and service teams. Their underlying assumption was that there is no better source of competitive advantage than a continuous stream of delighted customers.

Process management uses some of the techniques from product quality management and some of the techniques from customer service management. It looks at an activity as a sequential process. The objective is to find inefficiencies and make the process more effective. Although the procedures have a long history, dating back to Taylorism, the scope of their applicability has been greatly widened, leaving no aspect of the firm free from potential process improvements. Because of the broad applicability of process management techniques, they can be used as a basis for competitive advantage.

Some realized that businesses were spending much more on acquiring new customers than on retaining current ones.
Carl Sewell,[38] Frederick F. Reichheld,[39] C. Gronroos,[40] and Earl Sasser[41] showed us how a competitive advantage could be found in ensuring that customers returned again and again. This has come to be known as the loyalty effect, after Reichheld's book of the same name, in which he broadens the concept to include employee loyalty, supplier loyalty, distributor loyalty, and shareholder loyalty. They also developed techniques for estimating the lifetime value of a loyal customer, called customer lifetime value (CLV).

A significant movement started that attempted to recast selling and marketing techniques into a long-term endeavor that created a sustained relationship with customers (called relationship selling, relationship marketing, and customer relationship management). Customer relationship management (CRM) software (and its many variants) became an integral tool that sustained this trend.

James Gilmore and Joseph Pine found competitive advantage in mass customization.[42] Flexible manufacturing techniques allowed businesses to individualize products for each customer without losing economies of scale. This effectively turned the product into a service. They also realized that if a service is mass customized by creating a “performance” for each individual client, that service would be transformed into an “experience”. Their book, The Experience Economy,[43] along with the work of Bernd Schmitt, convinced many to see service provision as a form of theatre. This school of thought is sometimes referred to as customer experience management (CEM).

Like Peters and Waterman a decade earlier, James Collins and Jerry Porras spent years conducting empirical research on what makes great companies. Six years of research uncovered a key underlying principle behind the 19 successful companies that they studied: they all encourage and preserve a core ideology that nurtures the company. Even though strategy and tactics change daily, the companies were nevertheless able to maintain a core set of values. These core values encourage employees to build an organization that lasts.
In Built To Last (1994) they claim that short-term profit goals, cost cutting, and restructuring will not stimulate dedicated employees to build a great company that will endure.[44] In 2000, Collins coined the term “built to flip” to describe the prevailing business attitudes in Silicon Valley. It describes a business culture where technological change inhibits a long-term focus. He also popularized the concept of the BHAG (Big Hairy Audacious Goal).

Arie de Geus (1997) undertook a similar study and obtained similar results. He identified four key traits of companies that had prospered for 50 years or more. They are:
• Sensitivity to the business environment — the ability to learn and adjust
• Cohesion and identity — the ability to build a community with personality, vision, and purpose
• Tolerance and decentralization — the ability to build relationships
• Conservative financing

A company with these key characteristics he called a living company because it is able to perpetuate itself. If a company emphasizes knowledge rather than finance, and sees itself as an ongoing community of human beings, it has the potential to become great and endure for decades. Such an organization is an organic entity capable of learning (he called it a “learning organization”) and capable of creating its own processes, goals, and persona.

There are numerous ways by which a firm can try to create a competitive advantage; some will work but many will not. To help firms avoid a hit-and-miss approach to the creation of competitive advantage, Will Mulcaster suggests that firms engage in a dialogue that centres around the question "Will the proposed competitive advantage create Perceived Differential Value?" The dialogue should raise a series of other pertinent questions, including:
• "Will the proposed competitive advantage create something that is different from the competition?"
• "Will the difference add value in the eyes of potential customers?" - This question will entail a discussion of the combined effects of price, product features and consumer perceptions.
• "Will the product add value for the firm?" - Answering this question will require an examination of cost effectiveness and the pricing strategy.


The military theorists
In the 1980s some business strategists realized that there was a vast knowledge base stretching back thousands of years that they had barely examined. They turned to military strategy for guidance. Military strategy books such as The Art of War by Sun Tzu, On War by von Clausewitz, and the Little Red Book of Mao Zedong became instant business classics. From Sun Tzu they learned the tactical side of military strategy and specific tactical prescriptions. From von Clausewitz they learned the dynamic and unpredictable nature of military strategy. From Mao Zedong they learned the principles of guerrilla warfare. The main marketing warfare books were:
• Business War Games by Barrie James, 1984
• Marketing Warfare by Al Ries and Jack Trout, 1986
• Leadership Secrets of Attila the Hun by Wess Roberts, 1987

Philip Kotler was a well-known proponent of marketing warfare strategy. There were generally thought to be four types of business warfare theories. They are:
• Offensive marketing warfare strategies
• Defensive marketing warfare strategies
• Flanking marketing warfare strategies
• Guerrilla marketing warfare strategies

The marketing warfare literature also examined leadership and motivation, intelligence gathering, types of marketing weapons, logistics, and communications. By the turn of the century, marketing warfare strategies had gone out of favour. It was felt that they were limiting: there were many situations in which non-confrontational approaches were more appropriate. In 1989, Dudley Lynch and Paul L. Kordis published Strategy of the Dolphin: Scoring a Win in a Chaotic World. “The Strategy of the Dolphin” was developed to give guidance as to when to use aggressive strategies and when to use passive strategies, and a variety of strategies along that spectrum were developed. In 1993, J. Moore used a similar metaphor.[46] Instead of using military terms, he created an ecological theory of predators and prey (see ecological model of competition), a sort of Darwinian management strategy in which market interactions mimic long-term ecological stability.

Strategic change
In 1969, Peter Drucker coined the phrase Age of Discontinuity to describe the way change forces disruptions into the continuity of our lives.[47] In an age of continuity, attempts to predict the future by extrapolating from the past can be somewhat accurate. But according to Drucker, we are now in an age of discontinuity, and extrapolating from the past is hopelessly ineffective. We cannot assume that trends that exist today will continue into the future. He identifies four sources of discontinuity: new technologies, globalization, cultural pluralism, and knowledge capital.

In 1970, Alvin Toffler in Future Shock described a trend towards accelerating rates of change.[48] He illustrated how social and technological norms had shorter lifespans with each generation, and he questioned society's ability to cope with the resulting turmoil and anxiety. In past generations, periods of change were always punctuated with times of stability. This allowed society to assimilate the change and deal with it before the next change arrived. But these periods of stability are getting shorter, and by the late 20th century they had all but disappeared. In 1980, in The Third Wave, Toffler characterized this shift to relentless change as the defining feature of the third phase of civilization (the first two phases being the agricultural and industrial waves).[49] He claimed that the dawn of this new phase will cause great anxiety for those that grew up in the previous phases, and will cause much conflict and opportunity in the business world. Hundreds of authors, particularly since the early 1990s, have attempted to explain what this means for business strategy.

In 2000, Gary Hamel discussed strategic decay, the notion that the value of all strategies, no matter how brilliant, decays over time.[50] In 1978, Derek Abell (Abell, D. 1978) described strategic windows and stressed the importance of the timing (both entrance and exit) of any given strategy. This has led some strategic planners to build planned obsolescence into their strategies.[51]

In 1989, Charles Handy identified two types of change.[52] Strategic drift is a gradual change that occurs so subtly that it is not noticed until it is too late. By contrast, transformational change is sudden and radical. It is typically caused by discontinuities (or exogenous shocks) in the business environment. The point where a new trend is initiated is called a strategic inflection point by Andy Grove. Inflection points can be subtle or radical. In 2000, Malcolm Gladwell discussed the importance of the tipping point, that point where a trend or fad acquires critical mass and takes off.[53]

In 1983, Noel Tichy wrote that because we are all beings of habit we tend to repeat what we are comfortable with.[54] He wrote that this is a trap that constrains our creativity, prevents us from exploring new ideas, and hampers our dealing with the full complexity of new issues. He developed a systematic method of dealing with change that involved looking at any new issue from three angles: technical and production, political and resource allocation, and corporate culture.

In 1990, Richard Pascale (Pascale, R. 1990) wrote that relentless change requires that businesses continuously reinvent themselves.[55] His famous maxim is “Nothing fails like success”, by which he means that what was a strength yesterday becomes the root of weakness today. We tend to depend on what worked yesterday and refuse to let go of what worked so well for us in the past. Prevailing strategies become self-confirming. To avoid this trap, businesses must stimulate a spirit of inquiry and healthy debate. They must encourage a creative process of self-renewal based on constructive conflict.

Peters and Austin (1985) stressed the importance of nurturing champions and heroes. They said we have a tendency to dismiss new ideas, so to overcome this, we should support those few people in the organization who have the courage to put their career and reputation on the line for an unproven idea.

In 1996, Adrian Slywotzky showed how changes in the business environment are reflected in value migrations between industries, between companies, and within companies.[56] He claimed that recognizing the patterns behind these value migrations is necessary if we wish to understand the world of chaotic change. In “Profit Patterns” (1999) he described businesses as being in a state of strategic anticipation as they try to spot emerging patterns. Slywotzky and his team identified 30 patterns that have transformed industry after industry.[57]

In 1997, Clayton Christensen (1997) took the position that great companies can fail precisely because they do everything right, since the capabilities of the organization also define its disabilities.[58] Christensen's thesis is that outstanding companies lose their market leadership when confronted with disruptive technology.
He called the approach to discovering the emerging markets for disruptive technologies agnostic marketing, i.e., marketing under the implicit assumption that no one - not the company, not the customers - can know how or in what quantities a disruptive product can or will be used before they have experience using it.

A number of strategists use scenario planning techniques to deal with change. The way Peter Schwartz put it in 1991 is that strategic outcomes cannot be known in advance, so the sources of competitive advantage cannot be predetermined.[59] The fast-changing business environment is too uncertain for us to find sustainable value in formulas of excellence or competitive advantage. Instead, scenario planning is a technique in which multiple outcomes can be developed, their implications assessed, and their likeliness of occurrence evaluated. According to Pierre Wack, scenario planning is about insight, complexity, and subtlety, not about formal analysis and numbers.

In 1988, Henry Mintzberg looked at the changing world around him and decided it was time to reexamine how strategic management was done.[61][62] He examined the strategic process and concluded it was much more fluid and unpredictable than people had thought. Because of this, he could not point to one process that could be called strategic planning. Instead, Mintzberg concluded that there are five types of strategies:
• Strategy as plan - a direction, guide, course of action - intention rather than actual
• Strategy as ploy - a maneuver intended to outwit a competitor
• Strategy as pattern - a consistent pattern of past behaviour - realized rather than intended
• Strategy as position - locating of brands, products, or companies within the conceptual framework of consumers or other stakeholders - strategy determined primarily by factors outside the firm
• Strategy as perspective - strategy determined primarily by a master strategist

In 1998, Mintzberg developed these five types of management strategy into 10 “schools of thought”. These 10 schools are grouped into three categories. The first group is prescriptive or normative. It consists of the informal design and conception school, the formal planning school, and the analytical positioning school. The second group, consisting of six schools, is more concerned with how strategic management is actually done, rather than prescribing optimal plans or positions. The six schools are the entrepreneurial, visionary, or great-leader school; the cognitive or mental-process school; the learning, adaptive, or emergent-process school; the power or negotiation school; the corporate culture or collective-process school; and the business environment or reactive school. The third and final group consists of one school, the configuration or transformation school, a hybrid of the other schools organized into stages, organizational life cycles, or “episodes”.

In 1999, Constantinos Markides also wanted to reexamine the nature of strategic planning itself.[64] He describes strategy formation and implementation as an ongoing, never-ending, integrated process requiring continuous reassessment and reformation. Strategic management is planned and emergent, dynamic, and interactive. J. Moncrieff (1999) also stresses strategy dynamics.[65] He recognized that strategy is partially deliberate and partially unplanned. The unplanned element comes from two sources: emergent strategies (which result from the emergence of opportunities and threats in the environment) and strategies in action (ad hoc actions by many people from all parts of the organization).

Some business planners are starting to use a complexity theory approach to strategy. Complexity can be thought of as chaos with a dash of order. Chaos theory deals with turbulent systems that rapidly become disordered. Complexity is not quite so unpredictable.
It involves multiple agents interacting in such a way that a glimpse of structure may appear.

Information- and technology-driven strategy
Peter Drucker had theorized the rise of the “knowledge worker” back in the 1950s. He described how fewer workers would be doing physical labor and more would be applying their minds. In 1984, John Naisbitt theorized that the future would be driven largely by information: companies that managed information well could obtain an advantage; however, the profitability of what he calls the “information float” (information that the company had and others desired) would all but disappear as inexpensive computers made information more accessible.

Daniel Bell (1985) examined the sociological consequences of information technology, while Gloria Schuck and Shoshana Zuboff looked at psychological factors.[66] Zuboff, in her five-year study of eight pioneering corporations, made the important distinction between “automating technologies” and “informating technologies”. She studied the effect that both had on individual workers, managers, and organizational structures. She largely confirmed Peter Drucker's predictions, made three decades earlier, about the importance of flexible decentralized structure, work teams, knowledge sharing, and the central role of the knowledge worker. Zuboff also detected a new basis for managerial authority, based not on position or hierarchy but on knowledge (also predicted by Drucker), which she called “participative management”.[67]

In 1990, Peter Senge, who had collaborated with Arie de Geus at Dutch Shell, borrowed de Geus' notion of the learning organization, expanded it, and popularized it. The underlying theory is that a company's ability to gather, analyze, and use information is a necessary requirement for business success in the information age. (See organizational learning.) To do this, Senge claimed that an organization would need to be structured such that:[68]
• People can continuously expand their capacity to learn and be productive,
• New patterns of thinking are nurtured,
• Collective aspirations are encouraged, and
• People are encouraged to see the “whole picture” together.

Senge identified five disciplines of a learning organization. They are:

• Personal responsibility, self-reliance, and mastery — We accept that we are the masters of our own destiny. We make decisions and live with the consequences of them. When a problem needs to be fixed, or an opportunity exploited, we take the initiative to learn the required skills to get it done.
• Mental models — We need to explore our personal mental models to understand the subtle effect they have on our behaviour.
• Shared vision — The vision of where we want to be in the future is discussed and communicated to all. It provides guidance and energy for the journey ahead.
• Team learning — We learn together in teams. This involves a shift from “a spirit of advocacy to a spirit of enquiry”.
• Systems thinking — We look at the whole rather than the parts. This is what Senge calls the “Fifth discipline”. It is the glue that integrates the other four into a coherent strategy.

For an alternative approach to the “learning organization”, see Garratt, B. (1987).

Since 1990 many theorists have written on the strategic importance of information, including J.B. Quinn,[69] J. Carlos Jarillo,[70] D.L. Barton,[71] Manuel Castells,[72] J.P. Lieleskin,[73] Thomas Stewart,[74] K.E. Sveiby,[75] Gilbert J. Probst,[76] and Shapiro and Varian,[77] to name just a few.

Thomas A. Stewart, for example, uses the term intellectual capital to describe the investment an organization makes in knowledge. It is composed of human capital (the knowledge inside the heads of employees), customer capital (the knowledge inside the heads of customers that decide to buy from you), and structural capital (the knowledge that resides in the company itself). Manuel Castells describes a network society characterized by globalization, organizations structured as a network, instability of employment, and a social divide between those with access to information technology and those without.

Geoffrey Moore (1991) and R. Frank and P. Cook[78] also detected a shift in the nature of competition. In industries with high technology content, technical standards become established, and this gives the dominant firm a near monopoly. The same is true of networked industries in which interoperability requires compatibility between users. An example is word processor documents. Once a product has gained market dominance, other products, even far superior products, cannot compete. Moore showed how firms could attain this enviable position by using E.M. Rogers' five-stage adoption process and focusing on one group of customers at a time, using each group as a base for marketing to the next group. The most difficult step is making the transition between visionaries and pragmatists (see Crossing the Chasm). If successful, a firm can create a bandwagon effect in which the momentum builds and its product becomes a de facto standard.

Evans and Wurster describe how industries with a high information component are being transformed.[79] They cite Encarta's demolition of the Encyclopedia Britannica (whose sales have plummeted 80% since their peak of $650 million in 1990). Encarta's reign was speculated to be short-lived, eclipsed by collaborative encyclopedias like Wikipedia that can operate at very low marginal costs. Encarta was subsequently turned into an online service and dropped at the end of 2009. Evans also mentions the music industry, which is desperately looking for a new business model. The upstart information-savvy firms, unburdened by cumbersome physical assets, are changing the competitive landscape, redefining market segments, and disintermediating some channels. One manifestation of this is personalized marketing. Information technology allows marketers to treat each individual as a market of one. Traditional ideas of market segments will no longer be relevant if personalized marketing is successful.

The technology sector has provided some strategies directly. For example, from the software development industry, agile software development provides a model for shared development processes. Access to information systems has allowed senior managers to take a much more comprehensive view of strategic management than ever before. The most notable of the comprehensive systems is the balanced scorecard approach, developed in the early 1990s by Drs. Robert S. Kaplan (Harvard Business School) and David Norton (Kaplan, R. and Norton, D. 1992). It measures several factors (financial, marketing, production, organizational development, and new product development) to achieve a 'balanced' perspective.
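The balanced scorecard idea can be sketched as a small data structure: each perspective carries a handful of metrics and a weight, and one overall score balances all four views. This is only a minimal illustration; the metric names, scores, and equal weights below are hypothetical assumptions, not Kaplan and Norton's own measures.

```python
# A minimal balanced-scorecard sketch: four perspectives, each with
# hypothetical metrics scored 0-100, combined into one weighted view.
# All metric names, scores, and weights here are illustrative assumptions.

scorecard = {
    "financial":        {"weight": 0.25, "metrics": {"revenue growth": 70, "operating margin": 60}},
    "customer":         {"weight": 0.25, "metrics": {"satisfaction": 80, "retention": 75}},
    "internal process": {"weight": 0.25, "metrics": {"cycle time": 65, "defect rate": 85}},
    "learning/growth":  {"weight": 0.25, "metrics": {"training hours": 55, "new-product share": 50}},
}

def perspective_score(p):
    """Average the metric scores within one perspective."""
    m = p["metrics"]
    return sum(m.values()) / len(m)

def balanced_score(card):
    """Weight each perspective so no single view dominates the total."""
    return sum(p["weight"] * perspective_score(p) for p in card.values())

for name, p in scorecard.items():
    print(f"{name:>16}: {perspective_score(p):.1f}")
print(f"{'balanced':>16}: {balanced_score(scorecard):.1f}")
```

Equal weights keep any one perspective from dominating; in practice both the metrics and the weights would be tailored to the firm's strategy.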

Knowledge Adaptive Strategy
Most current approaches to business "strategy" focus on the mechanics of management (e.g., Drucker's operational "strategies") and as such are not true business strategy. In a post-industrial world, these operationally focused business strategies hinge on conventional sources of advantage that have essentially been eliminated:
• Scale used to be very important. But now, with access to capital and a global marketplace, scale is achievable by multiple organizations simultaneously. In many cases, it can literally be rented.
• Process improvement or “best practices” were once a favored source of advantage, but they were at best temporary, as they could be copied and adapted by competitors.
• Owning the customer had always been thought of as an important form of competitive advantage. Now, however, customer loyalty is far less important and more difficult to maintain, as new brands and products emerge all the time.

In such a world, differentiation, as elucidated by Michael Porter, Botten and McManus, is the only way to maintain economic or market superiority (i.e., comparative advantage) over competitors. A company must OWN the thing that differentiates it from competitors. Without IP ownership and protection, any product, process or scale advantage can be compromised or entirely lost. Competitors can copy them without fear of economic or legal consequences, thereby eliminating the advantage.

This principle is based on the idea of evolution: differentiation, selection, amplification and repetition. It is a form of strategy for dealing with complex adaptive systems, of which individuals, businesses, and economies are all examples. The principle is based on survival of the "fittest": the strategy that proves fittest after trial, error, and recombination is then employed to run the company in its current market. Failed strategic plans are either discarded or used for another aspect of a business. The trade-off between risk and return is taken into account when deciding which strategy to take. The Cynefin model and the adaptive cycles of businesses are both good ways to develop a knowledge adaptive strategy (KAS); see the literature on Panarchy and Cynefin. Analyzing the fitness landscape for a product, idea, or service helps develop a more adaptive strategy. (For an explanation and elucidation of the "post-industrial" worldview, see George Ritzer and Daniel Bell.)

Strategic decision making processes
Will Mulcaster argues that while much research and creative thought has been devoted to generating alternative strategies, too little work has been done on what influences the quality of strategic decision making and the effectiveness with which strategies are implemented. For instance, in retrospect it can be seen that the financial crisis of 2008-9 could have been avoided if the banks had paid more attention to the risks associated with their investments, but how should banks change the way they make decisions to improve the quality of their decisions in the future? Mulcaster's Managing Forces framework addresses this issue by identifying 11 forces that should be incorporated into the processes of decision making and strategic implementation. The 11 forces are: Time; Opposing forces; Politics; Perception; Holistic effects; Adding value; Incentives; Learning capabilities; Opportunity cost; Risk; Style—which can be remembered by using the mnemonic 'TOPHAILORS'.
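As a sketch, the eleven Managing Forces can be treated as a review checklist that a decision process walks through, recording whether each force was explicitly considered. The coverage score below is a hypothetical summary measure added for illustration, not part of Mulcaster's own framework.

```python
# The eleven Managing Forces as a decision-review checklist.
# Marking a force True means the decision process explicitly addressed it;
# the coverage ratio is a hypothetical summary measure, not Mulcaster's.

FORCES = [
    "Time", "Opposing forces", "Politics", "Perception",
    "Holistic effects", "Adding value", "Incentives",
    "Learning capabilities", "Opportunity cost", "Risk", "Style",
]

def review(considered):
    """Return (unaddressed forces, coverage ratio) for one decision."""
    missing = [f for f in FORCES if not considered.get(f, False)]
    coverage = (len(FORCES) - len(missing)) / len(FORCES)
    return missing, coverage

# Example: a proposal that weighed everything except internal politics.
considered = {f: True for f in FORCES}
considered["Politics"] = False
missing, coverage = review(considered)
print(missing, round(coverage, 2))  # ['Politics'] 0.91
```

Such a checklist does not make the decision; it only flags which forces a strategy review has skipped, which is exactly the gap in decision-making quality that Mulcaster highlights.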

The psychology of strategic management
Several psychologists have conducted studies to determine the psychological patterns involved in strategic management. Typically, senior managers have been asked how they go about making strategic decisions. A 1938 treatise by Chester Barnard, based on his own experience as a business executive, sees the process as informal, intuitive, non-routinized, and involving primarily oral, two-way communications. Barnard says, “The process is the sensing of the organization as a whole and the total situation relevant to it. It transcends the capacity of merely intellectual methods, and the techniques of discriminating the factors of the situation. The terms pertinent to it are ‘feeling’, ‘judgement’, ‘sense’, ‘proportion’, ‘balance’, ‘appropriateness’. It is a matter of art rather than science.”[81]

In 1973, Henry Mintzberg found that senior managers typically deal with unpredictable situations, so they strategize in ad hoc, flexible, dynamic, and implicit ways. He says, “The job breeds adaptive information-manipulators who prefer the live concrete situation. The manager works in an environment of stimulus-response, and he develops in his work a clear preference for live action.”[82]

In 1982, John Kotter studied the daily activities of 15 executives and concluded that they spent most of their time developing and working a network of relationships that provided general insights and specific details for strategic decisions. They tended to use “mental road maps” rather than systematic planning techniques.[83]

Daniel Isenberg's 1984 study of senior managers found that their decisions were highly intuitive.
Executives often sensed what they were going to do before they could explain why.[84] He claimed in 1986 that one of the reasons for this is the complexity of strategic decisions and the resultant information uncertainty.[85] Shoshana Zuboff (1988) claims that information technology is widening the divide between senior managers (who typically make strategic decisions) and operational-level managers (who typically make routine decisions). She claims that prior to the widespread use of computer systems, managers, even at the most senior level, engaged in both strategic decisions and routine administration, but as computers facilitated (she called it “deskilled”) routine processes, these activities were moved further down the hierarchy, leaving senior management free for strategic decision making. In 1977, Abraham Zaleznik identified a difference between leaders and managers. He describes leaders as visionaries who inspire; they care about substance, whereas managers are claimed to care about process, plans, and form.[86] He also claimed in 1989 that the rise of the manager was the main factor that caused the decline of

American business in the 1970s and 80s. The main difference between a leader and a manager is that a leader has followers while a manager has subordinates. In a capitalist society, leaders make decisions and managers usually follow or execute them.[87] Lack of leadership is most damaging at the level of strategic management, where it can paralyze an entire organization.[88] According to Corner, Kinicki, and Keats, strategic decision making in organizations occurs at two levels: individual and aggregate. They have developed a model of parallel strategic decision making. The model identifies two parallel processes that both involve getting attention, encoding information, storage and retrieval of information, strategic choice, strategic outcome, and feedback. The individual and organizational processes are not independent, however; they interact at each stage of the process.

Reasons why strategic plans fail
There are many reasons why strategic plans fail, especially:

• Failure to execute by overcoming the four key organizational hurdles[90]
  o Cognitive hurdle
  o Motivational hurdle
  o Resource hurdle
  o Political hurdle
• Failure to understand the customer
  o Why do they buy
  o Is there a real need for the product
  o Inadequate or incorrect marketing research
• Inability to predict environmental reaction
  o What will competitors do
     Fighting brands
     Price wars
  o Will government intervene
• Over-estimation of resource competence
  o Can the staff, equipment, and processes handle the new strategy
  o Failure to develop new employee and management skills
• Failure to coordinate
  o Reporting and control relationships not adequate
  o Organizational structure not flexible enough
• Failure to obtain senior management commitment
  o Failure to get management involved right from the start
  o Failure to obtain sufficient company resources to accomplish task
• Failure to obtain employee commitment
  o New strategy not well explained to employees
  o No incentives given to workers to embrace the new strategy
• Under-estimation of time requirements
  o No critical path analysis done
• Failure to follow the plan
  o No follow-through after initial planning
  o No tracking of progress against plan
  o No consequences for the above
• Failure to manage change
  o Inadequate understanding of the internal resistance to change
  o Lack of vision on the relationships between processes, technology and organization
• Poor communications
  o Insufficient information sharing among stakeholders
  o Exclusion of stakeholders and delegates

Limitations of strategic management
Although a sense of direction is important, it can also stifle creativity, especially if it is rigidly enforced. In an uncertain and ambiguous world, fluidity can be more important than a finely tuned strategic compass. When a strategy becomes internalized into a corporate culture, it can lead to groupthink. It can also cause an organization to define itself too narrowly; an example of this is marketing myopia. Many theories of strategic management tend to undergo only brief periods of popularity. A summary of these theories thus inevitably exhibits survivorship bias (itself an area of research in strategic management). Many theories tend either to be too narrow in focus to build a complete corporate strategy on, or too general and abstract to be applicable to specific situations. Populism or faddishness can have an impact on a particular theory's life cycle and may see application in inappropriate circumstances. See business philosophies and popular management theories for a more critical view of management theories. In 2000, Gary Hamel coined the term strategic convergence to explain the limited scope of the strategies being used by rivals in greatly differing circumstances. He lamented that strategies converge more than they should, because the more successful ones are imitated by firms that do not understand that the strategic process involves designing a custom strategy for the specifics of each situation.[50] Ram Charan, aligning with a popular marketing tagline, believes that strategic planning must not dominate action. "Just do it!", while not quite what he meant, is a phrase that nevertheless comes to mind when combating analysis paralysis.

Module 4:
Security and Ethical Challenges

Business ethics
Business ethics (also known as corporate ethics) is a form of applied ethics or professional ethics that examines ethical principles and moral or ethical problems that arise in a business environment. It applies to all aspects of business conduct and is relevant to the conduct of individuals and business organizations as a whole. Applied ethics is a field of ethics that deals with ethical questions in many fields such as medical, technical, legal and environmental ethics. Business ethics can be both a normative and a descriptive discipline. As a corporate practice and a career specialization, the field is primarily normative. In academia descriptive approaches are also taken. The range and quantity of business ethical issues reflects the degree to which business is perceived to be at odds with non-economic social values. Historically, interest in business ethics accelerated dramatically during the 1980s and 1990s, both within major corporations and within academia. For example, today most major corporate websites lay emphasis on commitment to promoting non-economic social values under a variety of headings such as ethics codes

and social responsibility charters. In some cases, corporations have redefined their core values in the light of business ethical considerations, for example, BP's "beyond petroleum" environmental tilt.

Computer crime
Computer crime, or cybercrime, refers to any crime that involves a computer and a network.[1] The computer may have been used in the commission of a crime, or it may be the target. Netcrime refers, more precisely, to criminal exploitation of the Internet.[3] Issues surrounding this type of crime have become high-profile, particularly those surrounding hacking, copyright infringement, child pornography, and child grooming. There are also problems of privacy when confidential information is lost or intercepted, lawfully or otherwise.

Computer crime encompasses a broad range of potentially illegal activities. Generally, however, it may be divided into two categories: (1) crimes that target computer networks or devices directly; and (2) crimes facilitated by computer networks or devices, the primary target of which is independent of the computer network or device. Examples of crimes that primarily target computer networks or devices include:
• Computer viruses
• Denial-of-service attacks
• Malware (malicious code)

Examples of crimes that merely use computer networks or devices would include:
• Cyberstalking
• Fraud and identity theft
• Information warfare
• Phishing scams

A computer can be a source of evidence. Even though the computer is not directly used for criminal purposes, it is an excellent device for record keeping, particularly given the power to encrypt the data. If this evidence can be obtained and decrypted, it can be of great value to criminal investigators.

Spam, or the unsolicited sending of bulk email for commercial purposes, is unlawful to varying degrees. As applied to email, specific anti-spam laws are relatively new; however, limits on unsolicited electronic communications have existed in some forms for some time.[5]

Computer fraud is any dishonest misrepresentation of fact intended to induce another to do or refrain from doing something, thereby causing loss.[citation needed] In this context, the fraud will result in obtaining a benefit by:


• Altering computer input in an unauthorized way. This requires little technical expertise and is not an uncommon form of theft by employees altering the data before entry or entering false data, or by entering unauthorized instructions or using unauthorized processes;
• Altering, destroying, suppressing, or stealing output, usually to conceal unauthorized transactions: this is difficult to detect;
• Altering or deleting stored data;
• Altering or misusing existing system tools or software packages, or altering or writing code for fraudulent purposes.

Other forms of fraud may be facilitated using computer systems, including bank fraud, identity theft, extortion, and theft of classified information. A variety of Internet scams target consumers directly.

Obscene or offensive content
The content of websites and other electronic communications may be distasteful, obscene or offensive for a variety of reasons. In some instances these communications may be illegal. Many jurisdictions place limits on certain speech and ban racist, blasphemous, politically subversive, libelous or slanderous, seditious, or inflammatory material that tends to incite hate crimes. The extent to which these communications are unlawful varies greatly between countries, and even within nations. It is a sensitive area in which the courts can become involved in arbitrating between groups with strong beliefs. One area of Internet pornography that has been the target of the strongest efforts at curtailment is child pornography.

Whereas content may be offensive in a non-specific way, harassment directs obscenities and derogatory comments at specific individuals focusing for example on gender, race, religion, nationality, sexual orientation. This often occurs in chat rooms, through newsgroups, and by sending hate e-mail to interested parties (see cyber bullying, cyber stalking, harassment by computer, hate crime, Online predator, and stalking). Any comment that may be found derogatory or offensive is considered harassment.

Drug trafficking
Drug traffickers are increasingly taking advantage of the Internet to sell their illegal substances through encrypted e-mail and other Internet Technology.[citation needed] Some drug traffickers arrange deals at internet cafes, use courier Web sites to track illegal packages of pills, and swap recipes for amphetamines in restricted-access chat rooms. The rise in Internet drug trades could also be attributed to the lack of face-to-face communication. These virtual exchanges allow more intimidated individuals to more comfortably purchase illegal drugs. The sketchy effects that are often associated with drug trades are severely minimized and the filtering process that comes with physical interaction fades away.

Government officials and Information Technology security specialists have documented a significant increase in Internet problems and server scans since early 2001. But there is a growing concern among federal officials [who?] that

such intrusions are part of an organized effort by cyberterrorists, foreign intelligence services, or other groups to map potential security holes in critical systems. A cyberterrorist is someone who intimidates or coerces a government or organization to advance his or her political or social objectives by launching computer-based attacks against computers, networks, and the information stored on them. Cyberterrorism in general can be defined as an act of terrorism committed through the use of cyberspace or computer resources (Parker 1983). As such, simple propaganda on the Internet claiming that there will be bomb attacks during the holidays can be considered cyberterrorism. There are also hacking activities directed towards individuals and families, organized by groups within networks, that tend to cause fear among people, demonstrate power, collect information for ruining people's lives, or facilitate robbery and blackmail. Cyberextortion is a form of cyberterrorism in which a website, e-mail server, or computer system is subjected to repeated denial-of-service or other attacks by malicious hackers, who demand money in return for promising to stop the attacks. According to the Federal Bureau of Investigation, cyberextortionists are increasingly attacking corporate websites and networks, crippling their ability to operate and demanding payments to restore their service. More than 20 cases are reported each month to the FBI, and many go unreported in order to keep the victim's name out of the public domain. Perpetrators typically use a distributed denial-of-service attack.

Cyber warfare
The U.S. Department of Defense (DoD) notes that cyberspace has emerged as a national-level concern through several recent events of geo-strategic significance. Among these is the attack on Estonia's infrastructure in 2007, allegedly by Russian hackers. In August 2008, Russia again allegedly conducted cyber attacks, this time in a coordinated and synchronized kinetic and non-kinetic campaign against the country of Georgia. Fearing that such attacks may become the norm in future warfare among nation-states, warfighting military commanders are expected increasingly to adopt the concept of cyberspace operations in the future.

Cyber Crimes And Solutions
We are currently living in the cyber age, where the Internet and computers have major impacts on our way of living, social life and the way we conduct business. The usage of information technology has posed great security challenges and ethical questions in front of us. Just as everything has positives and negatives, usage of information technology is beneficial as well as insecure. With the growth of the internet, network security has become a major concern. Cyber crimes have emerged rapidly in the last few years and have major consequences. Cyber criminals are doing everything from stealing money, hacking into others’ computers, stealing intellectual property, and spreading viruses and worms to damage computers connected to the internet, to committing frauds. Stopping cyber crime is a major concern today. Cyber criminals make use of the vulnerabilities in computer software and networks to their advantage.

Hacking or Cracking is a major cyber crime committed today. A hacker makes use of the weaknesses and loopholes in operating systems to destroy data and steal important information from a victim’s computer. Cracking is normally done through the use of a backdoor program installed on your machine. Many crackers also try to gain access to resources through the use of password-cracking software. Hackers can also monitor what you do on your computer and can also import files onto your computer. A hacker could install several programs onto your system

without your knowledge. Such programs could also be used to steal personal information such as passwords and credit card information. A company’s important data can also be hacked to obtain secret information about its future plans.

Cyber-theft is the use of computers and communication systems to steal information in electronic format. Hackers crack into the systems of banks and transfer money into their own bank accounts. This is a major concern, as large amounts of money can be stolen and illegally transferred. Many newsletters on the internet provide investors with free advice recommending stocks in which to invest. Sometimes these recommendations are totally bogus and cause loss to the investors. Credit card fraud is also very common. Most companies and banks don’t reveal that they have been victims of cyber-theft for fear of losing customers and shareholders. Cyber-theft is the most common and the most reported of all cyber-crimes. It is a popular cyber-crime because it can quickly bring an experienced cyber-criminal large amounts of cash for very little effort. Furthermore, there is little chance a professional cyber-criminal will be apprehended by law enforcement.

Viruses and worms:
Viruses and worms are a major threat to normal users and companies. Viruses are computer programs that are designed to damage computers. A virus is so named because it spreads from one computer to another like a biological virus. A virus must be attached to some other program or document through which it enters the computer. A worm usually exploits loopholes in software or the operating system. A Trojan horse is deceptive: it appears to do one thing but does something else. The system may accept it as one thing; upon execution, it may release a virus, worm or logic bomb. A logic bomb is an attack triggered by an event, like the computer clock reaching a certain date. The Chernobyl and Melissa viruses are well-known examples. Experts estimate that the Mydoom worm infected approximately a quarter-million computers in a single day in January 2004. Back in March 1999, the Melissa virus was so powerful that it forced Microsoft and a number of other very large companies to completely turn off their e-mail systems until the virus could be contained. Solutions: An important question is how these crimes can be prevented. A number of techniques and solutions have been presented, but the problems still exist and are increasing day by day.
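The event-triggered behaviour of a logic bomb described above can be sketched harmlessly. The trigger date below is hypothetical, and the "payload" is just a print statement, not anything destructive:

```python
import datetime

TRIGGER_DATE = datetime.date(2030, 1, 1)  # hypothetical trigger date

def check_trigger(today: datetime.date) -> bool:
    """A logic bomb stays dormant until an event occurs -- here, the system
    clock reaching a certain date; only then does its payload run."""
    return today >= TRIGGER_DATE

if check_trigger(datetime.date.today()):
    print("payload would fire here")          # in real malware: destructive code
else:
    print("dormant: nothing visible happens")  # why logic bombs evade notice
```

The dormancy is exactly what makes logic bombs hard to detect: until the triggering event, the code does nothing observable.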

Antivirus And Anti spyware Software:
Antivirus software consists of computer programs that attempt to identify, thwart and eliminate computer viruses and other malicious software. Anti-spyware software is used to prevent backdoor programs, Trojans and other spyware from being installed on the computer. Firewalls: A firewall protects a computer network from unauthorized access. Network firewalls may be hardware devices, software programs, or a combination of the two. A network firewall typically guards an internal computer network against malicious access from outside the network.
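The guarding behaviour described above can be sketched as a rule table checked in order, with anything unmatched denied by default. The addresses, ports and rules below are illustrative, not from any real firewall:

```python
import ipaddress

# First matching rule wins; anything unmatched is denied by default.
RULES = [
    {"src": "10.0.0.0/8", "port": 22, "action": "allow"},  # SSH from inside only
    {"src": "0.0.0.0/0",  "port": 80, "action": "allow"},  # public web traffic
    {"src": "0.0.0.0/0",  "port": 23, "action": "deny"},   # block telnet everywhere
]

def filter_packet(src_ip: str, dst_port: int) -> str:
    addr = ipaddress.ip_address(src_ip)
    for rule in RULES:
        if addr in ipaddress.ip_network(rule["src"]) and dst_port == rule["port"]:
            return rule["action"]
    return "deny"  # default-deny policy

print(filter_packet("10.1.2.3", 22))     # allow: internal SSH
print(filter_packet("203.0.113.5", 22))  # deny: outside SSH falls to the default
```

The default-deny fallthrough is the key design choice: traffic the administrator never anticipated is blocked rather than admitted.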

Cryptography: Cryptography is the science of encrypting and decrypting information. Encryption is like sending postal mail to another party with a lock code on the envelope that is known only to the sender and the recipient. A number of cryptographic methods have been developed, and some of them have still not been cracked. Cyber Ethics and Laws: Cyber ethics and cyber laws are also being formulated to stop cyber crimes. It is the responsibility of every individual to follow cyber ethics and cyber laws so that cyber crime declines. Security software such as antivirus and anti-spyware programs should be installed on all computers in order to remain secure from cyber crimes. Internet Service Providers should also provide a high level of security at their servers in order to keep their clients secure from all types of viruses and malicious programs.
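The lock-code analogy above can be illustrated with a toy symmetric cipher. This XOR sketch is purely educational: applying the same shared key a second time recovers the message. Real systems use vetted algorithms such as AES, never a scheme like this.

```python
import itertools

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """XOR each byte of the message with the (repeating) key; the same
    operation both encrypts and decrypts."""
    return bytes(b ^ k for b, k in zip(data, itertools.cycle(key)))

key = b"shared-secret"                       # known only to sender and recipient
ciphertext = xor_cipher(b"meet at noon", key)
print(ciphertext != b"meet at noon")         # True: the message is scrambled
print(xor_cipher(ciphertext, key))           # applying the key again recovers it
```

Because encryption and decryption are the same operation here, whoever holds the key holds the "lock code" in both directions, which is exactly the shared-secret property the text describes.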

Hacking (English verb to hack, singular noun a hack) refers to the re-configuring or re-programming of a system to function in ways not facilitated by the owner, administrator, or designer. The terms have several related meanings in the technology and computer science fields, wherein a "hack" may refer to a clever or quick fix to a computer program problem, or to what may be perceived to be a clumsy or inelegant (but usually relatively quick) solution to a problem, such as a "kludge". The terms "hack" and "hacking" are also used to refer to a modification of a program or device to give the user access to features that were otherwise unavailable, such as by circuit bending. It is from this usage that the term "hacking" is often used to refer to more nefarious criminal uses such as identity theft, credit card fraud or other actions categorized as computer crime. Even attempting to define the term "hacker" is difficult. Perhaps the premier WWW resource for introducing individuals to hacking is The New Hacker's Dictionary (http://www.logophilia.com/jargon/jargon_toc.html), a resource which encompasses everything from hacker slang, jargon, hacker folklore, writing style and speech to general appearance, dress, education and personality characteristics. According to The New Hacker's Dictionary, a hacker can be defined as:
1. A person who enjoys exploring the details of programmable systems and how to stretch their capabilities, as opposed to most users, who prefer to learn only the minimum necessary.
2. One who programs enthusiastically (even obsessively) or who enjoys programming rather than just theorizing about programming.
3. A person capable of appreciating hack value.
4. A person who is good at programming quickly.
5. An expert at a particular program, or one who frequently does work using it or on it.
6. An expert or enthusiast of any kind. One might be an astronomy hacker, for example.
7. One who enjoys the intellectual challenge of creatively overcoming or circumventing limitations.
8. [deprecated] A malicious meddler who tries to discover sensitive information by poking around. Hence 'password hacker', 'network hacker'. The correct term for this sense is cracker.

Unauthorized Use of Work Computers: It Might Get You Fired, but It’s Not a Crime

You may remember a story from a couple years ago about Lori Drew – she was a woman who, for whatever reason, decided to torment one of her teenaged daughter’s “enemies” (really, as a teenager, who in the world has any real enemies?), who was also a teenage girl. She did this by creating a fake MySpace profile, and posing as a boy who was about the girl’s age. Over a period of time, the girl fell in love with this fictional boy. When the time was right, Lori Drew revealed her trick, crushing the girl’s spirits. The girl committed suicide some time later.

As one might expect, the public was outraged when this happened, and demanded that the legal system “Do Something™”. State authorities came to the conclusion that there was no criminal statute under which Drew could be charged. A U.S. attorney, however, decided to charge her under federal law. Now, there is no federal cyber-bullying statute. In fact, there is really no federal law that directly deals with Ms. Drew’s conduct. So, the U.S. attorney charged her with violation of the Computer Fraud and Abuse Act, which makes any unauthorized access to a computer system a criminal offense. The prosecutor argued that, by accessing MySpace using a fake profile (a violation of the MySpace terms of service), Drew had gained unauthorized access to that computer system. She was convicted of misdemeanor charges, but her conviction was overturned by the judge. An interesting case on this same matter has just come out of an appeals court in New Jersey. A police officer, who apparently disliked another officer, accessed video files from a central computer that stored recordings from cameras in squad cars, and viewed a recording which allegedly showed the other officer breaching protocol during a traffic stop (allowing a drunk driver to urinate in the bushes before being taken in). He then apparently showed the video to officers below the minimum rank allowed to view it, for the purpose of harming and embarrassing the other officer. He was charged under New Jersey’s equivalent of the Computer Fraud and Abuse Act, which criminalizes unauthorized access to computer systems, and the court has dismissed the indictment. Essentially, it reasoned that when someone who has been granted a password in good standing accesses information that is meant to be available to that user’s account, no crime has been committed, absent something more. Now, the use that the user makes of the information might make it a crime, obviously.
For example, he could disclose confidential material. However, in this case, the officer’s use was not sufficient to elevate it to criminal conduct. In an increasingly wired world, this seems like a reasonable position to take. Every time we access the Internet, we’re constantly accessing a huge number of different computer systems, in the form of websites. And most major websites have terms of use. How many people actually read these walls of text? Not many, I’d wager. While the conduct of Lori Drew was reprehensible, and this police officer doesn’t exactly come away smelling like roses, the law should not be twisted to criminalize conduct we don’t like, when no criminal law explicitly dealing with the subject exists. Now, we’re starting to see laws designed to criminalize conduct like that of Ms. Drew, which I’ve previously discussed at length. I’ve so far concluded that the one legislative response I’ve seen has been pretty asinine, and probably unconstitutional. As for the conduct of the police officer, it seems like something that could be easily dealt with internally, through retraining or discipline, such as suspension or termination of employment.

I’m beginning to think that our society is becoming over-criminalized. That is to say, we’re trying to punish people criminally, even though they likely committed no crime (no matter how horrible the conduct was). There are some things, I believe, that don’t need to be crimes, and can be better dealt with by the victims seeking civil remedies, in the case of Lori Drew, or internal discipline, as in the case of the police officer. We have to accept that, in a free society, a lot of things that we don’t like are going to be legal. We’re free to morally condemn those things, but knee-jerk criminalization is not the proper response.

What is software piracy?

Unlike other things you purchase, the software applications and fonts you buy don't belong to you. Instead, you become a licensed user — you purchase the right to use the software on a single computer, but you can't put copies on other machines or pass that software along to colleagues. Software piracy is the illegal distribution and/or reproduction of Adobe software applications or fonts for business or personal use. Whether software piracy is deliberate or not, it is still illegal and punishable by law. Piracy comes in many forms. Here are some common piracy methods:
• Licensed user duplication for unlicensed users
• Illegal Internet distribution
• Illegal use of Adobe® Acrobat® over a network
• Distributing specialized education versions to unauthorized markets
• Distribution of inauthentic Adobe software or fonts

Licensed user duplication for unlicensed users
When someone copies software without buying the appropriate number of licenses, it is copyright infringement. Each of these activities is a form of software piracy:
• An individual copying software for a friend
• A business underreporting the number of computers using the software
• Including copies of Adobe fonts when sending files

Learn more about the legal use of Adobe fonts.

Illegal Internet distribution
Be cautious when ordering software over the Internet. Many resellers with Internet storefronts or those who sell from auction sites knowingly distribute copies of software illegally. Estimates reveal that as much as 90% of software sold over Internet auction sites is either bootlegged or gray market. So, if the pricing seems too good to be true, it probably is. Some Web sites promise prospects free software downloads. These sites are distributing software illegally. There is also no guarantee that the software is secure or will work properly when installed. The only time it's legal to download Adobe software free of charge is when special tryout promotions are offered. Typically, you'll find these only on Adobe.com. These offers enable the use of the software only for a limited time. To buy knowing that you'll receive the protection and full functionality of legal software, we recommend that you purchase either directly from Adobe or from an Adobe Authorized Reseller. For more Internet software piracy information, download this study from the Software & Information Industry Association.

Illegal use of Adobe Acrobat over a network
Adobe Acrobat is a powerful tool to help employees communicate efficiently and securely enterprise-wide. Given the need to share Acrobat files with a wide network of employees and partners, there are some specific licensing requirements to be aware of. Please check out the Acrobat licensing FAQ for more information.

Distributing specialized education versions to unauthorized markets
Adobe creates special versions of its software to meet the needs of the education market. These versions are clearly labeled to avoid confusion with other market segments. Duplication of these specialized versions for distribution to other markets is prohibited. To find out who qualifies for educational pricing, visit Adobe in Education.

Distribution of inauthentic Adobe software or fonts
According to the Software & Information Industry Association, as much as 90% of software sold over Internet auction sites is bootlegged or gray market, so be cautious when ordering software over the Internet. Some resellers attempt to alter Adobe software or fonts and unlawfully sell it under a different product name, resulting in quality and file transfer problems. Buy only true Adobe products.

Intellectual property (IP):
Intellectual property (IP) is a term referring to a number of distinct types of creations of the mind for which a set of exclusive rights are recognized—and the corresponding fields of law.[1] While these rights are not actually property rights, the term "Property" is used because they resemble property rights in many ways. Under intellectual property law, owners are granted certain exclusive rights to a variety of intangible assets, such as musical, literary, and artistic works; discoveries and inventions; and words, phrases, symbols, and designs. Common types of intellectual property include copyrights, trademarks, patents, industrial design rights and trade secrets in some jurisdictions.


Financial incentive
These exclusive rights allow owners of intellectual property to benefit from the property they have created, providing a financial incentive for the creation of and investment in intellectual property and, in the case of patents, a way to pay associated research and development costs.[11] Some commentators, such as David Levine and Michele Boldrin, dispute this justification.[12]

Economic growth
The existence of IP laws is credited with significant contributions toward economic growth.[citation needed] Economists estimate that two-thirds of the value of large businesses in the U.S. can be traced to intangible assets.[citation needed] "IP-intensive industries" are estimated to generate 72 percent more value added (price minus material cost) per employee than "non-IP-intensive industries".[13][dubious – discuss] A joint research project of the WIPO and the United Nations University measuring the impact of IP systems on six Asian countries found "a positive correlation between the strengthening of the IP system and subsequent economic growth."[14] Other models, such as the Nash equilibrium, would not expect that this correlation necessarily means causation: the Nash equilibrium model predicts that patent holders will prefer to operate in countries with stronger IP laws.[neutrality is disputed] In some cases, as was shown for Taiwan[15] after the 1986 reform, the economic growth that comes with a stronger IP system might be due to an increase in stock capital from direct foreign investment.

The term itself
Richard Stallman argues that, although the term intellectual property is in wide use, it should be rejected altogether, because it "systematically distorts and confuses these issues, and its use was and is promoted by those who gain from this confusion." He claims that the term "operates as a catch-all to lump together disparate laws [which] originated separately, evolved differently, cover different activities, have different rules, and raise different public policy issues" and that it creates a "bias" by confusing these monopolies with ownership of limited physical things, likening them to "property rights".[16] Stallman advocates referring to copyrights, patents and trademarks in the singular and warns against abstracting disparate laws into a collective term.

Some critics of intellectual property, such as those in the free culture movement, point to intellectual monopolies as harming health, preventing progress, and benefiting concentrated interests to the detriment of the masses,[17][18] and argue that the public interest is harmed by ever-expanding monopolies in the form of copyright extensions, software patents and business method patents. There is also criticism that strict intellectual property rights can inhibit the flow of innovations to poor nations. Developing countries have benefited from the spread of developed-country technologies such as the internet, mobile phones, vaccines, and high-yielding grains. Many intellectual property rights, such as patent laws, arguably go too far in protecting those who produce innovations at the expense of those who use them. The Commitment to Development Index measures donor government policies and ranks them on the "friendliness" of their intellectual property rights to the developing world. Some libertarian critics of intellectual property have argued that allowing property rights in ideas and information creates artificial scarcity and infringes on the right to own tangible property. Stephan Kinsella uses the following scenario to argue this point:

[I]magine the time when men lived in caves. One bright guy—let's call him Galt-Magnon—decides to build a log cabin on an open field, near his crops. To be sure, this is a good idea, and others notice it. They naturally imitate Galt-Magnon, and they start building their own cabins. But the first man to invent a house, according to IP advocates, would have a right to prevent others from building houses on their own land, with their own logs, or to charge them a fee if they do build houses. It is plain that the innovator in these examples becomes a partial owner of the tangible property (e.g., land and logs) of others, due not to first occupation and use of that property (for it is already owned), but due to his coming up with an idea. Clearly, this rule flies in the face of the first-user homesteading rule, arbitrarily and groundlessly overriding the very homesteading rule that is at the foundation of all property rights.

Other criticism of intellectual property law concerns the tendency of intellectual property protections to expand, both in duration and in scope. The trend has been toward longer copyright protection (raising fears that it may some day be eternal). In addition, the developers and controllers of items of intellectual property have sought to bring more items under protection. Patents have been granted for living organisms, and colors have been trademarked. Because they are systems of government-granted monopolies, copyrights, patents, and trademarks are called intellectual monopoly privileges (IMP), a topic on which several academics, including Birgitte Andersen and Thomas Alured Faunce, have written.

Internet privacy
Internet privacy is the desire or mandate of personal privacy concerning transactions or transmission of data via the Internet. It involves the exercise of control over the type and amount of information a person reveals about themselves on the Internet and who may access such information. The term is often understood to mean universal Internet privacy, i.e. every user of the Internet possessing Internet privacy. Internet privacy forms a subset of computer privacy. A number of experts within the field of Internet security and privacy believe that privacy doesn't exist: "Privacy is dead – get over it," according to Steve Rambam, a private investigator specializing in Internet privacy cases. In fact, it has been suggested that the "appeal of online services is to broadcast personal information on purpose." On the other hand, in his essay The Value of Privacy, security expert Bruce Schneier says, "Privacy protects us from abuses by those in power, even if we're doing nothing wrong at the time of surveillance."

Levels of privacy
People with only a casual concern for Internet privacy need not achieve total anonymity. Internet users may achieve an adequate level of privacy through controlled disclosure of personal information. The revelation of IP addresses, non-personally-identifiable profiling, and similar information might become acceptable trade-offs for the convenience that users could otherwise lose using the workarounds needed to suppress such details rigorously. On the other hand, some people desire much stronger privacy. In that case, they may try to achieve Internet anonymity to ensure privacy: use of the Internet without giving any third parties the ability to link the Internet activities to personally-identifiable information (P.I.I.) of the Internet user. To keep their information private, people need to be careful about what they submit and look at online. When forms are filled out and merchandise is bought, the information is tracked, and because it was not kept private, companies can then send spam and advertising for similar products.

Related state laws on privacy of personal information: Nevada and Minnesota require Internet Service Providers to keep information regarding their customers private, unless a customer approves of the information being given out. According to the National Conference of State Legislatures, the following states have certain laws on the personal privacy of their citizens.

Minnesota Statutes §§ 325M.01 to .09 - Prohibits Internet service providers from disclosing personally identifiable information, including a consumer's physical or electronic address or telephone number; Internet or online sites visited; or any of the contents of a consumer's data storage devices. Provides for certain circumstances under which information must be disclosed, such as to a grand jury; to a state or federal law enforcement officer acting as authorized by law; or pursuant to a court order or court action. Provides for civil damages of $500 or actual damages and attorney fees for violation of the law. Nevada Revised Statutes § 205.498 - In addition, California and Utah laws, although not specifically targeted to online businesses, require all nonfinancial businesses to disclose to customers, in writing or by electronic mail, the types of personal information the business shares with or sells to a third party for direct marketing purposes or for compensation. Under the California law, businesses may post a privacy statement that gives customers the opportunity to choose not to share information at no cost. There are also certain laws for employees and businesses and privacy policies for websites.[5] California, Connecticut, Nebraska and Pennsylvania all have specific privacy policies regarding websites; these include: "California (Calif. Bus. & Prof. Code §§ 22575-22578) California's Online Privacy Protection Act requires an operator, defined as a person or entity that collects personally identifiable information from California residents through an Internet Web site or online service for commercial purposes, to post conspicuously its privacy policy on its Web site or online service and to comply with that policy.
The bill, among other things, would require that the privacy policy identify the categories of personally identifiable information that the operator collects about individual consumers who use or visit its Web site or online service, and the third parties with whom the operator may share the information. Connecticut (Conn. Gen. Stat. § 42-471) Requires any person who collects Social Security numbers in the course of business to create a privacy protection policy. The policy must be "publicly displayed" by posting on a web page and the policy must (1) protect the confidentiality of Social Security numbers, (2) prohibit unlawful disclosure of Social Security numbers, and (3) limit access to Social Security numbers. Nebraska (Nebraska Stat. § 87-302(14)) Nebraska prohibits knowingly making a false or misleading statement in a privacy policy, published on the Internet or otherwise distributed or published, regarding the use of personal information submitted by members of the public. Pennsylvania (18 Pa. C.S.A. § 4107(a)(10)) Pennsylvania includes false and misleading statements in privacy policies published on Web sites or otherwise distributed in its deceptive or fraudulent business practices statute." There are also at least 16 states that require government websites to create privacy policies and procedures or to include machine-readable privacy policies in their websites. These states include Arizona, Arkansas, California, Colorado, Delaware, Iowa, Illinois, Maine, Maryland, Michigan, Minnesota, Montana, New York, South Carolina, Texas, Utah, and Virginia.

Risks to internet privacy
In today's technological world, millions of individuals are subject to privacy threats. Companies are hired not only to watch what users visit online, but to mine that information and send advertising based on browsing history. People set up accounts for Facebook and enter bank and credit card information on various websites.


Those concerned about Internet privacy often cite a number of privacy risks — events that can compromise privacy — which may be encountered through Internet use.[7] These methods of compromise can range from the gathering of statistics on users to more malicious acts such as the spreading of spyware and the exploitation of various software vulnerabilities. Privacy measures are provided on several social networking sites to try to provide their users with protection for their personal information. On Facebook, for example, privacy settings are available for all registered users. The settings available on Facebook include the ability to block certain individuals from seeing your profile, the ability to choose your "friends," and the ability to limit who has access to your pictures and videos. Privacy settings are also available on other social networking sites such as E-harmony and MySpace. It is the user's prerogative to apply such settings when providing personal information on the internet. In late 2007 Facebook launched the Beacon program, through which user rental records were released to the public for friends to see. Many people were enraged by this breach of privacy, and the Lane v. Facebook, Inc. case ensued.

HTTP cookies
An HTTP cookie is data stored on a user's computer that assists in automated access to websites or web features, or other state information required in complex web sites. It may also be used for user-tracking by storing special usage-history data in a cookie. Cookies are a common concern in the field of privacy; as a result, some types of cookies are classified as tracking cookies. Although website developers most commonly use cookies for legitimate technical purposes, cases of abuse occur. In 2009, two researchers noted that social networking profiles could be connected to cookies, allowing a social networking profile to be connected to browsing habits.[8] Systems do not generally make the user explicitly aware of the storing of a cookie. (Although some users object to that, it does not properly relate to Internet privacy; it does, however, have implications for computer privacy, and specifically for computer forensics.) The original developers of cookies intended that only the website that originally distributed a cookie to a user could retrieve it, so that the website would receive only data it already possessed. In practice, however, programmers can circumvent this restriction. Possible consequences include:
• the placing of a personally-identifiable tag in a browser to facilitate web profiling (see below), or
• use of cross-site scripting or other techniques to steal information from a user's cookies.
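The round trip that makes cookie tracking possible can be sketched with Python's standard `http.cookies` module. The names and values here (`session_id`, `last_visited`) are purely illustrative, not taken from any real site:

```python
from http.cookies import SimpleCookie

# The server builds Set-Cookie headers carrying a session identifier
# and, in the tracking case, a piece of usage-history data.
cookie = SimpleCookie()
cookie["session_id"] = "abc123"
cookie["session_id"]["path"] = "/"
cookie["last_visited"] = "page42"  # usage history a tracker might record

# The raw Set-Cookie header lines that would be sent to the browser.
print(cookie.output())

# On the next request the browser echoes the values back in a Cookie
# header, which the server parses to recognise the returning visitor.
returned = SimpleCookie()
returned.load("session_id=abc123; last_visited=page42")
print(returned["last_visited"].value)
```

Because the identifying tag travels in plain headers like these, any third party able to read or inject them (via cross-site scripting, for instance) can steal or plant it.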

Some users choose to disable cookies in their web browsers – as of 2000, a Pew survey estimated the proportion of such users at 4%.[9] Such an action eliminates the potential privacy risks, but may severely limit or prevent the functionality of many websites. All significant web browsers have this disabling ability built in, with no external program required. As an alternative, users may frequently delete any stored cookies. Some browsers (such as Mozilla Firefox and Opera) offer the option to clear cookies automatically whenever the user closes the browser. A third option involves allowing cookies in general, but preventing their abuse. There are also a host of wrapper applications that will redirect cookies and cache data to some other location.

The process of profiling (also known as "tracking") assembles and analyzes several events, each attributable to a single originating entity, in order to gain information (especially patterns of activity) relating to that entity. Some organizations engage in the profiling of people's web browsing, collecting the URLs of sites visited. The resulting profiles can potentially link with information that personally identifies the individual who did the browsing. Some web-oriented marketing-research organizations may use this practice legitimately, for example in order to construct profiles of 'typical Internet users'. Such profiles, which describe average trends of large groups of Internet users rather than of actual individuals, can then prove useful for market analysis. Although the aggregate data does not constitute a privacy violation, some people believe that the initial profiling does. Profiling becomes a more contentious privacy issue when data-matching associates the profile of an individual with personally-identifiable information of the individual. Governments and organizations may set up honeypot websites – featuring controversial topics – with the purpose of attracting and tracking unwary people. This constitutes a potential danger for individuals.
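The aggregation step at the heart of profiling is simple to sketch. In this toy example the tracking IDs and URLs are invented; a real tracker would feed the same loop from server logs or cookie data:

```python
from collections import defaultdict

# Hypothetical click-stream: (tracking_cookie_id, url_visited) events.
events = [
    ("u1", "news.example/politics"),
    ("u1", "shop.example/shoes"),
    ("u1", "news.example/politics"),
    ("u2", "sports.example/scores"),
]

# Profiling: attribute every event to its originating entity and
# aggregate, yielding a per-user pattern of activity.
profiles = defaultdict(lambda: defaultdict(int))
for user, url in events:
    profiles[user][url] += 1

# u1's dominant interest emerges from the aggregated counts.
top = max(profiles["u1"], key=profiles["u1"].get)
print(top)  # news.example/politics
```

The privacy question in the text is exactly the step this sketch leaves out: joining the key `"u1"` to a real person's identity turns an anonymous activity pattern into a personal dossier.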

Flash cookies
Flash cookies, also known as Local Shared Objects, work the same way as normal cookies and are used by the Adobe Flash Player to store information on the user's computer. They pose a similar privacy risk to normal cookies, but are not as easily blocked, meaning that the option in most browsers not to accept cookies does not affect Flash cookies. One way to view and control them is with browser extensions or add-ons.

An Evercookie is a JavaScript-based application which produces cookies in a web browser that actively "resist" deletion by redundantly copying themselves in different forms on the user's machine (e.g. Flash Local Shared Objects, various HTML5 storage mechanisms, window.name caching, etc.), and by resurrecting copies that are missing or expired.
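The resurrection logic can be modeled in a few lines. This is a toy sketch, not Evercookie's actual JavaScript: the store names merely mimic the mechanisms listed above, and the identifier value is invented:

```python
# Toy model of Evercookie's redundancy: the identifier is written to
# several storage mechanisms; if any copy is deleted, it is restored
# from whichever copies survive.
stores = {"cookie": {}, "flash_lso": {}, "html5_local": {}}

def write_all(key, value):
    """Copy the value into every available storage mechanism."""
    for store in stores.values():
        store[key] = value

def resurrect(key):
    """Find a surviving copy and re-copy it into every store."""
    survivors = [s[key] for s in stores.values() if key in s]
    if survivors:
        write_all(key, survivors[0])
        return survivors[0]
    return None

write_all("uid", "tracker-42")
del stores["cookie"]["uid"]       # the user clears browser cookies
resurrect("uid")                  # the deleted copy is restored
print(stores["cookie"]["uid"])    # tracker-42
```

This is why clearing only one storage mechanism (say, browser cookies) is ineffective: as long as any one copy survives, every cleared copy comes back.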

Photographs on the internet

Today many people have digital cameras and post their photos online. The people depicted in these photos might not want them to appear on the Internet. Some organizations attempt to respond to this privacy-related concern. For example, the 2005 Wikimania conference required that photographers have the prior permission of the people in their pictures, and some attendees wore a 'no photos' tag to indicate they would prefer not to have their photo taken. The Harvard Law Review published a short piece called "In the Face of Danger: Facial Recognition and Privacy Law," much of it explaining how "privacy law, in its current form, is of no help to those unwillingly tagged." Any individual can be unwillingly tagged in a photo and displayed in a manner that might violate them personally in some way, and by the time Facebook gets around to taking down the photo, many people will have already had the chance to view, share, or distribute it. Furthermore, traditional tort law does not protect people who are captured by a photograph in public, because this is not counted as an invasion of privacy. The extensive Facebook privacy policy covers these concerns and much more. For example, the policy states that Facebook reserves the right to disclose member information or share photos with companies, lawyers, courts, government entities, etc. if it feels this absolutely necessary. The policy also informs users that profile pictures are mainly to help friends connect to each other. However, these, as well as other pictures, can allow other people to invade a person's privacy by finding out information that can be used to track and locate a certain individual. In an article featured in ABC News, it was stated that two teams of scientists found that Hollywood stars could be giving up information about their private whereabouts very easily through pictures uploaded to the Internet. Moreover, it was found that pictures taken with iPhones automatically attach the latitude and longitude of the location where the picture was taken through metadata, unless this function is manually disabled.

Search engines
Search engines have the ability to track a user's searches. Personal information can be revealed through searches, including the search terms used, the time of the search, and more. Search engines have claimed a necessity to retain such information in order to provide better services, protect against security threats, and protect against fraud.

Data logging
Many programs and operating systems are set up to perform data logging of usage. This may include recording times when the computer is in use, or which web sites are visited. If a third party has sufficient access to the computer, legitimately or not, the user's privacy may be compromised. This could be avoided by disabling logging, or by clearing logs regularly.
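A minimal illustration of such usage logging, using Python's standard `logging` module (the file name and the "sites visited" events are invented for the example): events written to a log file remain readable to anyone with access to that file until the log is cleared.

```python
import logging
import os
import tempfile

# A program records which sites are visited and when; anyone with
# access to the log file can later reconstruct that activity.
log_path = os.path.join(tempfile.gettempdir(), "usage.log")
logging.basicConfig(filename=log_path, level=logging.INFO,
                    format="%(asctime)s %(message)s", force=True)

for site in ["bank.example", "mail.example"]:
    logging.info("visited %s", site)

logging.shutdown()  # flush and close the log handlers

with open(log_path) as f:
    entries = f.read()
print("bank.example" in entries)  # the log exposes the browsing trail

# Clearing the log regularly (here, truncating it) limits the exposure.
open(log_path, "w").close()
```

The same reasoning applies to browser histories and OS event logs: disabling the logging, or truncating the files on a schedule, is the mitigation the paragraph describes.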

Privacy within social networking sites
Prior to the social networking site explosion over the past decade, there were early forms of social network technologies that included online multiplayer games, blog sites, news groups, mailing lists and dating services. These all created a backbone for the new modern sites, and even in these older versions privacy was an issue. In 1996, a young woman in New York City was on a first date with an online acquaintance and later sued for sexual harassment after they went back to her apartment and everything became too real. This is just an early example of many more issues to come regarding internet privacy.[14] Social networking sites have become very popular within the last five years. With the creation of Facebook and the continued popularity of MySpace, many people are giving out their personal information on the internet. These social networks keep track of all interactions used on their sites and save them for later use.[15] Most users are not aware that they can modify the privacy settings, and unless they modify them, their information is open to the public. On Facebook, privacy settings can be accessed via the drop-down menu under Account in the top right corner. There users can change who can view their profile and what information can be displayed on their profile.[16] In most cases profiles are open to either "all my network and friends" or "all of my friends." Also, information that shows on a user's profile, such as birthday, religious views, and relationship status, can be removed via the privacy settings.[17] If a user is under 13 years old they are not able to make a Facebook or a MySpace account; however, this is not regulated.[16] Social networking has redefined the role of Internet privacy. Since users are willingly disclosing personal information online, the role of privacy and security is somewhat blurry.
Sites such as Facebook, Myspace, and Twitter have grown popular by broadcasting status updates featuring personal information such as location. Facebook "Places," in particular, is a Facebook service which publicizes user location information to the networking community. Users are allowed to "check in" at various locations including retail stores, convenience stores, and restaurants. Also, users are able to create their own "place," disclosing personal information onto the Internet. This form of location tracking is automated and must be turned off manually. Various settings must be turned off and manipulated in order for the user to ensure privacy. According to epic.org, Facebook users are recommended to: (1) disable "Friends can check me in to Places," (2) customize "Places I Check In," (3) disable "People Here Now," and (4) uncheck "Places I've Visited."[18] Moreover, the Federal Trade Commission has received two complaints regarding Facebook's "unfair and deceptive" trade practices, which are used to target advertising sectors of the online community. "Places" tracks user location information and is used primarily for advertising purposes. Each location tracked allows third-party advertisers to customize advertisements to suit one's interests. Currently, the Federal Trade Commission, along with the Electronic Privacy Information Center, is shedding light on the issues of location data tracking on social networking sites.[18] Facebook recently updated its profile format, allowing people who are not "friends" of others to view personal information about other users, even when the profile is set to private. However, as of January 18, 2011, Facebook changed its decision to make home addresses and telephone numbers accessible to third-party members, but it is still possible for third-party members to have access to less exact personal information, like one's hometown and employment, if the user has entered the information into Facebook. EPIC Executive Director Marc Rotenberg said, "Facebook is trying to blur the line between public and private information. And the request for permission does not make clear to the user why the information is needed or how it will be used."[19] Similar to Rotenberg's claim that Facebook users are unclear about how or why their information has gone public, the Federal Trade Commission and Commerce Department have recently become involved. The Federal Trade Commission has released a report claiming that Internet companies and other industries will soon need to increase their protection for online users. Because online users often unknowingly opt in to making their information public, the FTC is urging Internet companies to make privacy notices simpler and easier for the public to understand, thereby increasing their option to opt out. Perhaps this new policy should also be implemented in the Facebook world.
The Commerce Department claims that Americans "have been ill-served by a patchwork of privacy laws that contain broad gaps".[20] Because of these broad gaps, Americans are more susceptible to identity theft and to having their online activity tracked by others. Twitter Case - In January 2011, the government obtained a court order to force the social networking site Twitter to reveal information about certain subscribers involved in the WikiLeaks cases. The outcome of this case is questionable because it deals with the user's First Amendment rights. Twitter moved to reverse the court order, and supported the idea that internet users should be notified and given an opportunity to defend their constitutional rights in court before their rights are compromised.[21] Facebook Friends Study - A study was conducted at Northeastern University by Alan Mislove and his colleagues at the Max Planck Institute for Software Systems, in which an algorithm was created to try to discover personal attributes of a Facebook user by looking at their friends list. They looked for information such as high school and college attended, major, hometown, graduation year and even which dorm a student may have lived in. The study revealed that only 5% of people thought to change their friends list to private. For other users, 58% displayed the university attended, 42% revealed employers, 35% revealed interests and 19% gave viewers public access to where they were located. Due to the correlation between Facebook friends and the universities they attend, it was easy to discover where a Facebook user was based from their list of friends. This fact has become very useful to advertisers targeting their audiences, but is also a big risk for the privacy of all those with Facebook accounts.[22] FBI prowling the networks - The FBI has dedicated undercover agents on Facebook, Twitter, MySpace, and LinkedIn.
They create fake IDs and sneak their way into a social network in order to find incriminating evidence. They look at everything: pictures, posts, and video clips can reveal all sorts of useful information. Friends can be a criminal's worst enemy, as investigators check an alibi by comparing stories tweeted by friends as to their location during the crime, their purpose in going places, etc. These covert operations are perfectly legal; the rules and guidelines on the privacy issue are internal to the Justice Department, and details aren't released to the public. Agents can impersonate a friend, a long-lost relative, even a spouse and child. This raises real issues regarding privacy. Although people who use Facebook, Twitter, and other social networking sites are aware that some level of privacy will always be compromised, no one would ever suspect that a friend invitation might be from a federal agent whose sole purpose in making the friend request was to snoop around. Furthermore, Facebook, Twitter, and MySpace keep personal information and past posts logged for up to one year, even for deleted profiles, and, with a warrant, can hand over very personal information. One example of investigators using Facebook to nab a criminal is the case of Maxi Sopo. Charged with bank fraud, and having escaped to Mexico, he was nowhere to be found until he started posting on Facebook. Although his profile was private, his list of friends was not, and through this vector he was eventually caught.[23]

Internet service providers
Internet users obtain Internet access through an Internet service provider (ISP). All data transmitted to and from users must pass through the ISP. Thus, an ISP has the potential to observe users' activities on the Internet. However, ISPs are usually prevented from participating in such activities for legal, ethical, business, or technical reasons. Despite these legal and ethical restrictions, some ISPs, such as British Telecom (BT), are planning to use deep packet inspection technology provided by companies such as Phorm in order to examine the contents of the pages that people visit. By doing so, they can build up a profile of a person's web surfing habits, which can then be sold to advertisers in order to provide targeted advertising. BT's attempt at doing this will be marketed under the name 'Webwise'. Normally, ISPs do collect at least some information about the consumers using their services. From a privacy standpoint, ISPs would ideally collect only as much information as they require in order to provide Internet connectivity (IP address, billing information if applicable, etc.). Which information an ISP collects, what it does with that information, and whether it informs its consumers pose significant privacy issues. Beyond the usage of collected information typical of third parties, ISPs sometimes state that they will make their information available to government authorities upon request. In the US and other countries, such a request does not necessarily require a warrant. An ISP cannot know the contents of properly encrypted data passing between its consumers and the Internet. For encrypting web traffic, https has become the most popular and best-supported standard. Even if users encrypt the data, the ISP still knows the IP addresses of the sender and of the recipient. (However, see the IP addresses section for workarounds.)
An anonymizer such as I2P – The Anonymous Network or Tor can be used to access web services without those services knowing the user's IP address and without the ISP knowing which services the user accesses. General concerns regarding Internet user privacy have become enough of a concern for a UN agency to issue a report on the dangers of identity fraud.[24] When signing up for internet services, each computer is assigned a unique IP (Internet Protocol) address. This particular address will not give away private or personal information; however, a weak link could potentially reveal information from the ISP.[25]

Legal threats
Use by government agencies of an array of technologies designed to track and gather Internet users' information is the topic of much debate between privacy advocates, civil libertarians and those who believe such measures are necessary for law enforcement to keep pace with rapidly changing communications technology.

Specific examples

Following a decision by the European Union's council of ministers in Brussels in January 2009, the UK's Home Office adopted a plan to allow police to access the contents of individuals' computers without a warrant. The process, called "remote searching", allows one party, at a remote location, to examine another's hard drive and Internet traffic, including email, browsing history and websites visited. Police across the EU are now permitted to request that the British police conduct a remote search on their behalf. The search can be granted, and the material gleaned turned over and used as evidence, on the basis of a senior officer believing it necessary to prevent a serious crime. Opposition MPs and civil libertarians are concerned about this move toward widening surveillance and its possible impact on personal privacy. Says Shami Chakrabarti, director of the human rights group Liberty, "The public will want this to be controlled by new legislation and judicial authorisation. Without those safeguards it's a devastating blow to any notion of personal privacy." The FBI's Magic Lantern software program was the topic of much debate when it was publicized in November 2001. Magic Lantern is a Trojan horse program that logs users' keystrokes, rendering encryption useless.

Laws for Internet Privacy Protection
USA PATRIOT Act
The purpose of this act, signed into law on October 26, 2001 by President George W. Bush, was to enhance law enforcement's investigatory tools, to investigate online activity, and to discourage terrorist acts both within the United States and around the world. The act reduced restrictions on law enforcement searches of various methods and tools of communication, such as telephone, e-mail, and personal records (including medical and financial records), and eased restrictions on obtaining foreign intelligence.

Electronic Communications Privacy Act (ECPA)
This act makes it unlawful under certain conditions for an individual to reveal the contents of electronic communications, and it contains a few exceptions. One clause allows the ISP to view private e-mail if the sender is suspected of attempting to damage the internet system or to harm another user. Another clause allows the ISP to reveal information from a message if the sender or recipient consents to its disclosure. Finally, messages containing personal information may also be revealed pursuant to a court order or a law-enforcement subpoena.


Employees' and Employers' Internet Regulations
When considering the rights of employees and employers regarding internet privacy and protection at a company, different states have their own laws. Connecticut and Delaware both have laws stating that an employer must provide a written or electronic notice making clear that it will regulate internet traffic. Such notice informs employees that the employer will be searching and monitoring emails and internet usage. Delaware charges $100 for a violation, while Connecticut charges $500 for the first violation and $1,000 for the second. For public employees and employers, California and Colorado created laws that likewise establish legal ways for employers to control internet usage. These laws state that a public company or agency must notify employees in advance that their accounts will be monitored. Without these laws, employers could access information through employees' accounts and use it unlawfully. In most cases, because of the notice given under these laws, both public and private employers are allowed to see whatever they please.

Other potential Internet privacy risks

• Malware is a term short for "malicious software" and is used to describe software designed to cause damage to a single computer, server, or computer network, whether through a virus, trojan horse, spyware, etc.[32]
• Spyware is a piece of software that obtains information from a user's computer without that user's consent.
• A web bug is an object embedded into a web page or email, usually invisible to the user of the website or the reader of the email. It allows checking whether a person has looked at a particular website or read a specific email message.
• Phishing is a criminally fraudulent process of trying to obtain sensitive information such as user names, passwords, and credit card or bank information. Phishing is an internet crime in which someone masquerades as a trustworthy entity in some form of electronic communication.
• Pharming is a hacker's attempt to redirect traffic from a legitimate website to a completely different internet address. Pharming can be conducted by changing the hosts file on a victim's computer or by exploiting a vulnerability on the DNS server.
• Social engineering
• Malicious proxy server (or other "anonymity" services)
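The hosts-file variant of pharming works because a resolver consults the local hosts file before querying DNS, so a single injected line silently redirects a legitimate hostname. The following is a minimal illustrative sketch of that precedence rule; all hostnames and addresses are made up for the example (203.0.113.x and 198.51.100.x are reserved documentation ranges), and real resolution is of course more involved.

```python
# Illustrative sketch: how a pharming attack via the hosts file works.
# A resolver checks the local hosts file *before* querying DNS, so one
# malicious line silently redirects a legitimate hostname.

LEGITIMATE_DNS = {"bank.example.com": "203.0.113.10"}  # what real DNS would return

def parse_hosts(text):
    """Parse hosts-file lines of the form '<ip> <hostname>...' into a dict."""
    mapping = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()   # drop comments and whitespace
        if line:
            ip, *names = line.split()
            for name in names:
                mapping[name] = ip
    return mapping

def resolve(hostname, hosts):
    # Hosts-file entries take precedence over DNS -- the core of this attack.
    return hosts.get(hostname) or LEGITIMATE_DNS.get(hostname)

clean = parse_hosts("127.0.0.1 localhost")
poisoned = parse_hosts(
    "127.0.0.1 localhost\n"
    "198.51.100.66 bank.example.com  # injected by malware"
)

print(resolve("bank.example.com", clean))     # 203.0.113.10 (the real site)
print(resolve("bank.example.com", poisoned))  # 198.51.100.66 (attacker's server)
```

The user types the correct address and sees no error; only the destination changes, which is why pharming is harder to notice than phishing.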

How to protect yourself from malware

• Keep your computer's software patched and current, but read the changes a new version offers in case it contains features or modifications you don't want or that will cause compatibility or stability issues. Both your operating system and your anti-virus application must be updated on a regular basis. Apply all relevant security updates and keep your anti-virus software up to date. Only download updates from reputable sources: for Windows operating systems, always use genuine Microsoft Windows updates; for other operating systems, always use the legitimate website of the company or person who produces them.
• Always think before you install something: weigh the risks and benefits, and be aware of the fine print. Does the lengthy license agreement you don't want to read conceal a warning that you are about to install spyware? Don't install anything from a website that doesn't look legitimate, and be aware of your internet surroundings.
• Install and use a firewall. If you are running Windows XP you can use the built-in software firewall under Control Panel, and there are free firewalls that work on all versions of Windows. If you are using a Mac, there are various free programs you can install to help protect your system. Microsoft has recently ramped up protection for users of Windows 7: Microsoft Security Essentials (MSE) is a free download for computers with genuine Windows 7. In the days of Windows XP, third-party antivirus and firewall software were essential; nowadays, many Windows 7 users rely solely on MSE. Don't let this give you a false sense of security, though: always be proactive, research what best suits your needs, and install trusted third-party software if needed.
• Prevention is always better than cure: do your best to protect your system from vulnerabilities, and don't open yourself up to malware.

Specific cases
Jason Fortuny and Craigslist
In early September 2006, Jason Fortuny, a Seattle-area freelance graphic designer and network administrator, posed as a woman and posted an ad to Craigslist Seattle seeking a casual sexual encounter with men in that area. On September 4, he posted to the wiki website Encyclopædia Dramatica all 178 of the responses, complete with photographs and personal contact details, describing this as the Craigslist Experiment and encouraging others to further identify the respondents.[33] Although some online exposures of personal information have been seen as justified for exposing malfeasance, many commentators on the Fortuny case saw no such justification here. "The men who replied to Fortuny's posting did not appear to be doing anything illegal, so the outing has no social value other than to prove that someone could ruin lives online," said law professor Jonathan Zittrain,[34] while Wired writer Ryan Singel described Fortuny as "sociopathic".[35] The Electronic Frontier Foundation indicated that it thought Fortuny might be liable under Washington state law, and that this would depend on whether the information he disclosed was of legitimate public concern. Kurt Opsahl, the EFF's staff attorney, said "As far as I know, they (the respondents) are not public figures, so it would be challenging to show that this was something of public concern."[34] According to Fortuny, two people lost their jobs as a result of his Craigslist Experiment and another "has filed an invasion-of-privacy lawsuit against Fortuny in an Illinois court." [36] Fortuny did not enter an appearance in the Illinois suit, secure counsel, or answer the complaint after an early amendment. Mr. Fortuny had filed a motion to dismiss, but he filed it with the Circuit Court of Cook County, Illinois, and he did not file proof that he had served the plaintiff. [37] As a result, the court entered a default judgment against Mr. 
Fortuny and ordered a damages hearing for January 7, 2009.[38] After failing to show up at multiple hearings on damages,[39][40] Fortuny was ordered to pay $74,252.56 for violation of the Copyright Act, compensation for public disclosure of private facts and intrusion upon seclusion, and attorneys' fees and costs.[41]

United States v. Warshak
This case, decided December 14, 2010 by the Sixth Circuit Court of Appeals, maintained the idea that an ISP is allowed access to private e-mail; however, the government must obtain a search warrant before acquiring such e-mail. The case dealt with the question of e-mails hosted on a remote server. Because e-mail is similar to other forms of communication such as telephone calls, it requires the same degree of protection under the Fourth Amendment.

Search engine data and law enforcement
Data from major Internet companies, including Yahoo! and MSN (Microsoft), have already been subpoenaed by the United States[42] and China.[43] AOL even provided a chunk of its own search data online,[44] allowing reporters to track the online behaviour of private individuals.[45]


In 2006, a wireless hacker pled guilty when his Google searches were used as evidence against him. The defendant ran a Google search over the network using the following search terms: "how to broadcast interference over wifi 2.4 GHZ," "interference over wifi 2.4 Ghz," "wireless networks 2.4 interference," and "make device interfere wireless network." While court papers did not describe how the FBI obtained his searches (e.g. through a seized hard drive or directly from the search engine), Google has indicated that it can provide search terms to law enforcement if given an Internet address or Web cookie.

US v. Ziegler
In the United States, many cases discuss whether a private employee (i.e., not a government employee) who stores incriminating evidence in workplace computers is protected by the Fourth Amendment's reasonable-expectation-of-privacy standard in a criminal proceeding. Most case law holds that employees do not have a reasonable expectation of privacy in their work-related electronic communications. See, e.g., US v. Simons, 206 F.3d 392, 398 (4th Cir., Feb. 28, 2000). However, one federal court held that employees can assert the attorney-client privilege with respect to certain communications on company laptops. See Curto v. Medical World Comm., No. 03CV6327, 2006 U.S. Dist. LEXIS 29387 (E.D.N.Y. May 15, 2006). Another recent federal case discussed this topic. On January 30, 2007, the Ninth Circuit court in US v. Ziegler reversed its earlier August 2006 decision upon a petition for rehearing. In contrast to the earlier decision, the Court acknowledged that an employee has a right to privacy in his workplace computer. However, the Court also found that an employer can consent to a government search of that computer. See US v. Ziegler, ___F.3d 1077 (9th Cir. Jan. 30, 2007, No. 05-30177). [1] Cf. US v. Ziegler, 456 F.3d 1138 (9th Cir. 2006). In Ziegler, an employee had accessed child pornography websites from his workplace. His employer noticed his activities, made copies of the hard drive, and gave the FBI the employee's computer. At his criminal trial, Ziegler filed a motion to suppress the evidence, arguing that the government had violated his Fourth Amendment rights. The Ninth Circuit allowed the lower court to admit the child pornography as evidence. After reviewing relevant Supreme Court opinions on the reasonable expectation of privacy, the Court acknowledged that Ziegler had a reasonable expectation of privacy at his office and on his computer.
That Court also found that his employer could consent to a government search of the computer and that, therefore, the search did not violate Ziegler's Fourth Amendment rights.

State v. Reid
The New Jersey Supreme Court has also issued an opinion on the privacy rights of computer users, holding in State v. Reid that computer users have a reasonable expectation of privacy concerning the personal information they give to their ISPs. In that case, Shirley Reid was indicted for computer theft for changing her employer's password and shipping address on its online account with a supplier. The police discovered her identity after serving the ISP, Comcast, with a municipal subpoena not tied to any judicial proceeding. The lower court suppressed the information from Comcast that linked Reid with the crime on grounds that the disclosure violated Reid's constitutional right to be protected from unreasonable search and seizure. The appellate court affirmed, as did the New Jersey Supreme Court, which ruled that ISP subscriber records can only be disclosed to law enforcement upon the issuance of a grand jury subpoena. As a result, New Jersey offers greater privacy rights to computer users than most federal courts. This case also serves as an illustration of how case law on privacy regarding workplace computers is still evolving.

Robbins v. Lower Merion School District
In Robbins v. Lower Merion School District (U.S. Eastern District of Pennsylvania 2010), the federal trial court issued an injunction against the school district after plaintiffs charged that two suburban Philadelphia high schools violated the privacy of students and others by secretly spying on students: the schools surreptitiously and remotely activated webcams embedded in school-issued laptops the students were using at home. The schools admitted to secretly snapping over 66,000 webshots and screenshots, including webcam shots of students in their bedrooms.

Teachers and MySpace
Teachers' privacy on MySpace has created controversy across the world. The Ohio News Association (ONA) has forewarned teachers that if they have a MySpace account, it should be deleted, and has posted a memo advising teachers not to join such sites. eSchool News warns, "Teachers, watch what you post online." Teachers can face consequences including license revocations, suspensions, and written reprimands. The Chronicle of Higher Education wrote an article on April 27, 2007, entitled "A MySpace Photo Costs a Student a Teaching Certificate", about Stacy Snyder, a student at Millersville University of Pennsylvania who was denied her teaching degree because of an unprofessional photo posted on MySpace, which showed her drinking while wearing a pirate's hat, with the caption "Drunken Pirate". Instead, she was awarded an English degree.

Internet privacy and Blizzard Entertainment
On July 6, 2010, Blizzard Entertainment announced that it would display the real names tied to user accounts in its game forums. On July 9, 2010, CEO and cofounder of Blizzard Mike Morhaime announced a reversal of the decision to force posters' real names to appear on Blizzard's forums. The reversal was made in response to subscriber feedback.

Internet privacy and Google Maps
In spring 2007, Google improved Google Maps to include what is known as "Street View". This feature gives the user a 3-D, street-level view with real photos of streets, buildings, and landmarks. To offer such a service, Google had to send out camera-mounted trucks to drive through every single street snapping photos. These photos were eventually stitched together to achieve a nearly seamless photorealistic map. However, the photos captured people in various acts, including a man urinating on the street, nude people seen through their windows, and, apparently, a man trying to break into someone's apartment, although some images are open to interpretation. This prompted a public outcry, and some time afterwards Google added a "report inappropriate image" feature to the website.

Challenges and responses
We aim to be the best sports company in the world, and that means meeting the challenges we face head-on and finding the appropriate responses to them. Our challenges are:
• Being a global business
• Being competitive
• Managing an external supply chain
• Building credibility and trust
• Managing change
• Being environmentally responsible
• Developing our people
• Supporting local communities

Being a global business
Our brands are visible all over the world, but that visibility creates its own challenges. For example, our role as official partner to the 2008 Beijing Olympics means millions of people saw our brands in action. However, our presence in China – where we have many suppliers and retail outlets – means we are also in the spotlight when China's economic growth is cited in discussions about the impacts of globalization on people and the environment.
Response
We accept that increased scrutiny of our business practices is in part a result of our own efforts to raise the profile and reach of our brands across the world. Like any global business, the adidas Group must manage wide-ranging commercial and competitive pressure to deliver increased financial returns and growth. At the same time we are accountable for our employees and have a responsibility towards the workers in our suppliers' factories and also for the environment. We are committed to striking the balance between shareholder interests and the needs and concerns of employees, workers and the environment, or in short to becoming a sustainable company.

Being competitive
Being competitive requires that we respond to consumer demands for a broad range of products. This in turn means we need a wide variety of suppliers. Ensuring consistent compliance with our social and environmental standards across this broader and more complex supply chain is a challenge.
Response
Our Workplace Standards are based on the International Labour Organization (ILO) and UN conventions relating to human rights and employment practices. The Workplace Standards are fundamental to our relationships with our suppliers and are a contractual obligation. In 2007 we rolled out our policies to all adidas Group entities to ensure adherence to the principle 'One Group – One set of standards'. These policies contain uniform and mandatory procedures related to disclosure of suppliers, approval of new suppliers, enforcement actions and termination practices.

Managing an external supply chain
Most of our products are manufactured by suppliers under contract to the adidas Group. Outsourced production is not without its risks. We have less control over how our suppliers operate and the conditions at their factories than we do at company-owned sites.
Response
Within our core supply chain we act as both inspectors and advisors, assessing management commitment to our Workplace Standards but also training our suppliers on the key issues. Our strategy is based on a long-term vision of self-governance for our suppliers and focuses on:

• Encouraging our business partners to establish a management systems approach to human resources and health, safety and the environment
• Training and advising our suppliers' workers and managers
• Raising environmental awareness and promoting best environmental practice, and
• Expanding our engagement with local worker organisations and NGOs to better understand working conditions in places where our products are made.

Building credibility and trust
The adidas Group has its own internal team for assessing how well our suppliers are complying with our supply chain code of conduct, the Workplace Standards. Some people question how impartial an internal team can be, and they call for us to publish the results of our assessments and to involve independent third parties in investigating and verifying supplier performance.
Response
We value transparency and stakeholder feedback. We report regularly on our compliance work, including the location of our suppliers globally. We also submit our programme to evaluation and public reporting by the Fair Labor Association, a non-profit organisation which assesses and verifies the compliance programmes of brands and publishes the results. Moreover, we continue to practise full disclosure to researchers, trade unions and other concerned NGOs, based on their specific requests – a practice we have followed for more than a decade. We also work collaboratively with our suppliers, labour activists, academics and others because we believe that working closely together demonstrates our commitment to meeting stakeholders' concerns and creating lasting change in factory and environmental conditions.

Managing change
As a company we do not act in isolation: we have to react to economic and social developments in the countries where our products are made. For example, we have had to adapt our programme in the face of worker strikes in Vietnam, factory closures in Indonesia and new legislation in China.
Response
Integrating our standards into our day-to-day operations lies at the heart of our ability to respond to these developments. The 'Social and Environmental Affairs' (SEA) team was created in 1997 to ensure supply chain compliance with the Workplace Standards. To make this a part of normal business practice, SEA team members are located near our suppliers and work closely with the Global Operations group, which is responsible for developing and sourcing products from suppliers.

To drive change in supplier behaviour and practices, the results of supply chain compliance performance must inform the Global Operations group and other supply chain decision-makers. So the Workplace Standards are an integral part of the manufacturing agreements the Group holds with its business partners. And the Global Operations group refers to our suppliers' performance against our Standards when deciding which suppliers to select and retain. In this way, we are driving change in the way our suppliers do business.

Being environmentally responsible
Our products must be competitive in function and price, but also safe. Products must be manufactured with the least environmental impact possible, without compromising function and quality. And we have to be efficient in our use of resources while still fully supporting our global business. The challenge is to balance these various demands.
Response
Reducing pollution with so-called end-of-pipe solutions offers only limited environmental benefits, so we strive to design out environmental problems by:

• Complying with all local laws and regulations
• Applying best practices at our own sites and operations
• Ensuring product materials and components are non-toxic and safe
• Promoting environmental management systems and best practices in the supply chain, where major environmental impacts occur
• Integrating environmental aspects in the product design and development process, which led to the Spring 2008 launch of our 'Grün' range, which uses recycled materials and has been made with the least possible environmental impact.

Developing our people
We operate all over the world, and have to mirror the global marketplace with a multinational workforce. Our challenge is to recruit, retain and develop this diverse group of employees so they achieve their full potential.
Response
The success of the Group is a direct result of the engagement of the people who work for us. We strive to be the best and most productive workplace in the industry by:

• Creating a working environment that stimulates team spirit and passion, engagement and achievement
• Instilling a performance culture, based upon strong leadership
• Fostering an understanding of social and environmental responsibility for the world in which we live – for the rights of all individuals, and for the laws and customs of the countries in which we operate
• Providing a secure working environment

Supporting local communities
Our business has an impact on communities all round the world. We need to understand local needs and design programmes that are core to our business strategy and make a real difference to people's lives.
Response
The adidas Group has adopted a largely decentralised and brand-oriented model for community involvement, recognising that people in our regional subsidiaries and Group entities best understand the needs and cultural sensitivities of their local communities. These initiatives derive from the brands' individual identities and values. They may vary in form, but they are all aimed at supporting children and young adults, with sports as a common theme. At Group level we continue to support our suppliers' communities, as well as make contributions to organisations that promote sustainable development practices within the industry.


Ergonomics: the science of designing user interaction with equipment and workplaces to fit the user.


Ergonomics is the study of designing equipment and devices that fit the human body, its movements, and its cognitive abilities. Proper ergonomic design is necessary to prevent repetitive strain injuries, which can develop over time and can lead to long-term disability.[1] The International Ergonomics Association defines ergonomics as follows:[2] Ergonomics (or human factors) is the scientific discipline concerned with the understanding of interactions among humans and other elements of a system, and the profession that applies theory, principles, data and methods to design in order to optimize human well-being and overall system performance. Ergonomics is employed to fulfill the two goals of health and productivity. It is relevant in the design of such things as safe furniture and easy-to-use interfaces to machines.

More than twenty technical subgroups within the Human Factors and Ergonomics Society (HFES)[5] indicate the range of applications for ergonomics. Human factors engineering continues to be successfully applied in the fields of aerospace, aging, health care, IT, product design, transportation, training, nuclear and virtual environments, among others. Kim Vicente, a University of Toronto Professor of Ergonomics, argues that the nuclear disaster in Chernobyl is attributable to plant designers not paying enough attention to human factors: "The operators were trained but the complexity of the reactor and the control panels nevertheless outstripped their ability to grasp what they were seeing [during the prelude to the disaster]."

Physical ergonomics is important in the medical field, particularly to those diagnosed with physiological ailments or disorders such as arthritis (both chronic and temporary) or carpal tunnel syndrome. Pressure that is insignificant or imperceptible to those unaffected by these disorders may be very painful, or render a device unusable, for those who are affected. Many ergonomically designed products are also used or recommended to treat or prevent such disorders, and to treat pressure-related chronic pain.

Human factors issues arise in simple systems and consumer products as well. Some examples include cellular telephones and other hand-held devices that continue to shrink yet grow more complex (a phenomenon referred to as "creeping featurism"), millions of VCRs blinking "12:00" across the world because very few people can figure out how to program them, or alarm clocks that allow sleepy users to inadvertently turn off the alarm when they mean to hit 'snooze'. User-centered design (UCD), also known as the systems approach or the usability engineering life cycle, aims to improve the fit between users and the systems they use.

Design of ergonomics experiments
There is a specific series of steps that should be followed to properly design an ergonomics experiment. First, one should select a problem that has practical impact; the problem should support or test a current theory. The experimenter should select one or a few dependent variables, which usually measure safety, health, and/or physiological performance, and should choose independent variables to be varied at different levels. Normally, an experiment involves paid participants and the existing environment, equipment, and/or software. When testing the users, one should give careful instructions describing the method or task and then obtain voluntary consent. The experimenter should recognize all the possible combinations and interactions in order to notice the many differences that could occur. Multiple observations and trials should be conducted and compared to maximize the best results. Once the design is complete, within-subjects and between-subjects arrangements should be considered to vary the data. Permission from the Institutional Review Board is often needed before an experiment can be run. A mathematical model should be used so that the data will be clear once the experiment is completed.


The experiment starts with a pilot test. Make sure in advance that the subjects understand the test, that the equipment works, and that the test can be finished within the given time. When the experiment actually begins, the subjects should be paid for their work. All times and other measurements should be carefully measured and recorded. Once all the data are compiled, they should be analyzed, reduced, and formatted in the right way. A report explaining the experiment should be written; it should typically display statistics including an ANOVA table, plots, and measures of central tendency. A final paper should be written and edited, after numerous drafts, to ensure an adequate report is the final product.
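The ANOVA table mentioned above can be sketched in a few lines. The following is a minimal, illustrative one-way ANOVA: one independent variable (here a hypothetical keyboard design with three levels) and one dependent variable (task completion time in seconds); the data are made up for the example, and a real analysis would also report the p-value from an F distribution.

```python
# Minimal one-way ANOVA sketch for an ergonomics experiment:
# does mean task time differ across three keyboard designs?

def one_way_anova(groups):
    """Return (F statistic, df_between, df_within) for a list of samples."""
    all_obs = [x for g in groups for x in g]
    grand_mean = sum(all_obs) / len(all_obs)

    # Between-group sum of squares: variation of group means around the grand mean
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares: variation of observations around their group mean
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)

    df_between = len(groups) - 1
    df_within = len(all_obs) - len(groups)
    f_stat = (ss_between / df_between) / (ss_within / df_within)
    return f_stat, df_between, df_within

# Hypothetical completion times (seconds) for three keyboard designs
keyboard_a = [52.1, 48.3, 50.7, 49.9]
keyboard_b = [44.2, 46.8, 43.5, 45.1]
keyboard_c = [55.0, 57.3, 54.1, 56.4]

f, dfb, dfw = one_way_anova([keyboard_a, keyboard_b, keyboard_c])
print(f"F({dfb}, {dfw}) = {f:.2f}")
```

A large F statistic relative to the critical value for those degrees of freedom indicates that the independent variable (keyboard design) has a real effect on the dependent variable, which is exactly what the report's ANOVA table summarises.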

Ergonomics in the workplace
(Figure: Bilaterally symmetric operating areas of the stationary human body)
Outside of the discipline itself, the term 'ergonomics' is generally used to refer to physical ergonomics as it relates to the workplace (as in, for example, ergonomic chairs and keyboards). Ergonomics in the workplace has to do largely with the safety of employees, both long- and short-term. Ergonomics can help reduce costs by improving safety, decreasing the money paid out in workers' compensation. For example, over five million workers sustain overextension injuries per year. Through ergonomics, workplaces can be designed so that workers do not have to overextend themselves, and the manufacturing industry could save billions in workers' compensation. Workplaces may take either a reactive or a proactive approach when applying ergonomics practices. Reactive ergonomics is when something needs to be fixed and corrective action is taken. Proactive ergonomics is the process of seeking out areas that could be improved and fixing the issues before they become a large problem. Problems may be fixed through equipment design, task design, or environmental design. Equipment design changes the actual, physical devices used by people. Task design changes what people do with the equipment. Environmental design changes the environment in which people work, but not the physical equipment they use.

Fields of ergonomics
Engineering psychology
Engineering psychology is an interdisciplinary part of ergonomics and studies the relationships of people to machines, with the intent of improving such relationships. This may involve redesigning equipment, changing the way people use machines, or changing the location in which the work takes place. Often, the work of an engineering psychologist is described as making the relationship more "user-friendly." Engineering psychology is an applied field of psychology concerned with psychological factors in the design and use of equipment. Human factors is broader than engineering psychology, which is focused specifically on designing systems that accommodate the information-processing capabilities of the brain.[6]

Macroergonomics
Macroergonomics is an approach to ergonomics that emphasizes a broad system view of design, examining organizational environments, culture, history, and work goals. It deals with the physical design of tools and the environment. It is the study of the society/technology interface and their consequences for relationships, processes, and institutions. It also deals with the optimization of the designs of organizational and work systems through the consideration of personnel, technological, and environmental variables and their interactions. The goal of macroergonomics is a completely efficient work system at both the macro- and micro-ergonomic level which results in improved productivity, and employee satisfaction, health, safety, and commitment. It analyzes the whole system, finds how each element should be placed in the system, and considers all aspects for a fully efficient system. A misplaced element in the system can lead to total failure.

History
Macroergonomics, also known as organizational design and management factors, deals with the overall design of work systems. This domain did not begin to receive recognition as a sub-discipline of ergonomics until the beginning of the 1980s. The idea and current perspective of the discipline was the work of the U.S. Human Factors Society Select Committee on the Future of Human Factors, 1980-2000. This committee was formed to analyze trends in all aspects of life and to look at how they would impact ergonomics over the following 20 years. The developments they found include:
1. Breakthroughs in technology that would change the nature of work, such as the desktop computer,
2. The need for organizations to adapt to the expectations and needs of this more mature workforce,
3. Differences between the post-World War II generation and the older generation regarding their expectations of the nature of the new workplace,
4. The inability of microergonomics alone to achieve reductions in lost-time accidents and injuries and increases in productivity,
5. Increasing workplace liability litigation based on safety design deficiencies.
These predictions have become, and continue to become, reality. Macroergonomic intervention in the workplace has been particularly effective in establishing a work culture that promotes and sustains performance and safety improvements.

Methods

Cognitive Walkthrough Method: A usability inspection method in which evaluators apply a user perspective to task scenarios to identify design problems. As applied to macroergonomics, evaluators analyze the usability of work-system designs to identify how well a work system is organized and how well the workflow is integrated.
Kansei Method: A method that transforms consumers' responses to new products into design specifications. As applied to macroergonomics, it can translate employees' responses to changes in a work system into design specifications.
High Integration of Technology, Organization, and People (HITOP): A manual, step-by-step procedure for applying technological change in the workplace. It makes managers more aware of the human and organizational aspects of their technology plans, allowing them to integrate technology efficiently in these contexts.
Top Modeler: A model that helps manufacturing companies identify the organizational changes needed when new technologies are being considered for their processes.
Computer-Integrated Manufacturing, Organization, and People System Design (CIMOP): A model for evaluating computer-integrated manufacturing, organization, and people system design based on knowledge of the system.
Anthropotechnology: A method concerned with the analysis and design modification of systems for the efficient transfer of technology from one culture to another.
Systems Analysis Tool (SAT): A method for conducting systematic trade-off evaluations of work-system intervention alternatives.


Macroergonomic Analysis of Structure (MAS): A method that analyzes the structure of work systems according to their compatibility with unique sociotechnical aspects.
Macroergonomic Analysis and Design (MEAD): A method that assesses work-system processes using a ten-step process.
Virtual Manufacturing and Response Surface Methodology (VMRSM): A method that uses computerized tools and statistical analysis for workstation design.[8]

Seating ergonomics
The best way to reduce pressure on the back is to be in a standing position. However, there are times when you need to sit. When sitting, the main part of the body weight is transferred to the seat. Some weight is also transferred to the floor, backrest, and armrests. Where the weight is transferred is the key to good seat design. When the proper areas are not supported, sitting in a seat all day can put unwanted pressure on the back, causing pain. The lumbar region (the bottom five vertebrae of the spine) needs to be supported to decrease disc pressure. Providing a seat back that both inclines backwards and has lumbar support is critical to preventing excessive lower-back pressure. The combination that minimizes pressure on the lower back is a backrest inclination of 120 degrees and a lumbar support of 5 cm. The 120-degree inclination means the angle between the seat and the backrest should be 120 degrees. The lumbar support of 5 cm means the chair backrest supports the lumbar region by protruding 5 cm in the lower-back area. One drawback of creating an open body angle by moving the backrest backwards is that it takes one's body away from the tasking position, which typically involves leaning inward towards a desk or table. One solution to this problem is the kneeling chair. A proper kneeling chair creates the open body angle by lowering the angle of the lower body, keeping the spine in alignment and the sitter properly positioned for the task. The benefit of this position is that if one leans inward, the body angle remains 90 degrees or wider. One misperception regarding kneeling chairs is that the body's weight bears on the knees, and thus users with poor knees cannot use the chair. This misperception has led to a generation of kneeling chairs that attempt to correct for it by providing a horizontal seating surface with an ancillary knee pad. This design wholly defeats the purpose of the chair.
In a proper kneeling chair, some of the weight bears on the shins, not the knees, but the primary function of the shin rests (knee rests) is to keep one from falling forward out of the chair. Most of the weight remains on the buttocks. Another way to keep the body from falling forward is with a saddle seat. This type of seat is generally seen in some sit-stand stools, which seek to emulate the riding or saddle position of a horseback rider, the first "job" involving extended periods of sitting. Another key to reducing lumbar disc pressure is the use of armrests. They help by transferring some of the body's weight away from the seat and backrest onto the armrests. Armrests need to be adjustable in height to ensure the shoulders are not overstressed.
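The numeric guidance above (a backrest inclination of about 120 degrees and a lumbar support protruding about 5 cm) can be captured in a short sketch. This is a hypothetical helper, not from the source; the tolerance values are illustrative assumptions.

```python
# Recommended values from the seating guidance above.
RECOMMENDED_BACKREST_ANGLE_DEG = 120  # angle between seat and backrest
RECOMMENDED_LUMBAR_SUPPORT_CM = 5     # backrest protrusion at the lower back

def check_chair(backrest_angle_deg, lumbar_support_cm,
                angle_tolerance=5, lumbar_tolerance=1):
    """Return a list of warnings for settings outside the recommended range.

    The tolerances are assumed values for illustration, not from the source.
    """
    warnings = []
    if abs(backrest_angle_deg - RECOMMENDED_BACKREST_ANGLE_DEG) > angle_tolerance:
        warnings.append(
            f"Backrest angle {backrest_angle_deg} deg is outside "
            f"{RECOMMENDED_BACKREST_ANGLE_DEG} +/- {angle_tolerance} deg"
        )
    if abs(lumbar_support_cm - RECOMMENDED_LUMBAR_SUPPORT_CM) > lumbar_tolerance:
        warnings.append(
            f"Lumbar support {lumbar_support_cm} cm is outside "
            f"{RECOMMENDED_LUMBAR_SUPPORT_CM} +/- {lumbar_tolerance} cm"
        )
    return warnings

print(check_chair(120, 5))  # within guidance -> no warnings
print(check_chair(90, 0))   # upright chair, no lumbar support -> two warnings
```

A chair set to the recommended 120 degrees with 5 cm of lumbar support passes with no warnings; a conventional upright chair at 90 degrees with no lumbar support fails both checks.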

The International Ergonomics Association (IEA) is a federation of ergonomics and human factors societies from around the world. The mission of the IEA is to elaborate and advance ergonomics science and practice, and to improve the quality of life by expanding its scope of application and contribution to society. As of September 2008, the International Ergonomics Association has 46 federated societies and 2 affiliated societies. The Society of Automotive Engineers (SAE) is a professional organization for mobility engineering professionals in the aerospace, automotive, and commercial vehicle industries. The Society is a standards development organization for the engineering of powered vehicles of all kinds, including cars, trucks, boats, aircraft, and others. The Society of Automotive Engineers has established a number of standards used in the automotive industry and elsewhere. It encourages the design of vehicles in accordance with established Human Factors principles. It is one of the most influential organizations with respect to ergonomics work in automotive design. This society regularly holds conferences which address topics spanning all aspects of Human Factors/Ergonomics. In the UK the professional body for ergonomists is the Institute of Ergonomics and Human Factors, and in the USA it is the Human Factors and Ergonomics Society. In Europe professional certification is managed by the Centre for Registration of European Ergonomists (CREE). In the USA the Board of Certification in Professional Ergonomics performs this function. In Canada the professional body for ergonomists is the Association of Canadian Ergonomists. The Human Factors and Ergonomics Society (HFES) is the world's largest organization of professionals devoted to the science of human factors and ergonomics. The Society's mission is to promote the discovery and exchange of knowledge concerning the characteristics of human beings that are applicable to the design of systems and devices of all kinds. [9]

Cyberterrorism is a phrase used to describe the use of Internet-based attacks in terrorist activities, including acts of deliberate, large-scale disruption of computer networks, especially of personal computers attached to the Internet, by means of tools such as computer viruses. Cyberterrorism is a controversial term. Some authors choose a very narrow definition, relating to deployments, by known terrorist organizations, of disruption attacks against information systems for the primary purpose of creating alarm and panic. By this narrow definition, it is difficult to identify any instances of cyberterrorism. Cyberterrorism can also be defined much more generally as any computer crime targeting computer networks without necessarily affecting real-world infrastructure, property, or lives.

There is debate over the basic definition of the scope of cyberterrorism. There is variation in qualification by motivation, targets, methods, and centrality of computer use in the act. Depending on context, cyberterrorism may overlap considerably with cybercrime or ordinary terrorism.[1]

Narrow definition
If cyberterrorism is treated similarly to traditional terrorism, then it only includes attacks that threaten property or lives, and can be defined as the leveraging of a target's computers and information, particularly via the Internet, to cause physical, real-world harm or severe disruption of infrastructure. Some argue that cyberterrorism does not exist and is really a matter of hacking or information warfare. They disagree with labeling it terrorism because of the unlikelihood of creating fear, significant physical harm, or death in a population using electronic means, given current attack and protective technologies. If a strict definition is assumed, then there have been no or almost no identifiable incidents of cyberterrorism, although there has been much public concern.


Broad definition
Cyberterrorism is defined by the Technolytics Institute as "The premeditated use of disruptive activities, or the threat thereof, against computers and/or networks, with the intention to cause harm or further social, ideological, religious, political or similar objectives. Or to intimidate any person in furtherance of such objectives." [2] The term was coined by Barry C. Collin.[3] The National Conference of State Legislatures, an organization of legislators created to help policymakers address issues such as the economy and homeland security, defines cyberterrorism as: "[T]he use of information technology by terrorist groups and individuals to further their agenda. This can include use of information technology to organize and execute attacks against networks, computer systems and telecommunications infrastructures, or for exchanging information or making threats electronically. Examples are hacking into computer systems, introducing viruses to vulnerable networks, web site defacing, denial-of-service attacks, or terrorist threats made via electronic communication."[4] For the use of the Internet by terrorist groups for organization, see Internet and terrorism. Cyberterrorism can also include attacks on Internet businesses, but when this is done for economic rather than ideological motivations, it is typically regarded as cybercrime. As shown above, there are multiple definitions of cyberterrorism, and most are overly broad. There is controversy concerning overuse of the term and hyperbole in the media and by security vendors trying to sell "solutions".[5]

*******************THE END******************

