
The Need to Rethink Decentralized Computing in Higher Education

Copyright 1994 CAUSE.

From _CAUSE/EFFECT_ Volume 17, Number 4, Winter 1994. Permission to copy or disseminate all or part of this material is granted provided that the copies are not made or distributed for commercial advantage, the CAUSE copyright and its date appear, and notice is given that copying is by permission of CAUSE, the association for managing and using information resources in higher education. To disseminate otherwise, or to republish, requires written permission. For further information, contact Julia Rudy at CAUSE, 4840 Pearl East Circle, Suite 302E, Boulder, CO 80301 USA; 303-939-0308; e-mail:

VIEWPOINT

THE NEED TO RETHINK DECENTRALIZED COMPUTING IN HIGHER EDUCATION

by Martin B. Solomon

American public higher education received massive public funding and support from the 1940s until the mid-1980s, allowing a huge expansion of programs and facilities. In the late 1980s, however, legislatures began to cut back funding, and by 1992 the cutbacks had become extremely painful. It is clear now that the good old days of fiscal abundance will never return and that many of the paradigms characterizing higher education must change. One such paradigm involves the management and control of campus networking and computing.

CENTRALIZATION VS. DECENTRALIZATION

The recent emphasis in management circles upon decentralization within organizations as a tool for downsizing and reengineering has led many to believe that decentralization of anything and everything is most cost-effective and that all centralization ideas died with the Soviet Union. However, the choice between centralization and decentralization requires analysis of each specific case. Neither is inherently more cost-effective. The availability of inexpensive microcomputers and massive funding for them led higher education, as well as industry, to launch an enormous move to decentralized computing.
Decentralization of computing provided ease of use, eliminated red tape and bureaucracy, and allowed faster decisions at the grass-roots level. It was a very satisfying experience for users who had been annoyed by the bureaucracy of central computing organizations and their slow response to departmental needs. Through decentralization, departments could satisfy their own needs more quickly, without the hassles of the past. Questions remain, however, as to the efficacy of this movement and whether it represents the best future approach. Decentralization brings duplication and redundancy. Do such additional costs outweigh the additional value of decentralization?

We now find that there are many hidden costs of decentralized, departmental computing. Some of those hidden costs represent underestimates of the original costs.[1] Some represent missed opportunities. Nevertheless, one thing is becoming very clear: in this rush to decentralization, several important things were left behind. Discussing those things is the purpose of this viewpoint.

STANDARDIZATION

Standardization of hardware and software cannot be done in a decentralized manner. Consequently, the rush to decentralization has left it behind. Yet standardization may be the most important aspect of computing cost control that an organization can employ. It can affect every component of costs. Standards can reduce initial acquisition costs of hardware and software, reduce the costs of training and upgrades, reduce system development costs, and reduce the time required to diagnose and repair both hardware and software problems.

One unintended consequence of departmental computing has been an institutional disregard for enterprise standards. A particularly knowledgeable person (the departmental guru) is the person to whom members of a department turn for advice. Departments purchase hardware and software based upon the guru's goals and objectives. For example, if a guru prides himself or herself on being a cost-cutter, that department purchases computers and software systems based largely on price. If compatibility is more important to the guru, the department may select a single vendor's equipment. The point is that each department makes its own decisions independently of enterprise needs, and few people in the department besides the guru have significant expertise or sophistication in such decisions (and often the guru does not, either). What is the cost of this suboptimization? It is likely to be enormous.
As it becomes necessary to develop and integrate enterprise-wide applications, this casual acquisition approach will result in large amounts of wasted time detecting and resolving hardware and software compatibility problems. Further, because different departments will employ different memory sizes, disk drive capacities, processors, networking protocols, and software components, progress toward enterprise-wide systems will be stunted. One major university has determined that it would be uneconomical to implement a university-wide electronic forms project because of the vast diversity of hardware and software platforms within its various colleges and offices.

Without standards and control of software acquisitions, immense problems may ensue. Most organizations cannot determine which of their software is legal and which is not; these organizations are easy targets for software companies looking for proper compensation. Without standards for virus protection and regular updates, the institution could be constantly open to immense losses of time and data through virus contamination, as well as to lawsuits.

The lack of standards may also result in excessively costly hardware, software, and labor expenditures due to the required replacement of many computers and components that do not fit into enterprise-wide plans. An institution could create a single office to receive requests for all hardware, software, and networking components. This office could order and catalog all purchases and approve only items that will plug and play in an enterprise-wide system. While this flies in the face of the traditional higher education philosophy that each person is an independent entrepreneur, it is one paradigm change that may be necessary for the late 1990s.

SOFTWARE DISTRIBUTION

A recent projection by Peat Marwick reveals that the cost of installing, upgrading, and maintaining microcomputer software is significantly larger than most people realize.[2] Assume that an organization starts with 1,000 microcomputers and five application programs running on each computer. Assume further that it adds 100 new microcomputers and one new application each year and upgrades each of the microcomputer application programs once each year, and that the labor cost for upgrading an application, error free, is $50. The results show that the organization will pay $2.6 million over five years simply to install the software. This does not include the cost of the computers, the software, or the upgrades themselves--only the labor to install the software.

A real-world example of distribution costs involves Sprint. In 1992 Sprint installed a large client/server system in Kansas City with 450 personal computers. Without using a centralized approach to software distribution, a software upgrade required approximately one hour per personal computer, or 450 hours. After switching to a centralized approach, the task required only sixteen hours.[3]

TRAINING AND CONSULTING

If an organization employs a reasonable set of hardware and software standards, training costs can be minimized.
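As a sketch of the Peat Marwick-style projection discussed above: the article does not state the exact counting conventions, so this assumes each application on each machine is touched (freshly installed or upgraded) once per year at $50 of labor per touch. Under that reading, the arithmetic reproduces the cited figure.

```python
# Back-of-the-envelope check of the software-distribution projection:
# 1,000 PCs and 5 applications to start; 100 new PCs and 1 new
# application added each year; every (PC, application) pair touched
# once a year at $50 of labor. Assumed reading, not Peat Marwick's model.

LABOR_PER_TOUCH = 50  # dollars of labor per error-free install or upgrade

def five_year_install_cost(start_pcs=1000, start_apps=5,
                           pcs_per_year=100, apps_per_year=1,
                           labor=LABOR_PER_TOUCH, years=5):
    """Total labor if each application on each PC is touched once a year."""
    total = 0
    for year in range(1, years + 1):
        pcs = start_pcs + pcs_per_year * year    # 1,100 ... 1,500
        apps = start_apps + apps_per_year * year  # 6 ... 10
        total += pcs * apps * labor               # one touch per pair
    return total

print(five_year_install_cost())  # prints 2650000, roughly the $2.6 million cited
```

The dominant term is the yearly upgrade of every application on every machine, which is why centralized software distribution (as in the Sprint example) pays off so quickly.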
Continuing enterprise-wide classes can employ first-rate instructors and quality media-based training materials. At the same time, a manageable number of central experts can provide excellent consulting support to faculty and staff with problems or questions. Hardware and software standards permit staff members to transfer between departments and begin productive work immediately, without the need to train on different systems. Finally, staff and faculty will be better prepared to help each other with solutions if standards are employed. Such interchange of information will further reduce the cost of training.

SKIPPED STEPS

Over the past thirty years, the data processing industry has matured greatly. A well-defined systems development process employs a standard set of steps: preliminary and detailed systems analysis, conceptual and detailed systems design, program design, coding, testing, documentation, and creating a production environment. One of the traits of departmental systems development is the avoidance of these traditional steps, which central information systems groups require. Planning is often minimized within individual departments, which customarily avoid developing analysis or design documents, project management procedures, system documentation, and production procedures (or documentation of any kind). This is because a departmental guru normally does all the work, and nobody in the department oversees the process or even understands what documents to produce.

Because departments skip so many steps in the systems development process, the time to get applications "up and running" is often relatively short. But the cost of skipping these steps has not been assessed. Since departmental systems begin with no legacy software, there are no initial problems related to compatibility and interaction among systems. Some people criticize the traditional steps of computer systems development for taking too much time, but those steps have endured the test of time: most major university computer systems have been able to mature and provide high-quality, reliable service in spite of continuing staff turnover and continuous requirements for modifications. The old adage about "no free lunch" may well characterize the "quick-and-dirty" approach to computer systems development.

SPOTTY VS. COMPREHENSIVE SUPPORT

In a decentralized environment, departments that have particularly adept staff will be able to detect and repair problems quickly.
Departments without such capability will flounder, going for long periods without being able to repair problems, or experiencing intermittent outages for which they have no solutions. It is difficult to calculate the cost of downtime that idles employees for the duration of an outage. If a server is inoperable for six hours and 100 people who earn $10 per hour depend upon its availability, that single outage might cost $6,000. If a central staff of particularly qualified technicians could detect and repair such a failure in one hour, the savings from that single outage would be $5,000; if this occurred ten times a year, the savings would be $50,000.

PERSONNEL

Fewer, larger organizations have advantages over smaller, more numerous ones. A central support staff of multiple people can absorb the shock of unanticipated turnover. At the same time, a central organization should find it easier to hire and retain higher-quality talent than departments can attract. An early study by Lee Selwyn of Harvard found that large computing organizations paid a smaller percentage of their total costs for personnel.[4] Another study found that larger computing organizations paid a smaller percentage of costs than smaller organizations while, at the same time, paying higher average salaries.[5] It makes sense that many employees would rather work for a large, central organization that offers opportunities for promotion, professional development, and technical variety. In addition, a large organization can obtain staff with the precise skills it needs, whereas in a department one or two people must perform a wide variety of tasks, some of which are new and require learning time as well as the typical novice errors implicit in a learning curve. This creates great inefficiencies and much lost time in small, departmental groups.

As mentioned earlier, the departmental guru skips most of the steps in the systems development process and carries the design and logic of departmental computer systems in his or her head. Consequently, the guru becomes indispensable. The greatest liability an organization can acquire is an indispensable person! When that person leaves, chaos will surely reign, and for a period of time the department will have difficulty functioning. Equally important is the probability that these departmental systems will not fit into an enterprise-wide strategy for the future and will require expensive conversions.

Central systems developers are able to consider the entire organization in systems designs, cushion the impact of turnover by having multiple staff participate in each project, and ensure that developers follow time-honored development methodologies. Larger, central technology staffs can afford to employ a variety of highly trained specialists, so that virtually every type of talent required will be available.
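Returning to the outage example in the support section above, the savings arithmetic can be sketched directly. The figures (six-hour versus one-hour repair, 100 dependent employees at $10 per hour, ten outages a year) are the article's own.

```python
# Outage-cost arithmetic from the spotty-vs-comprehensive-support example:
# an outage idles everyone who depends on the server, so its cost is
# hours * people * hourly wage.

def outage_cost(hours, people=100, wage=10):
    """Dollar cost of idling `people` employees for `hours` at `wage`/hour."""
    return hours * people * wage

decentralized = outage_cost(6)   # six hours to find and fix the fault: $6,000
central = outage_cost(1)         # a central team repairs it in one hour: $1,000
savings_per_outage = decentralized - central

print(savings_per_outage)        # prints 5000, as in the text
print(savings_per_outage * 10)   # prints 50000 across ten outages a year
```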
In addition, central services groups require a smaller number of people than the aggregate of many departmental staffs. Finally, significant faculty time is spent "under the table" sorting through technical problems and diagnosing difficulties--time that would be better used for primary departmental functions.

CONCLUSION

Much of the appeal of decentralization is valid and founded on solid principles. But some of it is an emotional response to a massively successful sales campaign on the part of hardware and software vendors and consultants. Some is the result of so many "computer literate" faculty and staff who have their own views of requirements but do not have the experience or sophistication to see beyond initial equipment costs. Colleges and universities too often avoid factual analysis, hoping instead to reduce costs, and push blindly toward client/server systems urged on by departments and vendors. Interestingly enough, some of the same people who initially questioned the consequences of the unplanned infusion of microcomputers, and who now question the wisdom of the client/server panic, are again being labeled "mainframe bigots."

Computing is treated as a "right" in many companies and especially in universities. That is, computing planning has often been more political than business-like. Organizations have been slow or totally unwilling to develop standards because of the political fallout. For example, a massive reorganization is taking place at the Tennessee Valley Authority, which has 19,500 employees, sales of $5 billion, and an Information Systems budget of $100 million per year with 925 employees. Part of the rationale for the reorganization has to do with the fact that "TVA's previous IS executive centralized the then-scattered IS departments, but that offended the business units. ... The previous chief also started to drag the IS department away from IBM mainframes and into client/server computing ... which alienated the mainframers."[6]

The world of client/server is on its way. Initially it is turning out to be more expensive, more complex, and less robust than most people anticipated. But as developments continue, it will become more cost-effective. Bob Heterick, president of Educom, points out, "We have come to recognize that, after the first generation, technology always changes the organization and service levels more than it reduces costs. Successive generations of technology, in order to more than marginally change costs, require us to reengineer our business practices."[7]

The time has come to recognize that enterprise-wide hardware and software compatibility and system reliability are paramount issues, and to reevaluate unrestrained decentralization of computing in higher education. Because countless departmental applications spring up each year, the longer an organization waits to understand the criticality of enterprise-wide needs, the more costly and painful change will be.
Until higher education accepts the notion that computing paradigms must change, we will waste more time and money, year after year. The welfare of the entire organization must take precedence over the wants of the departments for the long-term success of information systems and of the organization itself.

=============================================================

Footnotes:

1 Martin B. Solomon, "The Hidden Costs of Client/Server," _CAUSE/EFFECT_, Spring 1994, pp. 47-51.

2 Larry Rodda, "The Hidden Cost of C/S: Software Distribution," _Datamation_, 1 December 1993, p. 20.

3 Bruce Caldwell, "Looking Beyond the Costs," _Information Week_, 3 January 1994, p. 56.

4 Lee Selwyn, _Economies of Scale in Computer Use_ (Cambridge, Mass.: Project MAC, 1970).

5 Martin B. Solomon, "Economies of Scale and Computing Personnel," _Datamation_, March 1970, p. 108.

6 Mitch Betts, "Utility Sparks IS Revamp to Plug Credibility Gap," _Computerworld_, 13 December 1993, pp. 1, 16.

7 Robert C. Heterick, Jr., "Too Smart Is Dumb," _Educom Review_, November/December 1993, p. 56.

*************************************************************

Martin B. Solomon is Professor of Computer Science at the University of South Carolina. He has served as Chief Information Officer at the University of South Carolina, Director of Academic Computing at Ohio State, and Director of the Computing Center at the University of Kentucky.

*************************************************************
