
The Hidden Costs of Client/Server Computing

Copyright 1994 CAUSE. From _CAUSE/EFFECT_ Volume 17, Number 1, Spring 1994.

Permission to copy or disseminate all or part of this material is granted provided that the copies are not made or distributed for commercial advantage, the CAUSE copyright and its date appear, and notice is given that copying is by permission of CAUSE, the association for managing and using information resources in higher education. To disseminate otherwise, or to republish, requires written permission. For further information, contact Julia Rudy at CAUSE, 4840 Pearl East Circle, Suite 302E, Boulder, CO 80301 USA; 303-939-0308; e-mail: jrudy@CAUSE.colorado.edu

THE HIDDEN COSTS OF CLIENT/SERVER COMPUTING

by Martin B. Solomon

We tend to blame IBM for everything bad in computing since it fell from grace. Continuing that trend, I will blame IBM for the client/server phenomenon that is sweeping this country. For most of the 1980s it was clear that powerful computers could be manufactured for a fraction of the cost of the large, multi-million-dollar mainframes of the 1970s and 1980s. Intel and others manufactured 10-MIPS microcomputer chips for hundreds of dollars, while IBM sold 10-MIPS computers for millions of dollars. While these cost comparisons are not completely fair or valid, the point is that CEOs and boards of directors of large companies became convinced that data processing organizations could be more cost-effective by abandoning the practice of upgrading very expensive mainframe hardware and software systems every year or two and switching instead to smaller, less expensive computers.

IBM saw this trend growing, but in my view its management did not change course, for two major reasons. First, they did not believe that customers could make the switch; the reprogramming challenges seemed to defy human capabilities. Second, and perhaps more important, short-range thinking within IBM refused to give up large profit margins on mainframes in favor of smaller margins on midrange computers and virtually no margins on microcomputers. The hardware and software industries within the U.S. seized upon IBM's blind spot with a vengeance. It was an opportunity, for the first time in decades, to get a foot in the door of large IBM mainframe computing organizations.

Client/server promise

Vendors of minicomputer hardware and client/server software began selling cost-saving solutions on almost every street corner. Literally hundreds of companies began marketing complete or partial solutions to more cost-effective computing. The major selling point was that well-equipped minicomputers could be purchased for hundreds of thousands of dollars, whereas mainframe computers cost millions.

The promise of vast savings captured the imagination of corporate executives at every level and held out hope of reduced costs in a time of extreme financial stress. Now that organizations have begun the migration into client/server downsizing activities, some experiences and problems have started to accumulate. This viewpoint article gathers some actual data on corporate experiences indicating that the savings possible through client/server downsizing may not be as easy to capture as some people originally thought.

Underestimates of cost and time

PHH Fleet America has been a pioneer in downsizing. PHH leases 100,000 vehicles, receives 3,000 calls per day, and needs quick access to its maintenance records. PHH implemented a client/server system using Intel 486 servers, OS/2, and LAN Manager. This system, while economical, proved incapable of handling the load due to a lack of memory. Next, PHH swapped out the 486 servers for Parallan OS/2 servers that allowed for more memory. While more memory helped, serious reliability problems developed: "[The] network began to crash for no apparent reason at increasingly frequent intervals."[1] The system lacked the necessary tools to track and monitor these problems. PHH Fleet then decided to replace LAN Manager with Novell NetWare. This seemed to resolve the reliability problems, but as the volume of calls increased, the system could not handle the load. Finally, PHH Fleet switched over to Sun SPARC 10 UNIX processors. While PHH Fleet seems to have resolved both the reliability and capacity problems, it is clear that it vastly underestimated the time, effort, and capacity required, and that any savings produced by the new system were only a fraction of those anticipated, if there were any at all.

"Corporate Computing" reported on a Fortune 100 services company that decided to implement client/server.[2] This case study documents how sizable the underestimation of costs can be when planning client/server systems. The original estimate was $10,000 for a large 486 server and $6,000 for an SQL software license. As project planning proceeded, the staff began to worry about performance, so they substituted an IBM RS/6000-350 for the 486 server. This was not a big difference, only $20,000 more, and the UNIX license added only about $10,000. As the project unfolded, other additions were made. Sybase was added for $15,000, database maintenance was needed at $5,000, and data center improvements required $50,000 extra. Add to that UNIX Wizard support for $50,000 and unanticipated staff training for $50,000, and the total project cost grew to $160,000, against an original estimate of only $16,000. Again, the promise of client/server savings was not fulfilled.

Arvin Industries started a downsizing project that would free its users from the rigid control of centralization. "Users took off in every direction," said James Campabello, a manager at Arvin. Campabello "fears that much of the cost savings and advantages of downsized, distributed systems are being eaten up by continual maintenance and cumbersome multistep conversions."[3]

While systems designers using more traditional data processing technologies also underestimate costs, the profession has had far less experience judging the time and effort required to design, develop, and implement client/server systems. This might explain the vast errors in judgment.

Hidden costs of software

Client/server computing often implies a high degree of decentralization, and most U.S. companies and institutions have moved in this direction as part of the growing trend toward client/server architectures. What we find, however, is a far-flung and disorganized approach to departmental software acquisition and maintenance. Most large organizations cannot tell you how many copies of which software packages are installed, much less how many of them are legal copies. What is worse, most of these costs are hidden and not fully understood by management because they creep in so gradually. Large portions of the costs are simply redirected time of staff who were hired for one purpose but pressed into service for another--supporting software systems.

The Gartner Group has attempted to quantify the true costs of personal computer software. Based on a study of the five-year costs of PC software in a 2,000-user organization, Gartner concluded that only 14 percent of the true costs were associated with the initial license fee of purchased software packages. The organizations under study purchased an average of six PC packages per user and spent approximately $30 million over the five-year period on PC software. That represents $3,000 per year per user. The largest portion of costs went into staff support for the software, including training and consulting assistance. The breakdown is displayed in Table 1.

************************************************************************

Table 1
Costs of PC software

Cost Category         Percent    Millions
Support                   44%       $13.0
Distribution              17%        $5.0
Initial License           14%        $4.1
Administration            13%        $3.8
Upgrade Fees              10%        $3.0
Product Selection          2%        $0.6

************************************************************************
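The arithmetic behind these figures is easy to check. The short Python sketch below is purely illustrative; it uses the category amounts from Table 1 and the 2,000-user, five-year scope of the Gartner study cited above, and reproduces both the roughly $3,000-per-user annual figure and the 14 percent license share.

    # Rough check of the Gartner figures in Table 1 (illustrative only).
    costs_millions = {
        "Support": 13.0,
        "Distribution": 5.0,
        "Initial License": 4.1,
        "Administration": 3.8,
        "Upgrade Fees": 3.0,
        "Product Selection": 0.6,
    }  # five-year totals, in millions of dollars, for a 2,000-user organization

    users, years = 2000, 5
    total = sum(costs_millions.values())               # about $29.5 million
    per_user_per_year = total * 1e6 / (users * years)  # about $2,950
    license_share = costs_millions["Initial License"] / total

    print(f"Five-year total: ${total:.1f}M")
    print(f"Per user per year: ${per_user_per_year:,.0f}")
    print(f"Initial license share: {license_share:.0%}")  # about 14 percent

Put another way, in this study roughly six of every seven dollars spent on PC software went to something other than the license itself.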

High costs of networking

The old SNA world was so simple. Dumb terminals connected to a mainframe required much less time and effort to troubleshoot and repair than the scattered local area network (LAN) approach of today.

Forrester Research, Inc. studied 34 Fortune 1000 companies and documented the typical costs of supporting a 5,000-user corporate network.[4] On average, these corporations spent $1,270 per year per user on networking support in a LAN environment, whereas Forrester estimates it would cost an average of $460 per year per user in a traditional SNA network. The breakdown of the $1,270 is displayed in Table 2.

************************************************************************

Table 2
Typical costs of supporting a corporate network

Cost Category             Percent    Millions
LAN Administration            59%        $3.8
Physical LAN Support          22%        $1.4
Bridge/Router Support          9%        $0.6
Help Desk                      8%        $0.5
Dial-in Support                2%        $0.1

************************************************************************
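At the scale Forrester studied, the per-user gap translates into millions of dollars a year. The sketch below (again illustrative Python, simply multiplying the per-user figures quoted above by the study's 5,000-user network size) makes the comparison explicit.

    # Annual support cost, LAN versus SNA, at the 5,000-user network
    # size used in the Forrester study (illustrative only).
    users = 5000
    lan_per_user = 1270   # dollars per user per year, LAN environment
    sna_per_user = 460    # dollars per user per year, traditional SNA

    lan_total = users * lan_per_user   # $6,350,000 per year
    sna_total = users * sna_per_user   # $2,300,000 per year

    print(f"LAN support: ${lan_total:,} per year")
    print(f"SNA support: ${sna_total:,} per year")
    print(f"LAN premium: ${lan_total - sna_total:,} per year "
          f"({lan_total / sna_total:.1f}x the SNA cost)")

On those figures, LAN support runs nearly three times the cost of the SNA equivalent--an extra $4 million or so per year for a 5,000-user network.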

Not everyone believes

Unisys has decided to swim upstream, consolidating all fifty-two of its data centers into one 100,000-square-foot InfoHub in Eagan, Minnesota.[5] With an information technology budget of $336 million, CIO William Rowan estimated that the consolidation will save Unisys about $100 million per year by cutting system and human redundancy. The company has estimated a reduction in total IT headcount from 800 to 500 over a two-year period. Coca Cola is following the Unisys route as well, combining the major data centers from its five large subsidiaries into one giant data center.

Canadian National Railway is extremely guarded about client/server technology. Andre De Rico, a top IT planner at CN, sees client/server technology as useful in some cases, but believes that it lacks critical capabilities found in the mainframe world. According to De Rico, "It will be ten years before CN relies completely upon client/server for most of its critical applications."[6] An analysis of mainframe versus client/server systems revealed a number of CN assumptions concerning the two technologies (see Table 3).

************************************************************************

Table 3
Canadian National Railway analysis

Factor             Mainframe                       Client/Server
Timeliness         Up and running in 18 months     30 months to deploy
Uptime             More than 98 percent            Same uptime would be more
                                                   expensive and harder to achieve
Remote Support     Already supporting PCs          Higher cost to support PCs
                   running as dumb terminals       running local applications
                                                   with local databases
Database           One central database to         At least 25 local databases,
Management         maintain and update             raising thorny synchronization
                                                   issues
Network            Could use existing              Lack of proven cross-platform
Management         mainframe-based tools           management tools

************************************************************************

A "Computerworld" article last summer quoted Mark Theissen of Hughes Electronics as saying, "I haven't seen anything industrial-strength enough to take hundreds of users' pounding and still remain standing." The article went on to say that data warehouses that are completely off the mainframes are still relatively rare, and quoted Chris Erickson, president of Red Brick Systems (a company that sells database software) as saying, "We see very little interest in having the financial database in one location, marketing in another, inventory in another."[7]

Complexity

Ruben Lopez of the University of Miami cited many advantages and disadvantages of client/server in a paper presented at the 1991 CAUSE Annual Conference, detailing some of the very complex issues involved:

* off-the-shelf distributed applications not yet available
* few standards for workgroup computing
* no clearly defined lines between technologies
* political implications
* problems with performance
* problems with integration
* file-oriented data transmission
* integration of data files [8]

Clearly, client/server presents new challenges and a level of complexity never before conquered in the data processing world. Writing for "Computerworld," Johanna Ambrosio stated: "Building client/server applications may be a snap compared with actually managing them. ... [The technology] is immature at best and most users lack the needed skills. What customers are looking for is a sane way of managing geographically dispersed gear from various vendors."[9] The same article reported that Nike's users are concerned about electronic software distribution, backup and recovery, and critical resource management. Ambrosio quoted Jeremy Frank of the Gartner Group as saying that over half of all client/server projects fail and that 90 percent of those failures are due to the lack of an adequate management structure.

Perhaps the management structure needed to harness client/server technology is not yet generally within the grasp of modern management capability.

To be continued

"Success stories" abound concerning planned client/server technology. Hundreds of DP managers will tell you how they plan to displace old legacy systems with more modern and cost-effective client/server systems, but few of them can show you any examples of major or mission-critical systems that have actually been converted. Expectations are great and enthusiasm is high, but most of the client/server applications I have seen actually implemented are peripheral, non-core applications and certainly not mission-critical.

A recent example concerns Citicorp, the giant banking enterprise. Citicorp employs 81,000 people in ninety-three countries. An "Information Week" article last August quoted Citicorp officials as saying that Citicorp needs to free itself of redundant and incompatible systems and stem defections from its corporate IS division.[10] Previously a model of big-time mainframe use, Citicorp plans a shift to client/server applications, which is necessary if it is to hold its current customers and add new ones. Few people are old enough to remember twenty-some-odd years ago, when the same Citicorp announced that it would replace its enormously expensive IBM 360 Model 40 mainframes with several minicomputers and save millions of dollars each year. Citicorp installed the minicomputers, but I do not believe the mainframes ever came out. Now, over twenty years later, Citicorp seems to be in a similar situation. Is this a replay? Stay tuned.

Some emotional responses

It is one thing to begin a brand new, independent project using client/server technology, and quite another to convert large-scale legacy systems that have been developed over a decade or more. Many new client/server applications have been quite successful, as have some conversions. A large number of these successes have been either systems with relatively low transaction rates or systems that should never have been placed on a mainframe in the first place. But in many organizations, irrational and unfounded assumptions about the value and ease of implementing client/server systems flourish. In one university, the director of computing was simply directed to "get rid of the mainframe in two years!" That decision was supported by no analysis or cost justification; it was purely an emotional response to financial pressures. This situation is far from unique. In another university, a consultant proclaimed that client/server systems would provide a 100-to-1 improvement in productivity. Some consultants are trying to sell top management on "simply" converting to client/server to save big bucks. When that happens, one might ask such a consultant for a bid on the job.

Moving from large-scale legacy systems to client/server is a major undertaking that is difficult for most people to grasp. They have difficulty understanding that hundreds or even thousands of programs might require modification, integration, testing, and documentation. At the same time, DP shops are faced with rising expectations and uncontrolled growth of incompatible, departmental systems.

The evidence is mounting that movement from large-scale legacy systems to client/server is a long-term project. It is not just a matter of converting these systems, which is difficult enough in itself. Spending several years converting without also reengineering the systems would not be sensible; at the end of the process, the organization would still possess old systems. Reengineering legacy systems while moving toward client/server means even longer time-frames can be expected. Georgia Tech is converting its student records and financial systems to client/server while reengineering at the same time; the project is expected to require over seventy full-time people for a five-year period. George Washington University is also migrating its systems to client/server, but over a seven-year period.

Advice from experts

A little downsizing discretion is often the better part of valor. Experts say that headaches can be avoided simply by knowing what to downsize and what not to. Certain applications and systems will resist most attempts at downsizing, according to Ted Klein, president of the Boston Systems Group.[11] Until hardware and software become more advanced, Klein advises organizations to avoid downsizing the following:

* applications with very large databases that cannot be easily partitioned and distributed;
* applications that must provide very fast database response to thousands of users;
* applications that are closely connected to other mainframe applications;
* applications that require strong, centrally managed security and other services;
* applications that require around-the-clock availability.

Dirk Faegre, a systems administrator at Concord Group Insurance, says that downsizing has become the latest stampede in the computer industry, with a herd of hardware vendors, analysts, and information systems managers touting the benefits of trading mainframes for client/server systems. Faegre's suggestions include:

* Have a concrete reason to downsize.
* Don't just port applications, improve them.
* Think application portability during development.
* Harness the power of multiple processors.
* Think of downsizing as a productivity improvement.

Jack Cooper is one of the most respected chief information officers in the nation. Not only has he been a leader in computing technology, as president of CSX and now CIO at Seagram, but he has also been a visionary of our industry. In the middle 1970s, when most people were attempting to develop reliable batch-processing computing systems, Cooper envisioned and implemented the first IMS online systems in higher education in the nation. Now he is a pioneer in client/server technology at Seagram, having implemented two systems already, with a third in progress. Cooper indicates that client/server lacks canned tools for such things as backup and recovery, that client/server development is much more rigorous, that there are fewer people who know how to program in this environment, and that it takes about six months for an experienced programmer to become reasonably productive. Still, Cooper is an avid proponent of client/server technology and points out that it can produce tremendous scalability, the ability to design once and deploy many times, and uniformity of systems. He says Seagram has found that "developing for client/server is 12 percent to 15 percent less expensive than developing for other platforms."[12]

Same old song?

Over the last twenty years, vendors have produced hundreds of products that claimed to improve productivity by monumental amounts. If these claims had been true, we would be producing computer systems in negative time today. COBOL was to improve productivity by zillions. Mark IV would do even more. Third-generation, then fourth-generation, and now fifth-generation languages will save the day. The fact remains that the majority of the time required in most systems development projects involves communication with the user community: developing and obtaining consensus on specifications. Then, without an effective change-control mechanism (which most organizations do not have), continual change orders stretch projects beyond toleration, and the user departments lose confidence in the central DP organization. Jack Cooper's experience leads him to suggest that productivity gains of 15 percent can be expected--a far cry from the 200 percent, 500 percent, or even 1,000 percent claimed by some.

Where's the beef?

The real question that remains is how effectively client/server technology can reduce costs and improve the information technology organization's ability to respond to enterprise needs. It is clear that traditional systems development projects take too long and cost too much. It is probably true that client/server technology will allow dramatic strides in improving the situation. What is not clear is how close we are to that dream. Marc Dodge, a telecom manager for a Fortune 100 company, says, "The really hot story today is the mainframe. It turns out it really isn't dead after all. Applications requiring large data sets and geographically distributed data still need a data center."[13] Dodge goes on to say that client/server is here to stay and that although we are all stumbling, we are stumbling in the same direction.

People should not expect large savings in the early years of client/server implementation. The conclusion of the 1993 Gartner Group Symposium was, "By the time you are finished, the [client/server] exercise will probably wind up costing you as much as 50 percent more than if you had left things alone."[14] But over time, as technology and management catch up to the needs of the database community, client/server costs will come down, function will go up, and the difficulties we face today will be much more manageable. Just be careful in the meantime.

========================================================================

Footnotes:

1 Wayne Eckerson, "Firm Learns Limitations of Client/Server Systems," Network World, 19 July 1993, pp. 23-25.
2 Haig Hoaness, "Tales of the True Cost of Ownership," Corporate Computing, August 1993, p. 17.
3 Alan Radding, "The Accidental Downsizer," Computerworld, 10 August 1992, p. 66.
4 Janet L. Hyland and Mary A. Modahl, "The LAN Money Pit," in The Network Strategy Report (Cambridge, Mass.: Forrester Research, Inc., 1992), pp. 2-13.
5 Mary E. Thyfault, "The Power of One," Information Week, 16 August 1993, pp. 44-48.
6 Robert L. Scheier, "Mainframe Keeps CN On Track," PC Week, 9 August 1993, pp. 68, 70.
7 Johanna Ambrosio, "Warehouses Cling to Mainframe," Computerworld, 23 August 1993, p. 91.
8 Ruben Lopez, "Is Client/Server the Future of Information Processing?" in Proceedings of the 1991 CAUSE Conference (Boulder, Colo.: CAUSE, 1992), pp. 473-483.
9 Johanna Ambrosio, "Distributed Nets Elude IS Control," Computerworld, 23 August 1993, pp. 1, 14.
10 Bob Violino, "Citi's Surge," Information Week, 9 August 1993, pp. 42-47.
11 "Advice," Computerworld, 10 August 1992, p. 68.
12 Johanna Ambrosio, "High Spirits at Seagram," Computerworld Client/Server Journal, 11 August 1993, pp. 12-16.
13 Marc Dodge, "Client/Server May Be Ragged, But It's Not on the Ropes," Computerworld, 15 November 1993, p. 37.
14 Johanna Ambrosio, "Client/Server Costs More Than Expected," Computerworld, 18 October 1993, p. 28.

========================================================================

Martin B. Solomon is a professor of computer science at The University of South Carolina. Until recently, he was Vice Provost for Computing & Communications at The University of South Carolina System. Dr. Solomon is author or co-author of five books and many journal articles, and has been a consultant to many colleges and universities, industrial concerns, and the United Nations.