
Implementing ATM at CMU: Transition and Teamwork

This paper was presented at the 1996 CAUSE annual conference.

It is part of the proceedings of that conference, "Broadening Our Horizons: Information, Services, Technology - Proceedings of the 1996 CAUSE Annual Conference," page 4-31+. Permission to copy or disseminate all or part of this material is granted provided that the copies are not made or distributed for commercial advantage. To copy or disseminate otherwise, or to republish in any form, requires written permission from the author and CAUSE. For further information, contact CAUSE, 4840 Pearl East Circle, Suite 302E, Boulder, CO 80301; 303-449-4430; e-mail info@cause.org.

IMPLEMENTING ATM AT CMU: TRANSITION AND TEAMWORK

By: James W. Dening and Mark D. Strandskov
Central Michigan University
4 December, 1996

THE NETWORKING "SEED" IS PLANTED

In the early days of networking at Central Michigan University (CMU), in the late 1980s, the need for a structured campus backbone was becoming more apparent. Several isolated LANs were being interconnected by whatever means possible, usually a pair of obsolete 8088 PCs and some freeware IP routing software. In one case, a token ring was extended between two buildings via multimode fiber and a pair of Fibermux Magnum token ring extenders (Figure 1).

Although there was no money to implement a campus backbone, a plan was put together to show what could be done if money were available. The plan consisted of interconnecting all the isolated LANs, as well as any existing buildings, classrooms, or computer laboratories that had computers, with a 16 Megabit per second (Mbps) extended token ring network. Although the hope of any funding at that time was very bleak, people began to buy in to the idea of a ubiquitous campus backbone. People were often offended if they were left off "the plan," but they soon became elated after they had been added. What did it matter? It cost no more or less to have them added. It was merely an idea with no source of funding. More importantly, it was a "seed" beginning to take root.

(Figure 1: CMU Network Circa Spring, 1992 is missing; it can be found in the Microsoft Word version)

FUNDING BECOMES AVAILABLE

In the summer of 1992, CMU constructed the new Dow Science Building on its campus and remodeled Brooks Hall, its existing science building. As part of that construction project, funds were set aside to provide connections to places on campus of interest to the College of Arts and Sciences. The following locations were chosen as places between which high-speed communication was desired:

* Foust Hall, which housed the central mainframe and Internet access facilities
* Woldt Hall, which housed a "24x7" student microcomputer lab
* Pearce Hall, which housed the Computer Science Department
* Brooks Hall, the science building being remodeled
* Dow Science Building, the new science building

A source of funding had become available, and by the time the project was implemented, the price of technology had dropped to the point where a 100 Mbps Fiber Distributed Data Interface (FDDI) "backbone" could be employed.

THE INITIAL BUILDING BLOCKS

The College of Arts and Sciences, Computer Services, and Telecommunications worked together to define what was to be the first structured backbone and building network design. Much of the early network design was based on what peer institutions were doing. The basic philosophy was to route between buildings and bridge within the building. For CMU, this meant Cisco routers and Cabletron concentrators. All initial buildings (except Foust Hall, which had a Cisco AGS+ router) had Cisco 4000 three-port routers. The two science buildings and the Woldt Hall lab had chassis-based Cabletron MMAC8-FNB concentrators to interconnect all the wiring.

Although a distributed FDDI ring was implemented, the fiber was installed in a star configuration. Six strands of multimode fiber were distributed to each building from a central fiber distribution point, located in the Dow Science Building, with a 24-strand trunk running back to Foust Hall. All future buildings on the south end of campus would have fiber wired to this location. Six strands of multimode fiber were also run between the wiring closets within each building, starred from the main building wiring closet. Category 5 Unshielded Twisted Pair (UTP) wiring was used for all the nodal points.

Although the original intent was to bridge between the segments within the concentrators, the Cisco routers were often used as collapsed building backbones. Giving each individual department a single port on the router proved to be a mistake: some departments had hundreds of computers while others had fewer than a dozen. As bandwidth needs increased in the buildings, expansion became politically based rather than performance based.

(Figure 2: CMU Network Circa Fall, 1992 is missing; it can be found in the Microsoft Word version)

By March of 1993, the equipment had started to arrive, and before the following fall semester the routers and the shared Ethernet concentrators were operating in all five buildings (Figure 2). Cabletron's Remote Lanview network management software was used to centrally manage the network. As with most Simple Network Management Protocol (SNMP) software, the product was billed as capable of managing all the SNMP devices in the network; what most vendors neglect to mention is to what degree their product can manage other vendors' SNMP devices. When you hear the phrase "walk the MIB tree," the "S" in SNMP no longer exists.

(Figure 3: CMU Network Circa 1995 is missing; it can be found in the Microsoft Word version)

The routers were kept under maintenance, but due to the expense, the concentrators were "self-maintained" with spare parts. A true campus backbone was finally established at CMU. Over time, the self-maintenance of the Cabletron concentrators has proved to be a very good financial decision for CMU, as very few Cabletron devices have failed.

ALL ABOARD

As the benefits of the campus backbone were discovered by other departments, additional buildings were connected as quickly as funding could be found. Over the next two years, an additional eight buildings were added directly to the backbone and one was added via a T1 serial connection (Figure 3). Some buildings were easier to add, like the Industrial Engineering and Technology building, which had been completely wired with a mix of thicknet and thinnet when it was constructed in 1987. Others proved more difficult because they had no network wiring or were older buildings. The expansion was funded through remodeling projects, grants, or existing departmental funds. Cisco routers were the only devices allowed on the backbone, and their maintenance was to be paid for by the originating departments. Most other equipment was still self-maintained.

While the campus network was growing at a feverish pace, work was underway to reengineer the regional network, Michnet (Merit, Inc.). Primary and secondary communication processors based on PDP-11 technology were being replaced with Cisco routers and Livingston PortMaster Network Access Servers (NAS). The 56 Kbps link which serviced CMU's campus was upgraded over time to a pair of T1 circuits, one to Michigan State University and the other to Saginaw Valley State University. Traffic eventually reaches the greater Internet through a router at the University of Michigan. CMU's Internet traffic has at least doubled every year for the last seven years, and on-campus traffic has grown at an even faster pace.

TWEAKING AND TUNING THE PLAN

From 1993 to 1996, the FDDI backbone grew and additional network drops were added to the various buildings connected to the backbone. All of the additional locations were based primarily on the same building blocks used in the past. A second fiber distribution point was established to serve the north end of campus. Also, stackable hubs were used instead of the concentrators; the stackable hubs were cheaper, and as shared segments were getting smaller and smaller, they proved to be easier to segment. Cabletron's Remote Lanview product was replaced with Cabletron's Spectrum Element Manager, and Network General's Expert Sniffer and Foundation Manager products were added for network traffic troubleshooting.

Adjustments were also being made to network management and staffing. A systems programmer became the main "network support" person, and two computer repair technicians spent increasing amounts of time troubleshooting network problems, installing network cards in PCs and Macs, and performing user training.

CMU's networking staff had a "tiger by the tail," and so in January of 1995 questions like the following were being asked:

* How many more routers can be added to the FDDI ring?
* What is the utilization of the FDDI ring?
* Does a router have to be added each time another building is connected to the backbone?
* What alternatives are there?
* Does it ever stop growing?

Within Computer Services, these questions and others had been considered. While the FDDI utilization appeared to be quite low, on the order of 3% to 6%, it was becoming clear that continuing to add routers to the existing FDDI ring every time another building was brought "on line" was not necessarily the best solution. There was a need for a backbone technology more robust than the single FDDI backbone, one with the scalability and potential to bring multimedia applications across the network to the desktop. Asynchronous Transfer Mode (ATM) was frequently talked about, but few were doing anything with it.

Several models were considered: multiple FDDI rings, ATM to every building, ATM to every wiring closet, and an all-ATM network. After some research and many discussions and presentations with Cabletron, Cisco, and IBM, the old model of the routed FDDI ring was soon replaced with plans for a hybrid ATM/FDDI network which would evolve to a completely ATM-switched backbone in four years. New connections to the backbone and high-demand locations would be ATM attached, and the routers would be moved out to edge locations and new lower-demand areas.

THE NETWORK WASN'T THE ONLY THING CHANGING AT CMU

In the Spring of 1995, CMU's President distributed a "White Paper" on technology which created the position of an Interim Assistant Vice Provost for Information Technology, charged with responsibilities including the following:

* Develop a plan to complete the campus computer network.
* Develop a comprehensive plan for a Technology Training Center to be integrated with the library instructional resource center and the new technological library expansion plan.
* Develop a job description for a new coordinator of distance learning.
* Consider a comprehensive plan to provide every CMU student with a personal computer to assist them in their university studies.
* Develop a plan that would ensure that all faculty members have appropriate computer access.

The article _A Planning Process Addresses an Organizational and Support Crisis in Information Technology_[1], in the Summer 1996 issue of Cause/Effect magazine, depicts the organization that resulted. In essence, a matrix organization to support information technology was developed, with an Interim Assistant Vice Provost for Information Technology overseeing the Directors of Computer Services, Telecommunications, and Instructional Support. An Information Technology Planning Board, representing the various Colleges, the Dean of Students, the Library, and the Administrative Divisions, was assembled at the request of the Provost and is chaired by the Interim Assistant Vice Provost for Information Technology.

During the Summer of 1995, this Information Technology Planning Board began meeting and various planning teams were identified, including:

* Administrative Planning Team
* Faculty Technology Team
* Library Planning Team
* Network Planning Team

The Network Planning Team (NPT) is chaired by the Director of Computer Services and was brought together in February of 1996 to assist in designing and implementing the ATM campus backbone. The NPT consisted of the following representatives:

* Director of Computer Services
* Director of Telecommunications
* Assoc. Dir. Computer Services (Operations)
* Asst. Dir. Telecommunications
* Computer Services Staff Specialist (Networking)
* College of Extended Learning Staff Specialist
* Director of Purchasing
* University Library Staff Specialist
* Assoc. Dir. Residence Life
* College of Arts & Sciences Staff Specialist
* Interim Asst. Vice Provost, Information Tech.

The initial meetings served to acquaint the members of the NPT with the current status of the campus network and to prepare for the construction of a Request for Proposals (RFP) to expand the existing campus network. This was also the first official "meeting of the minds" to solidify a networking vision which would address everyone's needs in a strategic fashion. An aggressive timetable was established which, if followed, would network the rest of campus over an 18-month period.

OFF TO RALEIGH

As part of this preparation, a Network Design Workshop was scheduled with IBM, on a paid basis, in Raleigh, North Carolina in March of 1996. (The same thing could have been done with several other vendors, but since CMU already had a well-established working relationship with IBM due to the IBM Enterprise Server on campus, IBM seemed the logical choice.) Objectives of this workshop included developing a goal statement, learning enough about ATM to reach agreement on the initial and final stages of an ATM approach, and leaving the workshop with enough information to write an RFP for an ATM network. Preparation for the workshop was a great help in gathering the information needed to issue the RFP.

The first item of business at the workshop was for CMU to provide a "kickoff" presentation discussing items such as the following:

* Campus Environment
* Business Goals and Challenges
* Major Information Systems Projects
* Current Network, Network Management & Instrumentation
* Tour of the Campus
* Future Plans and Projects
* Networking Vision
* Diagram of the "Network of the Future"
* Bandwidth Requirements
* Workshop Expectations
* Workshop Objectives

After several days of intense sessions, a polished networking plan came together. The new plan called for ATM to eventually reach all wiring closets (Figure 4). In most cases, this meant an ATM workgroup switch for the main building wiring closet, while Ethernet switches with ATM uplinks would provide the backbone connectivity in the remote closets. Shared and switched Ethernet will still play a major role in CMU's network for quite some time. With the eventual goal of ATM to every wiring closet, the potential is there to ATM attach any workstation on campus. A single stackable Ethernet hub would connect to each port on the Ethernet switches. This plan provided the most scalable and cost-effective solution.

(Figure 4: New Building Blocks is missing; it can be found in the Microsoft Word version)

Fiber would continue to be distributed from the existing distribution points. The plan also called for increasing the fiber to each building, as well as between wiring closets, to 24 strands of multimode fiber and 6 strands of single-mode fiber.

Due to the existing tunnel system on campus and the cost of installing so much extra unused fiber, a decision was made to scale back the minimum fiber requirement to 12 strands of multimode fiber in locations easily accessible via the campus tunnel system. In other cases, where fiber would have to be direct buried, the full 24 strands of multimode fiber and 6 strands of single-mode fiber would be installed.

TIME TO WRITE THE RFP

The trip to Raleigh proved to be very worthwhile, and enough information was gathered to develop the RFP. The RFP included the following:

I. Introduction
   A. CMU Background Information
   B. Network Background Information
   C. Scope of Project
   D. CMU Hardware/Software Information
II. Instructions and General Conditions for Submitting a Proposal
III. General Terms and Conditions of Agreement
IV. Attachments
   A. Network Backbone Diagram
   B. Installation Schedule
   C. Listing of Campus Buildings
   D. Building Purpose
   E. Campus Map and Fact Sheet
   F. Wiring Closets
   G. Network Ports
   H. Completed Network Diagram
   I. Vendor Questions
   J. Price Summary Sheets
   K. _Data Communications_ ATM Stress Test Article Reprint
   L. Vendor Qualification Questionnaire
   M. Letter of Intent to Bid

The RFP was issued in April, and proposals were due in May. Eleven proposals were received, all of which were professional and responsive. After a lengthy review process, the NPT subcommittee, composed of a representative each from the College of Arts and Sciences, the Computer Services Department, and the Telecommunications Department, selected IBM as CMU's preferred vendor for the ATM network. Several key points led to this decision:

* IBM had the most comprehensive and complete proposal submitted.
* IBM's proposal was very aggressively priced.
* IBM included extra items like an ATM Video Distribution System.
* IBM fully addressed security concerns, both internal and external.
* A few areas in IBM's proposal were lacking; however, IBM addressed them by supplying the necessary equipment until the final products were released.

* The IBM-developed "Switch-on-a-Chip" looked like an excellent performer.
* The Multiprotocol Switched Services (MSS) product provided routing functions directly in the switch.
* IBM was the most aggressive and helpful in getting information to CMU during the very early stages of the project.
* CMU's history with IBM's on-site service has been excellent.
* IBM was ranked #1 by all members of the NPT subcommittee.

Many of the other proposals were lacking items like a token ring solution, a 25 Mbps ATM solution, a good management platform, or service and support.

THE PILOT PROJECT (PROJECT CESSNA)

After the decision was made and funding became available in late August of 1996, the ATM pilot project (Project Cessna) was underway. A timeline was set to keep the project moving forward and on track over a three-month time frame. The project was broken up into several phases:

Phase 1:
* Establish core ATM backbone network
* Internet connectivity from a 25 Mbps ATM attached PC
* Internet connectivity from an Ethernet attached PC over the ATM backbone
* Configure Netview 6000 management station, both Ethernet and ATM attached
* Configure a Forum-compliant LAN Emulation (LANE) server
* Configure 8281 ATM-to-Ethernet bridge

Phase 2:
* Connectivity with a 155 Mbps ATM attached NetWare 4.1 server

Phase 3:
* Add and configure the MSS
* Configure the second 8260 hub
* Add second 8281 bridge (ATM - Ethernet)

Phase 4:
* Add firewall
* Reconfigure 8281 bridge for Token Ring

Phase 5:
* Start replacing 8281 bridges with 8271/8272 switches with ATM uplinks

A series of tests was compiled to try out during Project Cessna. Most of the tests involved Ethernet, token ring, and ATM attached PCs and Macs connecting to Ethernet, token ring, and ATM attached servers. Novell NetWare, Microsoft Windows NT Server, AppleShare, and UNIX servers were all scheduled to be tested.

Phase 1 was a very critical phase for CMU; it revolved around having at least one test which would show that ATM was a viable alternative as a network infrastructure.

This test was to browse the World Wide Web from an ATM attached PC. More importantly, this was to be accomplished prior to the meeting of CMU's Board of Trustees which would decide the fate of funding for the ATM network expansion.

Shortly after the equipment arrived, there was a flurry of activity to get everything installed and configured. Although it seemed like a very simple test, quite a bit of background work needed to be accomplished first:

* The hardware needed to be installed.
* The backbone needed to be established.
* An ARP server and a LAN Emulation Server (LES) had to be implemented. Without the MSS, the ARP server was installed on the RS/6000 and the LES was installed on the IBM 8285 workgroup switch.
* LANE clients were installed on all the switches.
* An IBM 8281 ATM LAN bridge needed to be configured.
* The network management station had to be installed and configured.
* The network management software needed to be updated.
* The test computer needed to be set up and configured.

Most of this was accomplished after a couple of marathon days. Nine days prior to the Board's meeting, Phase 1 was accomplished.

With Phase 1 completed, it would only seem reasonable to expect to work on Phase 2, but even the best of plans sometimes need to be altered. IBM was able to deliver a prerelease MSS. The MSS is a very critical piece for several reasons. First, it eliminates the need for the isolated, collapsed backbone router. It also provides support for multiple LESs, the ARP server, the Broadcast and Unknown Server (BUS), and the LAN Emulation Configuration Server (LECS). Implementing the MSS became a bigger priority, so the original Phase 2 was postponed.

This was also the time to readdress the network management situation. As the network expanded, the Windows 95 based network management software was beginning to have difficulty keeping up; a larger and more robust platform was needed. Netview 6000 with ATM Campus Manager will provide this functionality for CMU. While it has definitely proven to be a more powerful platform, it is not as "user intuitive" as the older Windows based software.

IBM has a feature implemented in their switches called Switch-to-Switch Interface (SSI). Much of the functionality of SSI will be released in Forum-compliant PNNI Phase 1. With multiple SSI paths between switches, a call can be rerouted on the fly if a link fails. In a simple test, there was only about a one-second loss of service while the call was reestablished.

BOUNCING DOWN THE RUNWAY

It would not be fair to discuss only the positives without mentioning some of the pitfalls that occurred:

* Two days after Phase 1 was initially working, the hard drive in the test computer failed.
* A new ATM test computer, a 166 MHz Pentium, required new drivers before the card would be recognized.
* Due to a bug in Microsoft's TCP/IP stack, the maximum frame size for Token Ring LAN Emulation had to be set smaller than the 4544-byte default. Set to 1462 bytes, it worked for everything tested.
* The RS/6000 used for the pilot project failed to mount the root partition after it had been restarted one day. Fortunately, it was resurrected quickly, and more space was allocated to the root partition.
* ATM adapters for most Apple Macintoshes are not available. The only option at the time was a PowerMac with a PCI bus.
* There was some confusion regarding the particular chipset on the Interphase 155 Mbps ATM adapter. The older chipset would not support the Novell drivers. It turned out that although the card used an old model number, it was, in fact, an adapter with the newer chipset. Although problems persisted for a while, they appeared to be due to a faulty adapter and not the switch, cabling, or drivers.

RESULTS OF THE PILOT PROJECT AND LESSONS LEARNED

* ATM is still quite new and constantly evolving. The installation base of most vendors is small, but the major ATM vendors have put considerable resources into ATM.
* Once the ATM devices were working, they worked well. The performance was good and the equipment ran flawlessly.
* Updated drivers are a necessity. There was not a device or card installed that did not have newer drivers available. In almost all cases, the new drivers were necessary to implement a new feature or fix a problem. Although ATM to the desktop has worked out well, a large-scale implementation of ATM to the desktop at CMU will need to be carefully controlled due to the maintenance of the drivers.
* Develop a good partnership with a vendor. This is really a key point. Without an extremely good working relationship with a vendor, implementing ATM can be frustrating. For CMU, the partnership with IBM worked extremely well. If a problem came up, the right people were brought in to resolve it. This included networking, hardware, and software specialists; IBM's test engineers in Raleigh, North Carolina; as well as the developers of the ATM drivers.
* Expect the vendor to learn as well. Most vendors have ATM offerings, but few have a large installed base.
* A network planning workshop can be a huge help. If nothing else, it helps organize your thoughts into a single cohesive plan. With input from all university divisions, more people will "buy in" to the overall plan.
* There is nothing better than a good plan. Various influences will affect the network plan; however, with good building blocks and core concepts well thought out, reengineering is much easier. A good plan also allows others to "buy in," and it should consider strategic as well as technical issues based on the entire campus, not just individual departments.
* Plan on a lengthy pilot period to familiarize yourself with the equipment.
* Stick with industry standards as much as possible, but do not discount proprietary solutions. Many proprietary solutions are simply pre-Forum versions of compliant standards.
* A good management solution is a necessity.
* Plan the appropriate fault tolerance based on the nature of the equipment, especially in backbone and building backbone switches. In most cases, more and more dependence is being placed on the network, and the effects of any downtime can be disastrous. Do not forget items like electrical circuits, fiber paths, power supplies, etc.
* The network is an ever-evolving entity, not a one-time project. Most universities' and corporations' bandwidth doubles at least every eighteen months, and in many cases more than once a year. To keep up with this type of growth, recurring funding must be in place and the networking technology used must scale to meet user demands (see the sketch after this list).
* Stay two years ahead of the technology. It is extremely difficult to predict where technology is going to be in the future; the only constant is that it will continue to grow at a rapid pace. By at least attempting to anticipate the technological needs of the future, the network will be able to accommodate them.
* Not many vendors make ATM adapters for Apple Macintoshes. There are currently only four models of Apple Macintosh which will take an ATM adapter, and they all require a PCI slot.
* Use a reasonably powerful (e.g. Pentium class) computer if it is to be ATM attached. The initial 486 DX-33 computer was in general too slow. This was not a factor of the ATM adapter, but of general CPU performance.
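As a rough illustration of the growth claim above (this sketch is not from the original paper, and the figures are illustrative only), traffic that doubles every eighteen months grows roughly tenfold over five years, while annual doubling yields a 32-fold increase:

```python
# Illustrative growth arithmetic for the bandwidth-doubling claim above.
def growth_factor(years: float, doubling_months: float) -> float:
    """Total growth after `years` if traffic doubles every `doubling_months` months."""
    return 2 ** (years * 12 / doubling_months)

print(f"{growth_factor(5, 18):.1f}x over 5 years")  # ~10.1x with 18-month doubling
print(f"{growth_factor(5, 12):.1f}x over 5 years")  # 32.0x with annual doubling
```

Either way, the arithmetic makes the case for recurring funding rather than one-time projects.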

THE UPDATED PLAN, AGAIN...

Given that the available funding for the next expansion was less than the original plan required, a new plan was needed. There were four main goals to accomplish:

* Get as many additional nodes as possible connected to the campus network
* Get as many additional buildings as possible connected to the backbone
* Provide bandwidth relief to problem areas
* Allow distributed technical staff to work with the new technology

The end result of this revised plan should be 1,100 additional active nodes and connectivity via multimode fiber to the last six major academic buildings (Figure 5). By the summer of 1997, twelve buildings should be on the ATM backbone and twelve on the FDDI backbone.

All of the major distributed support facilities will have an ATM switch for connectivity of high-speed servers and workstations. The end result should look like Figure 5.

(Figure 5: Building Network Connectivity (shaded boxes indicate ATM switches) is missing; it can be found in the Microsoft Word version)

The implementation of this plan will require a significant amount of coordination with the remote staff. As at most academic institutions, making changes in the middle of a semester is not recommended. There are several issues that will have to be resolved for this transition to take place.

The new plan is to switch as much as possible and route only when necessary. Switching usually has a latency measured in microseconds, whereas routing usually has a latency measured in milliseconds. By switching as much as possible, the router can be eliminated as a potential bottleneck. An Emulated LAN (ELAN) would be assigned to each college, plus one for a general academic area and one for the administration. Routing would occur only between the ELANs.

In order to accomplish this, CMU's Class B IP address will be subnetted using a complex subnet mask. To date, addresses have been assigned only from the lower half of the range, using an eight-bit subnet mask (254 hosts per subnet). The upper half of the range will be subnetted using a four-bit subnet mask (4094 hosts per subnet), creating eight subnets in the upper address range (a worked sketch of this scheme appears at the end of this section). Eventually, as lower addresses are freed up, more blocks will be created in the lower address range. To accomplish this variable-length subnetting, a routing protocol other than RIP will need to be employed; OSPF will be used to address this need. To handle the issue of readdressing all of the hosts in these areas, many of these locations have already implemented Dynamic Host Configuration Protocol (DHCP) servers, making the job easier. The main DHCP server for campus uses Competitive Automation's JOIN software. The next release of their server is supposed to include a server-to-server protocol that will allow redundant DHCP servers to issue addresses out of the same address space. Although this is still merely "vaporware," the plan would call for all the remote DHCP servers to sync up with the main server.

The downside to flattening the network has always been dealing with broadcasts. IBM has a service called Broadcast Manager, which sends the majority of broadcasts directly to their destination without the need to flood the network, thus eliminating the broadcast problem.

Another issue to be resolved is how to interconnect the FDDI and ATM networks. One method would be to use the existing routers that are ATM capable. Another would be to switch between the FDDI and ATM networks. The downside to the first method is that latency becomes an issue again, so the latter will probably be employed.
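The short Python sketch below (not part of the original paper; the 172.16.0.0/16 prefix is a placeholder, not CMU's actual address block) works through the subnet arithmetic described above: an eight-bit subnet field in the lower half of a Class B yields /24 networks with 254 usable hosts each, while a four-bit subnet field in the upper half yields eight /20 networks with 4094 usable hosts each.

```python
# Illustrative subnet arithmetic for a Class B address split as described above.
# The 172.16.0.0/16 prefix is a placeholder, not CMU's real address block.
import ipaddress

class_b = ipaddress.ip_network("172.16.0.0/16")
lower_half, upper_half = class_b.subnets(new_prefix=17)

# Lower half: 8 subnet bits -> /24 networks, 254 usable hosts each.
lower_subnets = list(lower_half.subnets(new_prefix=24))
print(len(lower_subnets), lower_subnets[0].num_addresses - 2)  # 128 subnets, 254 hosts

# Upper half: 4 subnet bits -> /20 networks, 4094 usable hosts each.
upper_subnets = list(upper_half.subnets(new_prefix=20))
print(len(upper_subnets), upper_subnets[0].num_addresses - 2)  # 8 subnets, 4094 hosts
```

Because the two halves use different prefix lengths, a classless routing protocol is required, which is why the plan moves from RIP to OSPF.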

FUTURE DIRECTIONS FOR CMU

* Network one additional residence hall quad per summer. Of CMU's 17 residence halls, only one quad, or four buildings, has been networked. There are four Category 5 UTP network drops to each four-person suite, with two jacks active; the other two jacks can be activated upon request. All the network drops are connected via 24-port Ethernet hubs, and the individual hubs are connected via an IBM 16-port Ethernet switch. Each switch will be attached to the backbone via a 155 Mbps ATM uplink.
* Network the on-campus apartments. The apartments will have the same type of connectivity as the residence halls, but due to their detached and distributed nature, the wiring will be much more difficult. There will probably be more discussion before a final plan is implemented.
* Get network connectivity to all buildings. Many buildings do not have network connectivity yet. Before the summer of 1997, all major academic buildings will be attached to the network, and most of them will be completely wired.
* Work ATM into more wiring closets. This will ultimately allow CMU the greatest flexibility for the network. As ATM applications begin to appear, a larger number of locations will benefit, or at least have the potential to benefit.
* Connect more users via ATM. This will probably happen at a slower pace than the backbone itself. Ethernet will be around for a long time at CMU. With the added maintenance requirement of updating adapter drivers, many will probably opt to wait for ATM development to stabilize further. The locations most likely to have ATM attached workstations are the Technology Learning Center, the multimedia labs, and the CAD/CAM labs.
* Build more fault tolerance into the network. It is unclear whether you can ever have too much fault tolerance. As dependency on the network increases, the need for fault tolerance increases. At CMU, there are varying levels of fault tolerance planned for the network, with backbone nodes being the most critical and individual workstation nodes being the least critical.
* Connect more buildings via fiber. This allows for a more scalable transition from lower speeds to higher speeds.
* Add native ATM applications to the network. Native ATM applications are beginning to be released, and many of them will hopefully have a tremendous impact on the education community.
* Develop a plan to incorporate CMU's major College of Extended Learning (CEL) locations. CMU CEL has a very extensive off-campus educational involvement throughout the world. There are almost as many students enrolled through CEL as there are students attending CMU's main campus.

It has been a long and exciting journey for CMU, started from nothing more than a "seed." "Big oaks from little acorns grow."

============================================================

ENDNOTES

[1] Nelson, Keith R., and Davenport, Richard W., "A Planning Process Addresses an Organizational and Support Crisis in Information Technology," Cause/Effect, Summer 1996, pp. 26ff.