
FDDI Fiber Optic Backbone Network Development Plan

CAUSE INFORMATION RESOURCES LIBRARY

The attached document is provided through the CAUSE Information Resources Library. As part of the CAUSE Information Resources Program, the Library provides CAUSE members access to a collection of information related to the development, use, management, and evaluation of information resources (technology, services, and information) in higher education. Most of the documents have not been formally published and thus are not in general distribution.

Statements of fact or opinion in the attached document are made on the responsibility of the author(s) alone and do not imply an opinion on the part of the CAUSE Board of Directors, officers, staff, or membership.

This document was contributed by the named organization to the CAUSE Information Resources Library. It is the intellectual property of the author(s). Permission to copy or disseminate all or part of this material is granted provided that the copies are not made or distributed for commercial advantage, that the title and organization that submitted the document appear, and that notice is given that this document was obtained from the CAUSE Information Resources Library. To copy or disseminate otherwise, or to republish in any form, requires written permission from the contributing organization.

For further information: CAUSE, 4840 Pearl East Circle, Suite 302E, Boulder, CO 80301; 303-449-4430. To order a hard copy of this document, contact CAUSE or send e-mail.

University of Alberta Fiber Optic Backbone Network Development Plan
April 20th, 1992

Computing and Network Services (CNS)
Data Communications & Networks
Room 103 General Services Bldg
Telephone 492-9327

Abstract: The purpose of this document is to outline the FDDI technology and present how that technology can be introduced to the campus network environment. Particular attention has been paid to the funding aspects of a phased implementation, together with an advisory structure that can better form a partnership between the user constituencies and Computing & Network Services. With this document, authorization is being sought to proceed with the Phase 1 and Phase 2 implementation of the FDDI Backbone Network. Timeliness is of the essence if the completion of Phase 1 is to be achieved by December 31, 1992.

Distribution:
Dr L Stanford, Vice President (Student & Academic Services)
Dr M Beltrametti, Director CNS
B Silzer, Chair - Networking Advisory Committee

TABLE OF CONTENTS

Executive Summary

Architecture
    Expectations & Limitations
    FDDI Architecture
    FDDI Layout on Campus
    FDDI Network Management

Implementation
    Phase 1 Route
    Fiber Allocation
    Concentrator & Router Selection
    Phase 1 to Phase 4 Cost Analysis
    Phase 1 Implementation Outline

Funding
    Capital Funding
    Operating Funding
    Staff Resources

Appendix
    Networking Advisory Committee
    Glossary

Executive Summary

CNS committed in its strategic plan [1] to install a centrally funded, high speed, high capacity, campus-wide fiber optic backbone network using FDDI technology. The commitment included providing the support required to install, upgrade, maintain and manage the backbone network.

Various initiatives over the past several years have focused on the pressing need for an improved data communications network for the campus, and other initiatives have identified viable technical solutions which could meet that need. Vice Presidential planning task forces [2], a PACCR review of University Computing Systems [3], and more recently the CNS strategic planning efforts [1] all identified networking as tactically the priority one project. Other CNS initiatives [4] [5] had previously focused on fiber optic technology as a basis for a campus network.

The project of installing such a backbone network is large, both technically and financially. The only realistic way it can be accomplished is to install the network over a period of a few years. This requires a campus commitment to a multiyear phased network implementation and financial plan. This document provides a vehicle to publicize and coordinate that commitment.

CNS also undertook to establish a committee with representation from the key service providers, user constituencies, and financial authority. The committee is to assess the needs of appropriate campus networking infrastructures from a strategic perspective and to participate and advise in the translation of these requirements into networking plans for the University community. This document serves as a vehicle that CNS can use to pass issues to the committee for its advice, to record that advice, and to record achievements. Some of the document's contents are dynamic and will change over the project's lifetime. For example, individual sections have already been given to the committee and in some cases have been acted upon by the committee. These sections have been updated to reflect the action taken, and the sections reissued.

This document consists of three parts, which provide an overview of the architecture, details of the implementation, and details of the funding. The appendix includes information about the Networking Advisory Committee and a glossary of the telecommunication terms used in the document.

While the strategic need for an improved data communications network capability has been well articulated, how that need can be met with today's rapidly changing network technology must be stated clearly. There must be no misunderstandings between what challenges can be serviced by a technology and how those challenges can be applied and administered at the University of Alberta. Expectations & Limitations states what the FDDI Backbone Network will do and what it will not do.

FDDI (Fiber Distributed Data Interface) is an ANSI fiber optic based networking standard. It allows a variety of options in its architecture. FDDI Architecture provides an overview of the standard and outlines some of the fundamental architectural choices CNS has made. FDDI Layout on Campus describes the project phases, outlines some of the design choices such as the campus route and the location of ring nodes, and serves as a high level implementation plan. FDDI Network Management outlines some of the considerations in managing networks and describes some choices to be made in managing the backbone network.

Although there are other backbone network technologies that are suitable for a campus environment and have been implemented at other universities, the FDDI ring of trees was chosen because it best meets the University of Alberta's needs and vision. FDDI is non-proprietary, thereby allowing us to acquire products from whichever vendor offers the best combination of price, performance, and features. FDDI provides an architected solution designed with graceful expansion in mind, and a robust design that automatically minimizes the effects of failure. The ring of trees topology matches the topology of the Campus Utility Corridor system, minimizing the fiber costs. The 100 megabit FDDI technology provides a longer term, higher capacity solution than does either the 16 megabit Token/Ring or the 10 megabit Ethernet network technology.

A number of implementation decisions are required, and several sections deal with these decisions. There were three alternatives for the routing of the Phase 1 portion of the network; the merits of the alternatives and the choice of the North Route alternative are described in Phase 1 Route. The fiber optic cable used for the backbone ring can contain any number of fiber optic strands. Fiber Allocation contains a recommendation for a 24 strand cable, describes how the strands will be used, and gives the factors that led to the recommendation. There are a variety of concentrator and router products available. Concentrator & Router Selection enumerates the factors used in evaluating the products and describes the process being used to select concentrators and routers.

While the implementation plan for the project must be comprehensive and detail the funding requirements and anticipated schedules for the complete implementation, it must focus specifically on the next contemplated phase and the faculties and departments benefiting from the increased connectivity in that phase. Phase 1 to Phase 4 Cost Analysis provides estimates of the funding requirements on both levels: for the completed project and by each phase. Phase 1 Implementation Outline, by contrast, describes only the details of implementing the first phase of the backbone network. Although the implementation plans for all phases are outlined broadly in FDDI Layout on Campus, detailed implementation plans for subsequent phases will depend on a complete assessment of achievements met and opportunities to be gleaned in accommodating any technological advancements occurring during a previous phase.

Some buildings and facilities are not accommodated in this plan. These include campus buildings which are not accessible from the Utility Corridor system (such as Ringhouse 1 and Trailer Complex #2), off-campus University facilities (such as the Faculte St. Jean and the Edmonton Research Station), and some near-campus buildings (such as Campus Tower) in which space is leased by University departments. A complete list of the excluded buildings is found in FDDI Layout on Campus. They are excluded because the fiber optic cable is being laid in the campus Utility Corridor system and there are no plans to do any trenching to lay cable to the excluded sites. However, other network technologies, such as leased T1 telephone lines, public data networks, and radio links, are being examined to see how well they could service the excluded sites by acting as extensions to the FDDI Backbone Network.

Some buildings are included that are only of an associated nature to the University; an example is the Cross Cancer Clinic. Although a cost of connection is included in the tables, it must not be construed that the University is financing the provision of connectivity to these agencies. Any connection and supporting infrastructure would have to be 100% externally funded by the agencies themselves.

Capital Funding identifies the likely sources and outlines the staging of the capital fund appropriations of $3.25 million for the project. Operating Funding outlines where the annual costs of maintaining the network hardware and software are to be found and discusses the issue of a probable future shortfall. Staff Resources identifies that transitory additional staff will be required to relieve existing personnel, so that they may concentrate on the implementation of the FDDI Backbone Network.

In summary, the four phases of the FDDI Backbone Network project are estimated to cost:

                  Capital        Operating
Phase 1       $   230,543     $   11,600
Phase 2       $   971,625     $   68,250
Phase 3       $   584,420     $   28,000
Phase 4       $   366,410     $   13,200
              $   373,700     $        0
All Phases    $ 2,526,698     $  121,050
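The per-phase figures can be cross-checked against the All Phases totals. The sketch below simply transcribes the figures from the table (including the $373,700 capital item carried with $0 operating) and verifies the arithmetic:

```python
# Cross-check: per-phase cost figures should sum to the
# "All Phases" totals given in the Executive Summary table.
capital = [230_543, 971_625, 584_420, 366_410, 373_700]
operating = [11_600, 68_250, 28_000, 13_200, 0]

assert sum(capital) == 2_526_698     # All Phases capital
assert sum(operating) == 121_050     # All Phases operating
```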

It is as yet premature to reduce the Capital Funding budget, even though there is a $700K difference between the capital funds budgeted at $3.2M and the projected expenditure of $2.5M. The budget is taken from a March 1991 market evaluation; the projected expenditure is from a March 1992 market survey. While it is gratifying that costs are demonstrably falling, it must be realized that no purchase requisitions have been written to date and there is as yet no real evidence justifying a reduction in the greater figure.

Authorization is sought from the Vice-President (Student & Academic Services) to proceed with the implementation of Phases 1 and 2.

References

1. "Networking Computers and People on Campus and Beyond" - Strategic Plan for Computing and Network Services - February 1992
2. "Towards a Strategic Plan for Telecommunications and Networking at the University of Alberta" - report of the UCAG Task Force on Telecommunications and Networking - May 9, 1991
3. "Telecommunications" - appendix B of the UCS PACCR report - Fall 1990
4. "Campus Backbone Network Proposal" - report of the UCS Network Planning Committee - March 6, 1991
5. "FDDI Fiber Pilot - A Proposal" - Dwight Kruger - October 26, 1989


Expectations & Limitations

The distributed computing environment is rapidly becoming the primary computing resource for most students and researchers on the campus. Administrative units are increasingly mounting processes on Local Area Networks and seeking ways to share access to information with their peers across the campus. It is from this basis of fundamental need that the implementation of the FDDI Backbone Network springs. The UCAG Task Force Reports, in articulating the strategic needs for computing on the campus, without exception identified networking as the number one priority.

Network technology is changing rapidly worldwide, from the choice of physical hardware that is available and the supported protocols, to the extended services that can be operated across an advanced implementation. It is important to carefully marry strategic need to purchasable advanced technology that is relevant and functional. Sometimes misunderstandings occur between what challenges can be serviced by a technology and how those challenges can be applied and administered in the University of Alberta environment. The following sections attempt to limit these potential misunderstandings.

The FDDI Backbone Network Will:

- Function as the primary network for digital computer communications on the campus.
- Run on a multi-strand, multimode, fiber optic cable plant that will be laid in the campus utility corridor system.
- Be able to reach, as the need arises, into the basement of all of the buildings serviced by the campus utility corridor system.
- Architecturally support the connection of several Local Area Networks within each reachable building.
- Support the Ethernet and Token/Ring types of Local Area Networks.
- Be able to route IP (Internet Protocol) packets. Selected router equipment may allow routing of AppleTalk Phase II, Novell's IPX, and Ungermann-Bass XNS protocols. The selected routers will be able to bridge additional protocols.
- Have the capacity to absorb the bandwidth presented by multiple, simultaneous connections.
- Provide a secure path between (potentially non-secure) Local Area Networks.
- Incur a network connection charge upon departments of $8,000 plus any LAN extension costs.
- Support the attachment of FDDI Local Area Networks as a future consideration.

The FDDI Backbone Network Will Not:

- Have an unlimited number of spare fibers in the cable for other non-CNS applications. As the University does not face the costs of trenching when laying cable, the purchase of additional fibers on speculation becomes a poor investment. (Ref. Fiber Allocation)
- Service buildings away from the campus utility corridor system (e.g. Rutherford House, Faculte St. Jean and the Edmonton Research Station). There are no plans to do any trenching. (Ref. FDDI Layout on Campus for a complete list of excluded buildings)
- Be installed in a building until the need arises to attach a departmental Local Area Network that resides in the building. Scheduling will be dependent on personnel and funding availability.
- Extend beyond the basement to service other floors of buildings. Departments will be responsible for paying the costs of extending their Local Area Networks to the basement FDDI router site.
- Allow the direct connection of workstations, servers, or hosts to the backbone. Instead, such devices must be connected via a router.
- Allow the connection of proprietary Local Area Networks, such as ArcNet.
- Route any protocol off campus other than IP (Internet Protocol).
- Route or bridge any protocols that might compromise the security of the backbone.
- Support protocols such as IBM/SNA, DECnet or OSI. The OSI protocol is certainly a future consideration as the standard matures.
- Bridge mismatched protocol layers above those routed. For example, the FDDI backbone will not translate between AppleTalk Filing Protocol and Sun Network File System (as does a GatorBox).
- Improve end-to-end performance from a single site perspective. Communication through a workstation, departmental LAN, router, FDDI, router, destination LAN, and server will be slower than the speed of the departmental LAN. A connection that passes through 4 electronic devices and 3 networks has to be slower than one through 2 electronic devices and 1 network.
- Automatically, just through its presence, allow an application on one workstation to communicate successfully with an application on a host. Users must be careful to select applications that are designed to interoperate, using protocols that the backbone routers can manage.
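The end-to-end performance point can be made concrete with a rough latency model. This is an illustrative sketch only; the frame size and the 0.5 ms per-device forwarding delay are invented figures, not measurements of any campus equipment:

```python
# Illustrative latency model for the multi-hop path discussed above:
# workstation LAN -> router -> FDDI backbone -> router -> destination LAN.
def end_to_end_latency(frame_bits, link_bps, per_device_s, devices):
    """Serialization delay on every traversed network, plus a fixed
    forwarding delay for every electronic device on the path."""
    serialization = sum(frame_bits / bps for bps in link_bps)
    return serialization + per_device_s * devices

# One 12,000-bit frame over LAN (10 Mb/s) -> FDDI (100 Mb/s) -> LAN (10 Mb/s)
# through 4 devices, at an assumed 0.5 ms per device.
multi_hop = end_to_end_latency(12_000, [10e6, 100e6, 10e6], 0.0005, 4)
single_lan = 12_000 / 10e6      # the same frame staying on one 10 Mb/s LAN
assert multi_hop > single_lan    # 3 networks and 4 devices is slower
```

However the assumed figures are varied, the multi-hop path can never beat the single-LAN case, which is the limitation the list above records.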

FDDI Architecture

FDDI (Fiber Distributed Data Interface) is an ANSI fiber optic based networking standard written by the ASC X3T9.5 Task Group in 1990. An FDDI ring network consists of a number of serially attached stations connected by a transmission medium (fiber optic cable in FDDI) to form a closed loop. An active station transmits information sequentially, as a stream of symbols, to the next active station on the ring. As each active station receives these data symbols, it regenerates and repeats them to the next active device on the ring (its downstream neighbor).

An FDDI network operates at 100 megabits per second (Mbps) for high speed data transfer, has a timed token passing protocol for efficiency, can use dual counter-rotating parallel rings for reliability, and can operate at distances of up to 200 kilometers for reach. These characteristics address the severe networking capacity and reach bottlenecks that will be experienced on the University campus as more users are added to the campus network, the computing power of smaller desktop systems grows, the data traffic on existing campus networks increases, more client/server computing facilities are installed on campus, the use of graphics intensive applications increases, more local area networks need to be interconnected, and complex networks span longer distances on campus.

FDDI actually consists of four standards that define the components of FDDI: Physical Layer Medium Dependent (PMD), Physical Layer Protocol (PHY), Media Access Control (MAC), and Station Management (SMT). These standards define several types of networking devices: wiring concentrators (CON), dual attachment stations (DAS) and single attachment stations (SAS). All three device types act as connection points to the FDDI network, with the wiring concentrator providing connections for the attachment of multiple devices to the FDDI ring. These devices allow users to construct various fiber optic network configurations based on the FDDI standard.
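The station-to-station repetition described above can be illustrated with a toy model. The station names are hypothetical, and real FDDI stations repeat symbols in hardware; the sketch only shows the visit order of a frame around the loop:

```python
# Toy model of frame repetition on an FDDI ring: each active station
# regenerates and repeats symbols to its downstream neighbour until the
# frame returns to its originator, which strips it from the ring.
stations = ["A", "B", "C", "D"]     # ring order; A's downstream is B

def path_of_frame(origin):
    """Stations a frame visits, in order, after `origin` transmits it."""
    i = stations.index(origin)
    n = len(stations)
    return [stations[(i + k) % n] for k in range(1, n + 1)]

# path_of_frame("B") -> ["C", "D", "A", "B"]
```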

Figure 1: FDDI Ring of Trees

As part of the FDDI standard, ANSI permits a number of topologies, which include standalone concentrator with attached nodes, tree of concentrators, dual counter-rotating ring, and dual ring of trees. The recommended topology is the ring of trees because it is the most robust and flexible. A ring of trees consists of a dual counter-rotating ring with wiring concentrators used to provide connections to the FDDI ring (Fig. 1). The concentrators are the roots of the trees, and the single rings between the concentrators and single attachment stations are the branches of the trees.

The dual counter-rotating ring provides redundancy in the ring design. Data normally flows only on the primary ring. If either a device on the primary ring fails or the primary ring fiber fails, continuity of the ring is restored by wrapping the primary ring to the secondary ring to maintain the transmission path and isolate the fault. The wrapping is done by the dual attached stations on either side of the fault when they reconfigure after detecting the fault (Fig. 2). Although FDDI limits the total fiber length to 200 km, the dual counter-rotating ring topology effectively doubles the media length in the event of a ring wrap, so the actual length of each ring is limited to 100 km. If multiple faults occur, the ring is fragmented into two or more segmented rings; although each segment is fully functional, there is no access between the segments.

Figure 2: Ring Wrapping on a Fault

A wiring concentrator permits additions, deletions and changes without disruption to the network. Because the concentrator is an active device, it can control the physical topology of the network and is able to reconfigure the network as necessary by inserting or removing devices connected to it. This capability also allows the concentrator to play an active role in fault isolation, as it can bypass inactive or defective stations as required. This means the network can sustain the loss of all stations connected to the concentrators without losing ring integrity.

The stations connected to the concentrators in a ring of trees topology can be bridges, routers, user devices equipped with an FDDI adapter, and other concentrators. There can be up to 500 of these stations, including the concentrators on the ring, on an FDDI network. The maximum distance allowed between any two stations is 2 km. Bridges and routers provide the means to connect an FDDI network with other local area networks. Bridges act as links between local area networks in an extended LAN environment, and routers are dedicated devices used to route messages between systems. Routers are sensitive to the selected protocols that they manage and are able to direct data packets onto the right links to take the packets to their intended destinations.
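The wrap behaviour can be sketched in miniature. This is a loose illustrative model, not FDDI's actual reconfiguration protocol; it only shows that after a single fault, traffic travels out along the primary ring and back along the secondary one, so every station stays reachable over a roughly doubled media path:

```python
# Loose sketch of a single ring wrap: after a fault on one link, the
# two dual attached stations beside it wrap primary to secondary,
# joining the two counter-rotating rings into one longer single ring.
def wrapped_order(stations, fault_after):
    """Visit order once the link following stations[fault_after] fails."""
    n = len(stations)
    start = (fault_after + 1) % n
    outbound = [stations[(start + k) % n] for k in range(n)]
    return outbound + outbound[-2::-1]    # back along the secondary ring

# A fault after station 1 in a 4-station ring still reaches everyone:
# wrapped_order([0, 1, 2, 3], 1) -> [2, 3, 0, 1, 0, 3, 2]
```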
The FDDI standard also specifies two mechanisms that raise the level of reliability and availability of the network. Concentrators attached to the dual counter-rotating ring can be equipped with an optical bypass relay that effectively short-circuits the concentrator if it fails; in this case the ring is not wrapped. Stations connected to the concentrators can be dual homed, or connected to two different concentrators, so that if one of the concentrators or the path through it fails, the station always has an alternate path. Both mechanisms have their disadvantages: they increase the cost and complexity of the network and carry some technical penalties. For example, the optical bypass relay reduces the maximum distance allowed between stations to 1 km.

Whatever topology is chosen, FDDI remains a token passing ring. There is one token in an FDDI network, and it denotes the right to transmit data. A station connected to the ring can transmit data when it receives the token, and continues to transmit until either there is no more data to transmit or a timer has expired. After that, the station places the token back on the ring, allowing other stations to transmit data.

Fiber optic cable is a reliable medium that performs well in hostile electrical and mechanical environments, is immune to outside interference, and is not subject to grounding problems. It is also a secure medium: optical transmission does not emit radio waves, so an electronic listening device is ineffective, and physically tapping into a fiber is extremely difficult without early detection.

FDDI at the University

Departmental LANs will be connected to the FDDI network through routers. Bridges and user devices equipped with FDDI adapters will not be used. The FDDI backbone network will be based on the ring of trees topology. A dual counter-rotating ring will be installed in the campus utility corridor system in the form of a figure of eight (Ref. FDDI Layout on Campus). Dual attached concentrators will be installed on the ring at eight strategic node locations (the roots of the trees). Point-to-point fiber cables (the branches of the trees) will be run from these concentrators to single attached routers located in building basements. Departmental Local Area Networks will be extended down to the basement to attach to the routers.

Because prices of FDDI equipment are declining as the FDDI market matures, concentrators will not be purchased and installed on the ring until they are needed. Under some circumstances, routers will initially be installed on the ring instead of concentrators, even though availability of the network may be reduced (and, strictly speaking, the topology will no longer be a true ring of trees). Where a ring node is located in a building and only LANs in that building are to be attached to the FDDI network, it is neither sound economics nor efficient to have both a concentrator and a router installed at the same location. When the need arises to service other buildings from that ring node location, the router will be removed from the ring, a new concentrator installed on the ring, and the router re-attached to the concentrator.

To further enhance reliability, the concentrators and routers installed on the dual counter-rotating ring will be equipped with uninterruptible power supplies. If operational experience shows that the level of reliability must be raised and funding is available, then (1) optical bypasses may be added to the concentrators and (2) the routers may be dual homed.
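The timed-token rule lends itself to a small sketch. Here the token-holding timer is simplified to a fixed per-capture frame budget, an assumption made purely for illustration; real FDDI uses a negotiated target token rotation time:

```python
# Sketch of the timed-token rule: a station holding the token may
# transmit until its queue is empty or its holding budget is spent,
# then releases the token to its downstream neighbour.
def service_ring(queues, holding_limit):
    """One full token rotation: frames each station gets to send."""
    sent = []
    for pending in queues:              # the token visits each station
        sent.append(min(pending, holding_limit))
    return sent

# Three stations with 5, 0 and 12 queued frames, a budget of 8 frames
# per token capture:
# service_ring([5, 0, 12], 8) -> [5, 0, 8]
```

The point of the timer is visible even in this toy: the heavily loaded third station cannot monopolize the ring, and its remaining frames wait for the next rotation.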

FDDI Layout on Campus

The Department of Physical Plant manages an extensive system of Utility Corridors (tunnels) interconnecting most of the buildings on the campus. These utility corridors provide a largely hospitable environment for cable, and strategic locations within the corridors provide the services required for supporting equipment. It is the location of the Utility Corridors that governs the fiber cable routes for the FDDI Backbone Network. By leveraging the existing system of Utility Corridors, the University avoids the significant cost of trenching cable into the ground.

A number of technological parameters specified by the FDDI Standards must be respected in designing an FDDI Backbone Network appropriate to the campus. The most significant are a maximum ring circumference of 100 kilometers and a distance of no greater than 2 kilometers between FDDI Stations. As discussed in FDDI Architecture, the chosen topology for the backbone network is a Ring of Trees.

Campus Route

The route for the campus backbone network (Fig. 3) respects the above parameters and the location of the Utility Corridors. The ring of trees will consist of a twenty-four strand fiber cable ring (Ref. Fiber Allocation), in the form of a figure eight that eventually will reach a length of six kilometers. FDDI wiring concentrators (the base of each tree) will be inserted into the ring at eight Ring Node Locations. A Ring Node Location is an environmentally suitable place for a concentrator or router together with supporting patch panels.

The proposed Ring Node Locations are:

A  General Services second floor computer room
B  Communications room #B-1A Mechanical Engineering Building
C  Sub corridor at station #1860 near Tory
D  Central Academic Building machine room
E  Students Union Building machine room
F  Medical Science Building machine room
G  Sub corridor at station #2080 Clinical Sciences Building
H  Heating Plant machine room

The tree branches will be individual point-to-point eight strand fiber cables, running from the machine room of each Campus Building to the nearest Ring Node Location. These point-to-point cables will be laid only when needed. Each token ring or ethernet Local Area Network within a building that is to be connected to the FDDI Backbone Network will have to be extended into the machine room of that building. There it will be attached to a router that may be located in the building machine room or at the Ring Node Location associated with the building. A Department is responsible for extending its LAN to the machine room of the building the LAN is located in (this can be done in conjunction with CNS) and for paying an $8,000 charge for a Standard Network Connection.
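As a quick sanity check, the design figures quoted in this section can be tested against the FDDI limits. The 642 m figure below is the longest branch run that appears in the building table later in this section; all other values come from the text above:

```python
# Check the planned layout against the FDDI limits cited above.
RING_KM_LIMIT = 100        # maximum ring circumference
STATION_KM_LIMIT = 2       # maximum distance between stations

planned_ring_km = 6        # figure-eight ring at full build-out
longest_branch_m = 642     # longest run listed in the building table

assert planned_ring_km < RING_KM_LIMIT
assert longest_branch_m / 1000 < STATION_KM_LIMIT
```

Both limits are comfortably met, which is why the corridor-constrained route raises no FDDI distance concerns.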

Figure 3: Ring Node Locations and FDDI Cable Route

University Buildings and Facilities not serviced by the FDDI Backbone Network

Campus buildings which are not accessible from the Utility Corridor system (building names preceded by map # listed in November 1991 Telecommunications Directory):

5  Faculty Club
54 Garneau Student Housing
61 Garneau Student Housing
22 Greenhouse (Agriculture)
67 Jubilee Auditorium
6  Ring House 1 - Art Gallery
4  Ring House 2
3  Ring House 3
2  Ring House 4
24 Rutherford House
87 South Field Car Park
42 Stadium Car Park
21 Trailer Complex #1 (Greenhouse)
92 Trailer Complex #2 (Civil Electrical Engineering)
73 Univ Hospital Day Care Centre
78 Univ Hospital Outpatient Residence
82 University Hospitals Parkade
1  University House
8  Windsor Car Park

Near-campus buildings:

64 Campus Tower (112 St & 87 Ave)
89 Garneau Professional Building (111 St & 82 Ave)

Off-campus facilities:

Devonian Botanic Garden
future Eastpoint Remote Library Stack Facility (50 St & 82 Ave)
Edmonton Research Station (115 St & 61 Ave)
Ellerslie Research Station
Faculte St. Jean (91 St & 84 Ave)
Mechanical Engineering Acoustic & Noise Unit (6720 30 St)

The following phased implementation is recommended in recognition of the fiscal situation of the University and the rapid technology changes occurring within the communications industry. Each phase must include a complete assessment of the achievements met and the opportunities to be gleaned in accommodating state of the art, but appropriate, technological advancements. The following table lists the buildings that will be able to connect to the backbone, according to ring node location.

Phase 1 - service between GSB and CAB

The first phase will be implemented during 1992 and is intended to provide an extensive test bed for the FDDI backbone technology, as well as contributing to the natural extension of the backbone. In the context of the campus, any chosen route must be readily accessible to the CNS technical staff and equipment and should include teaching, research, and administrative applications. Both GSB and CAB satisfy these criteria. They have a variety of high speed and lower speed LANs, including token ring and ethernet, and there are several UNIX, MAC and PC laboratories in both buildings in addition to powerful workstations. This will give CNS an opportunity to connect a variety of equipment, and in doing so check out the vendors' fiber products. With access to the large number of LANs, laboratories and workstations, CNS will be better able to determine if there are any capacity limitations in the chosen network architecture. Upon the successful completion of phase 1, the equipment currently being used in CAB will be moved to Materials Management to allow an ethernet connection from their location to the campus backbone network.

Phase 2 - service between CAB and WMHSC

Proceeding with this phase in 1993 will be subject to a technology review and the favorable results of phase 1. This provides an opportunity to review new products and prices that will have become available since the phase 1 implementation was decided. WMHSC is chosen not only because it has a number of ethernet and token ring laboratories, but also because it is strategically located, allowing several other buildings alongside the fiber to be connected.

Phase 3 - servicing the remaining central quadrant buildings

Proceeding with this phase later in 1993 will be subject to favorable results from phase 2. This phase will primarily connect the southerly portion of the campus.
Phase 4 - servicing the remaining buildings on campus

Proceeding with this phase during 1994 will be subject to favorable results from phase 3. Phase 4 connects the remaining buildings on the campus where network LANs have been added.

Building                   Ring Node     Fibre cable         Known LANs
                           Location      length (m)          in Building
Agriculture Forestry       GSB           185                 AT
GSB Second floor           GSB           0                   CS/CNS/Ag. Forst ethernet
Hydraulics Lab             GSB           150
Printing Services          GSB           175
RCMS                       GSB           150
Structural Eng Lab         GSB           235
Assiniboia Hall            Mechanical    200
Athabasca Hall             Mechanical    255
Bio Sciences               Mechanical    In Place
CFER                       Mechanical    ?
Chem/Mineral Eng           Mechanical    350
Mechanical                 Mechanical    Ring Node Location
Nuclear Physics            Mechanical    In Place
Pembina Hall               Mechanical    320
Physics                    Mechanical    275
Temporary Lab              Mechanical    150
V-Wing                     Mechanical    150
Alberta Cultural Herit.    Bus. Tunnel   313
Business                   Bus. Tunnel   50
Earth Sciences             Bus. Tunnel   350
Fine Arts                  Bus. Tunnel   432
Garneau Trailer Complex    Bus. Tunnel   629
Home Economics             Bus. Tunnel   418
HUB Mall                   Bus. Tunnel   250
Humanities                 Bus. Tunnel   200
Law Centre                 Bus. Tunnel   560
Rutherford                 Bus. Tunnel   200
St. Stephen's College      Bus. Tunnel   493
Tory (Turtle)              Bus. Tunnel   150
University Health Serv     Bus. Tunnel   600
Arts/Convocation Hall      CAB           167
Cameron Library            CAB           In Place
Chemistry                  CAB           In Place
Civil Eng                  CAB           In Place
Electrical Eng             CAB           193
Power Plant North          CAB           174
South Lab                  CAB           63
Administration             SUB           153
Dentistry-Pharmacy         SUB           404
Education Parkade          SUB           241
Industrial Design Studio   SUB           138
St. Joseph's College       SUB           100
SUB                        SUB           Ring Node Location
Universiade Pavillion      SUB           303
University Hall            SUB           In Place
Van Vliet Centre           SUB           26
Base Med. Education N/S    Base Med.     In Place
Heritage Medical           Base Med.     125
Newton Research            Base Med.     190
UAH                        Base Med.     12
Educ. & Dev. Cntr          Base Med.     305
Clinical Science           2080 Tunnel   48
Corbett Hall               2080 Tunnel   In Place
Extension Centre           2080 Tunnel   In Place
R-C Blood Centre           2080 Tunnel   85
Rehab Med Trailers         2080 Tunnel   226
WMHSC                      2080 Tunnel   283
Aberhart Centre            Heating Plt   398
Aberhart Nurses Res.       Heating Plt   316
Aberhart Services          Heating Plt   370
Cross Cancer Institute     Heating Plt   308
Heating Plant              Heating Plt   Ring Node Location
Henday Hall                ?             381
Kelsey Hall                ?             214
Lister Hall                Heating Plt   298
MacKenzie Hall             ?             642
Materials Management       Heating Plt   258
Mewburn Veterans Cent      Heating Plt
Nuclear Mag Resonance Serv Heating Plt

FDDI Network Management

Overview of Network Management
The goal of Network Management is to manage the network resources efficiently and effectively. These resources exist in a multi-vendor, heterogeneous network environment, and include the following:
Campus backbone components: routers, bridges, FDDI, and cables.
Local Area Networks: Ethernet, LocalTalk, TokenRing, hubs, host computers, terminal servers, and printers.
Coax (native 3270, HYPERbus) and asynchronous (StarMaster, HYPERbus) connections.
The objective of network management is to keep network resource failures to a minimum by taking a proactive role in the management of the network. However, if problems do arise, network management should assist the network personnel in quickly identifying and resolving the failing entity. The OSI Management Framework categorizes network management into the following functional areas:
Fault Management: detect, isolate, and control abnormal network behavior.
Configuration Management: detect and control the state of the network for both logical and physical configurations.
Performance Management: evaluate network behavior and effectiveness to optimize performance.
Security Management: control access to the network.
Accounting Management: collect and process data related to network usage.

High level protocols are required to monitor and control the network from a central location. Two such protocols are the Internet Simple Network Management Protocol (SNMP) and the ISO Common Management Information Protocol (CMIP). Of the two, SNMP has become the de facto standard. The architectural model for SNMP is based on a Network Management Station (NMS) and many managed nodes (network resources). The NMS is responsible for collecting information, displaying the state of the network, and controlling the nodes. Each managed node contains a store of objects, referred to as the Management Information Base (MIB), that allows for the remote monitoring and control of the device. The Network Management Station communicates with each node via the SNMP query and set commands.

Architectural Model
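The manager/agent relationship above can be sketched in miniature. The following Python fragment is illustrative only: the classes are invented stand-ins for an NMS and its managed nodes, and the values are simplified, though the object names borrow from the standard MIB-II interface group.

```python
# Minimal sketch of the SNMP architectural model: a Network Management
# Station (NMS) issuing query (get) and set commands against the MIB
# objects held by each managed node. Device names are hypothetical.

class ManagedNode:
    """A network resource holding a store of MIB objects."""
    def __init__(self, name, mib):
        self.name = name
        self.mib = dict(mib)

    def snmp_get(self, oid):
        # SNMP query: return the current value of one MIB object.
        return self.mib[oid]

    def snmp_set(self, oid, value):
        # SNMP set: remotely change a control object on the device.
        self.mib[oid] = value

class ManagementStation:
    """Collects information from, and controls, the managed nodes."""
    def __init__(self):
        self.nodes = {}

    def register(self, node):
        self.nodes[node.name] = node

    def poll(self, oid):
        # Gather one object from every node, e.g. for a status display.
        return {name: node.snmp_get(oid) for name, node in self.nodes.items()}

nms = ManagementStation()
nms.register(ManagedNode("router-gsb", {"ifOperStatus": "up", "ifSpeed": 100_000_000}))
nms.register(ManagedNode("hub-cab", {"ifOperStatus": "down", "ifSpeed": 10_000_000}))

status = nms.poll("ifOperStatus")
# A real NMS would raise a fault alert here; the sketch just collects it.
faulty = [name for name, state in status.items() if state != "up"]
```

In a production NMS the poll would of course travel over UDP as SNMP protocol data units rather than as local method calls, but the division of labor is the same.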

FDDI Station Management
FDDI technology incorporates a high degree of fault tolerance with its self-healing dual counter-rotating ring, dual homing, and optical bypass. In addition, Station Management (SMT) is built into every "layer", making it an integral part of FDDI. Without going into the architectural details of FDDI, SMT may simply be described as a low level protocol that defines the requirements for station operation, including facilities for connection management, node configuration, recovery from error conditions, and the encoding of SMT frames. From a network management point of view, however, SMT presents several problems. While providing comprehensive management at the ring level, it does not address:
Remote monitoring and control, such as one needs in a campus environment.
Management of components on other types of networks, such as Ethernet or TokenRing.
Compatibility with other network management standards, such as SNMP and CMIP.
To overcome these problems, the vendors of FDDI equipment are incorporating SNMP proxy agents into their products. Essentially, proxy agents "translate" between SMT and SNMP. This enables remote network management stations to monitor and control the FDDI network in the same fashion that the local area networks are managed.

Network Management at the University
CNS Data Communications & Networks' goal is to manage the campus network, including the FDDI backbone and the attached supported local area networks, from a common Network Management Station (NMS). The NMS must be based on SNMP, and the components must have SNMP agents. This gives the most flexibility in control and monitoring, and should:
Allow the immediate detection of faults and alert appropriate personnel.
Automatically display a configuration map, assisting in locating network resources and pinpointing faults.
Allow multiple hierarchical views of the network, down to the device level.
Keep a performance and error log so that historical data can be studied with a view to taking a proactive role in preventing network down time.
Allow remote access to the Network Management Station for network operations, CNS operations, and various management and system level personnel.
The NMS should also be able to manage major non-SNMP resources such as the StarMaster and HYPERbus, but this capability will have to be developed.
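The proxy-agent arrangement described earlier can be pictured as a thin translation layer. This sketch is hypothetical: the SMT field names, the object names, and the mapping between them are simplified stand-ins, not the actual SMT frame encoding or the FDDI MIB.

```python
# Illustrative SMT-to-SNMP proxy agent: it answers SNMP-style queries by
# translating them into reads of local FDDI Station Management (SMT)
# state. All names and the mapping are invented for the sketch.

# Local SMT view of one FDDI station (normally maintained by the
# station's own SMT entity, not by the proxy).
smt_state = {
    "connection_state": "thru",   # dual attached, both rings in use
    "ring_op_count": 3,           # times the ring re-initialized
    "bypass_present": True,       # optical bypass installed
}

# Mapping from SNMP-style object names to SMT state fields.
OID_MAP = {
    "fddiSMTStationStatus": "connection_state",
    "fddiSMTRingOps": "ring_op_count",
    "fddiSMTBypassPresent": "bypass_present",
}

def proxy_get(oid):
    """Serve an SNMP get by translating it into an SMT state lookup."""
    try:
        return smt_state[OID_MAP[oid]]
    except KeyError:
        return None   # object not proxied; a real agent returns an error PDU

# A remote NMS can now monitor the FDDI station like any other LAN device.
print(proxy_get("fddiSMTStationStatus"))   # -> thru
```

The point of the translation is that the NMS never needs to speak SMT at all; the FDDI ring appears to it as just another set of SNMP-managed resources.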

In recognition of the above, any purchases of network components, particularly routers, hubs, and concentrators, will include SNMP support. As an example of the commitment to Network Management, LANCE+ has been purchased from LEXCEL Inc and is undergoing a 30 day acceptance trial. This product provides the monitoring and control environment required to fulfill the above requirements.

Implementation

Phase 1 Route
As discussed in the FDDI Layout on Campus, the first phase in implementing an FDDI Backbone Network for the campus is to install a backbone service between the General Services Building and the Central Academic Building. This choice provides an environment where we can learn to plan, operate, and expand the technology while contributing to the natural extension of the backbone. It introduces Ring Node Locations "A" and "D". The activities in the buildings listed in the following table will be able to take advantage of the FDDI Backbone Network as part of Phase 1.

Building          Node    Ring Node    Fiber optic cable    Known LAN's
                  Letter  Location     length in meters     in Building
Second floor GSB  A       GSB          0                    CS/CNS/Ag. Forst
Cameron Library   D       CAB          In Place
Chemistry         D       CAB          In Place
Civil Eng         D       CAB          In Place
Electrical Eng    D       CAB          193


There are three possible routes, North, Central, and Across Quad, for laying the fiber cable between the General Services Building and the Central Academic Building. Although there are cost differences between the choices, because of the differing cable distances and the labor required in passing through introduced Ring Node Locations, these differences do not significantly affect the merits of choosing one route over another.

North Route (Selected by Network Advisory Committee)
The fiber optic cable would be run from the General Services Building to Mechanical Engineering B1A, to station 1860 near the Tory Building, and on to the Central Academic Building.

Ring Node Locations "B" and "C" can be attached to the backbone as part of the Phase 2 implementation. The activities in the buildings listed in the following table will be able to take advantage of the FDDI Backbone Network as part of that phase. This route allows more active sites to be attached quickly in Phase 2.

Building              Node    Ring Node    Fiber optic cable    Known LAN's
                      Letter  Location     length in meters     in Building
Agriculture Forestry  A       GSB          185                  AT
Assiniboia Hall       B       Mechanical   200
Bio Sciences          B       Mechanical   In Place
Chem/Mineral Eng      B       Mechanical   350
Mechanical            B       Mechanical   Ring Node Location
Nuclear Physics       B       Mechanical   In Place
Business              C       Bus. Tunnel  50
Law Centre            C       Bus. Tunnel  560
Rutherford            C       Bus. Tunnel  200

Central Route (Option 2 - Discarded) The fiber optic cable would have been run from the General Services Building to the Students Union Building and on to the Central Academic Building.

If this route were chosen, Ring Node Location "E" would have been added to the backbone as part of the Phase 2 implementation. The activities in the buildings listed in the following table would have taken advantage of the FDDI Backbone Network as part of that phase. This route allowed fewer active sites to be attached quickly in Phase 2.

Building                  Node    Ring Node    Fiber optic cable
                          Letter  Location     length in meters
Administration            E       SUB          153
University Hall           E       SUB          In Place
Dentistry-Pharmacy        E       SUB          404
SUB                       E       SUB          Ring Node Location
Industrial Design Studio  E       SUB          138
Van Vliet Centre          E       SUB          26
Universiade Pavillion     E       SUB          303
St. Joseph's College      E       SUB          100
Education Parkade         E       SUB          241

Across Quad Route (Option 3 - Discarded)


The fiber optic cable would have been run from the General Services Building to the Agricultural Forestry Building and on to the Central Academic Building.

If this route were chosen, no Ring Node Locations could have been added to the backbone as part of a Phase 2 implementation. In addition, except for the General Services to Agricultural Forestry portion, the cable would eventually be excluded from the proposed final network. Although it could be argued that this fiber might be used for disaster recovery in the future (for example, if all paths between GSB and CAB are broken), this use of the cable would yield little future return.

Recommendations (Presented to Network Advisory Committee)
1. The North Route is the preferred alternative, gleaning more active sites in Phase 2 and providing a better opportunity for the effective use of time. While Phase 2 cables are being ordered and installed, there is a better opportunity to add more cluster locations to the FDDI Backbone.
2. The Across Quad Route should not be considered, as it is hard to justify an alternative that offers so little future return.
The Network Advisory Committee, after due diligence and careful analysis of the options, concurred with the recommendations.

Fiber Allocation To provide a common reference point for conformance verification the FDDI Standard specifies the characteristics of station to station and cable plant connectivity as a multimode fiber optic strand with a core of 62.5 microns and a cladding of 125 microns. Multimode means that multiple modes or light rays can be transmitted through the fiber for a maximum distance of 2 kilometers using light emitting diodes as transmitters. A superior FDDI Standard for a single mode fiber optic strand is nearing approval. Single Mode means that only one mode or light ray is propagated in the fiber, but when coupled with laser transmitters the station to station distance can be up to 60 kilometers. The needs of the campus network can be readily met by the less expensive multimode fiber optic cable and supporting technology. Fiber optic cable can be purchased in various configurations depending on local constraints. The options include the number of strands carried in a jacket, whether the strands are loose or bound within the jacket, and indeed the very nature of the jacket. Two examples of factors affecting the choice of jacket are whether the cable will be buried in the

ground or whether it must meet specific local fire standards. The University is fortunate in having a campus utility corridor system that places few demands on the cable, so the choice of jacket is less critical and probably can be determined by whichever jacket yields the best possible procurement price. A cable cost of $1 per strand per meter is the industry guideline that has been used in planning the 6 kilometer FDDI Backbone cable plant. Notwithstanding this rate, the cable industry is extremely competitive and it is anticipated that the maximum number of options must be open to negotiation when purchasing the cable, so as to gain the best price for the greatest benefit to the University.

Allocation
The cost of laying a cable must be balanced with the needed immediate services and a reasonable view of the likely future requirements that can best be afforded by a current investment. Within this framework it is not possible to have an unlimited number of spare fibers in the cable for other non-CNS applications. In appreciation of all of the above, a 24 strand fiber optic cable is recommended, with the fiber strands allocated in the following manner:

2 Strands    FDDI backbone network; using one strand for the primary ring and one strand for the secondary ring.
2 Strands    Possible second backbone network; future traffic and/or technology developments may require a unique pathway.
2 Strands    Backup, test, and development for the above applications.
4 Strands    Future Video/Voice/Data applications; the application and technological specifications are as yet neither articulated nor understood.
4 Strands    University energy and utility management network.
10 Strands   Special purpose high speed networks needing cross campus connection.
24 Strands   Total

As the University does not face the costs of trenching when laying cable, the purchase of additional fibers on speculation becomes a poor investment. Although there is currently a reasonable amount of spare capacity, the number of strands is a finite resource; as time passes, the allocation and use of strands will lead to ever increasing difficulty in deciding the relative merits of applications making calls on the remaining spare strands. Changes in the allocation of fiber optic strands within the cable plant

will be made under the advisement of the Network Advisory Committee.

Cost Recovery
Fiber optic strands allocated for non-CNS network functions will be charged at $1,200 per annum per fiber optic strand. No physical support or service is included within the fee, such as connectoring or patch paneling. The fee is intended to recover the cost of administering the care and consideration that must be taken throughout the infrastructure in accommodating the non-network use of the fiber optic cable plant. The fee will be levied on April 1st of the year following that in which a fiber strand comes into service; it is not intended that the charge be a reservation fee.
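Using the planning figures above ($1 per strand per metre of cable, and $1,200 per annum per non-CNS strand in service), the budgeting arithmetic looks like this. The rates and the 6 kilometre plant length are the document's own figures; the three-strand example is arbitrary:

```python
# Rough fiber cost arithmetic using the plan's budget figures:
#   cable cost    = $1 per strand per metre (industry guideline)
#   cost recovery = $1,200 per annum per non-CNS strand in service

STRANDS_PER_CABLE = 24          # recommended 24 strand cable
CABLE_RATE = 1.00               # dollars per strand per metre
RECOVERY_FEE = 1200             # dollars per annum per non-CNS strand

def cable_cost(route_metres, strands=STRANDS_PER_CABLE):
    """Planning estimate for purchasing a run of fiber optic cable."""
    return route_metres * strands * CABLE_RATE

def annual_recovery(non_cns_strands_in_service):
    """Fee levied April 1st of the year after a strand enters service."""
    return non_cns_strands_in_service * RECOVERY_FEE

# The full backbone cable plant is planned at 6 kilometres:
print(cable_cost(6000))        # 6000 m x 24 strands x $1 = $144,000
# e.g. three non-CNS strands in service:
print(annual_recovery(3))      # 3 x $1,200 = $3,600 per annum
```

The $144,000 figure is only the guideline-rate cable purchase; connectoring, patch panels, and installation labor are budgeted separately in the phase cost tables.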

Concentrator & Router Selection
The ability to choose industry standard equipment from many different vendors was a fundamental requirement in selecting the FDDI Standard as the wideband backbone network for the University campus. By avoiding proprietary implementations, which are typically available only from a single source vendor, there is a greater opportunity to select equipment more appropriate to specific local needs. With a large base of vendors marketing similar equipment, greater vendor competition will result in better purchase prices to the University.

Market Research
Vendors were contacted during February and March 1992 and asked to respond on how their product met the University's minimum technical requirements (i.e. those expectations that must be met for the University implementation), what other desirable features their product offered, and what the purchase price and ongoing maintenance charges were for a predefined configuration. The following tables, a set for FDDI Concentrators and a set for FDDI Routers, present the information obtained from the vendors. Each table has two sections: "MUST HAVE" enumerates the minimum technical requirements, and "DESIRABLE" enumerates the features that are desirable in seeking the best cost benefit ratio.

Equipment Selection
A decision analysis process beginning in April 1992 will determine the vendor equipment most likely to meet the University's needs together with the most appropriate cost benefit ratio. Short listed vendors will be asked to provide equipment manuals for review. Assuming that their product still meets the requirements after this review, the vendors will be asked, through the Purchase Requisition/Tender Process, to quote for the supply and ongoing maintenance of equipment. It is anticipated that a decision analysis process in May or June 1992 will then be able to determine the successful bid, with a view to the delivery of equipment in

the early autumn time frame. FDDI Concentrator: Table 1 of 3 NSC FC-16 r 500 MUST HAVE: established product X (May / June) 8 + ports (excluding Dual 16 Attached backbone ports) dual attached to ring Y field upgradable optical bypass Y option FDDI Station ManagemenT (SMT) Y at level 6.2+ FDDI Management Information Y Base with sets 30 day evaluation Y compliance checkout by ANTC or Y Univ. New Hamp. DESIRABLE: dual MAC ? field expandable hot insertion of ports for x expansion/maintenance optional dual power supply x power consumption (watts) 130 PRICING (including educ. discount): Price for DAC + 8 SAS ports $15947 Maintenance Costs $143 / month x optional x x Y X (Apr / May) to 8 Y Y Y Y Y both 2 to 14 Y Y Y ? Y X (both planned) 2 to 14 Y Y Y 4 to 12 Y X (factory order) X (6.1) Y Y Concen. Dec Concentrato Interphase NCR (AT&T) FiberHUB StarLAN 100 Network Perfs NP-MC

X (planned) X (coming) ? Y Y Y

x x 120

Y coming ?

x x 400

x x 200

15% educ discount $19624 $259 ? ? ? ?

20% educ discount $18250 $152

Warranty 1 year

1 year on site

1 year

1 year

1 year

Additional Features:

single mode shielded twist pair

shielded twist pair

shielded twist pair

FDDI Concentrator: Table 2 of 3

Optical Data ODS 1092 Y to 12 Y Y Y Y Y both

Sumitomo ODS 1091 Y to 30 Y Y Y Y Y both ODS 1090 Y to 60 Y Y Y Y Y both Suminet3500 2/4/6/8 Y X Y X (no sets Y Y

MUST HAVE: established product 8 + ports (excluding Dual Attached backbone ports) dual attached to ring field upgradable optical bypass option FDDI Station ManagemenT (SMT) at level 6.2+ FDDI Management Information ) Base with sets 30 day evaluation compliance checkout by ANTC or Univ. New Hamp. DESIRABLE: dual MAC field expandable hot insertion of ports for expansion or maint. optional dual power supply power consumption (watts) PRICING (including educ. discount): Price for DAC + 8 SAS ports Maintenance Costs t Warranty Additional Features:

up to 4 Y Y 700 20 - 30% educ disc $31845 for 12 $186 1 year

up to 4 Y Y 700 20 - 30% educ disc 33933 for 12 $198 1 year

up to 4 Y Y 700 20 - 30% educ disc 37500 for 12 $219 1 year

Y Y x 60 max (or 300?) $US16K for 10 $US14.4K not set ye 1 year graceful insertion

single mode single mode single mode stp stp stp

FDDI Concentrator: Table 3 of 3 SynOptics UBI (Annixter/S ignatel) Model 2914 o ASM-1000 MUST HAVE: established product Y Y 8 + ports (excluding Dual fixed @ 12 to 14 Attached backbone ports) dual attached to ring Y Y field upgradable optical bypass Y Y option FDDI Station ManagemenT (SMT) Y Y at level 6.2+ FDDI Management Information Y Y Base with sets 30 day evaluation Y Y compliance checkout by ANTC or Y Y Univ. New Hamp. DESIRABLE: dual MAC Y field expandable hot insertion of ports for Y expansion or maint. optional dual power supply Y power consumption (watts) 400 PRICING (including educ. sans educ discount): discount Price for DAC + 8 SAS ports $30K for 10 Maintenance Costs ? up to 3 x Y x 105



3904 in a 3000-05 Y 40 Y Y Y Y Y Y

Link Builder 3GH Y 40 Y Y Y X (no sets) Y ?

Concentrat r*32 Y 4 to 32 Y $US1000 Y Y Y Y

Y Y 460

Y & other boards Y 1000

Y Y to 500

24954 / 24K n/a / $200

$33100 / 21K n/a / $175

$49727 $285

$23814 $330

Warranty 1 year Additional Features: RISC better e MTBF

1 year

1 year

1 year

90 days

graceful insertion roving MAC

no single fail point

all hot standbys

future single mod roving MAC graceful insertion redundant cpu option sweeps rin

g freq rem. burn. upgrades NLM security FDDI Router : Table 1 of 4 FX5510T established product MUST HAVE: 4 or more LANS (exclud. Dual X (1) Attached FDDI) dual attach Y dual homing Y FDDI Station ManagemenT (SMT) Y at level 6.2+ FDDI Management Information X Base with sets 802.3 Ethernet & Token/wRing X (T/R (T/R) only) Open Shortest Path First IP n/a routing protocol route ip & at least 1 other n/a ipx,ubi xns,ATalk bridge rest of protocols Y Alantic PowerHUB Y 12 Cisco AGS+ Y to 18 Eth/3 T/R Y Y Y Y Y Coral CX1200 X (May) to 8 CX1600 X (May) to 16 X (1) Fibermux FX5520EZ

Y X latest X X (no T/R) X (planned ) X (IP only) Y

Y Y working on 7 Y X (T/R fall)

Y Y working on 7 Y X (T/R fall) X

Y Y Y X X (Eth only) X

(coming) (coming) Y Y X (rest coming) Y X (rest coming) Y X (IP only) Y

learning bridge Y address & protocol filtering Y transparent (translating) Y bridge 30 day evaluation Y ANTC or Univ New Hamp both compliance checkout AppleTalk phase II n/a DESIRABLE: Apple update-based routing protocol (AURP) 802.1d source routing hot insertion of ports Y

Y Y Y Y ANTC planned x (planned ) x

Y Y Y Y Y I & II



Y Y Y Y both x

underway underway I & II coming I & II coming Y

x (when standard ) Y x

n/a fully

configur ed field upgradable optical bypass Y Y Y Y Y option optional dual power supply planned x x Y x power consumption (watts) 230 500 20 amps 20 amps 150 memory no no no no no option option option option option Performance: internal bus (M 400 533 800 800 100 bits/sec) performance: FDDI <-> Eth PPS 1 ? ? ? ? ? port performance: FDDI <-> Eth PPS 4 ? ? ? ? ? ports (agg.) performance: FDDI <-> FDDI ? ? 3 FDDIs 3 FDDIs ? forwarding full out full out performance: FDDI filtering ? ? ? ? ? performance: Ethernet filtering ? ? ? ? ? Pricing (including educ. includes educ educ $US22K discount) 30% disc not disc not set set Price for 1 SAS + 4 Ethernets $US $31,920 ? $US45900 $US22K 28.8K

Y x 150 no option 100 ? ? ? ? ? $US22 $27K $US27K

Maintenance Costs / month $US 110. Warranty (hardware/software) 1 yr Additional Features: need $8K NMS

for 12 being worked on 1 yr/90 days 10BaseT& thinNET non blocked > 128 2 FDDIs


$US 135.

90 days routes 16 protocol OSI

not set 3 FDDIs

not set fully redundan t selfhealing 3 FDDIs

1 yr need $8K NMS

FDDI Router : Table 2 of 4

Fibronic s FX8210B

NCR (AT&T) Brouter 450 Y to 22 Y Y Y X (planned ) Y

NEC XR2000 Y 4 Y Y Y X (3/4 '92) Y

NSC 6600 Y 2 or 4 Y Y Y Y 6400 Y to 12 / 16 Y Y Y Y

6800 established product Y MUST HAVE: 4 or more LANS (exclud. Dual to 20 Attached FDDI) dual attach Y dual homing Y FDDI Station ManagemenT (SMT) Y at level 6.2+ FDDI Management Information Y Base with sets 802.3 Ethernet & Token/wRing X (T/R (T/R) 3/4 '92) Open Shortest Path First IP Y routing protocol route ip & at least 1 other Y ipx,ubi xns,ATalk bridge rest of protocols Y learning bridge Y address & protocol filtering Y Y 2 or 4 Y Y X (6.1) Y

X (T/R June)

X (T/R 3/4 '92) Y

X (planned ) X (IP only) Y Y Y





transparent (translating) X bridge 30 day evaluation Y ANTC or Univ New Hamp Y compliance checkout AppleTalk phase II Y DESIRABLE: Apple update-based routing under protocol (AURP) devel. 802.1d source routing hot insertion of ports x field upgradable optical bypass x (fact. option order) optional dual power supply x power consumption (watts) 1500 Memory 4 stand, 4 opt Performance: 800 bits/sec) performance: ? port performance: ? ports (agg.) performance: ? forwarding performance: ? internal bus (M FDDI <-> Eth PPS 1 FDDI <-> Eth PPS 4 FDDI <-> FDDI FDDI filtering

Y Y Y x x

Y ? Y Y x

Y Y both Y part3/4,

Y Y Y Y under

X Y Y Y under devel.

full 1/4 developm ent n/a n/a Y Y x Y Y x Y n/a ?

x x (fact. order)

x 100 default 6K addrs n/a 6400 6400 n/a full FDDI full Ethernet

x 410 ?

x 231 512K to 2megs

x 130 2 megs fixed 400 ? ? n/a ? ? $24.3K for 2 Ether $28.3K $291, $340

x 1000/120 0 2 stand, 2 opt 400 ? ? ? ? ?

550 + 80

800 line

65 K

25 K 25 K

full full

160 K 14.8 K 25 percent

performance: Ethernet filtering ? Pricing (including educ. discount) Price for 1 SAS + 4 Ethernets $39.4K Maintenance Costs / month $457/wMo

46000 230

? ?

$US18750 $US62.50

$34.9K $386/ month

nth Warranty (hardware/software) 1 year Additional Features: to 5 FDDIs

90 days single mode

1 year up to 4 FDDIs some OSI

1 year srce route tunnel to 3 FDDIs

1 year RISC

1 year to 3 FDDI?


to 3

to perf,low RISCs/in cost nt Cisco's 9x5 on box? site FDDI Router : Table 3 of 4 Timeplex Proteon (Gandalf/L. A./Texscan) ProNET Time/Lan p4200 100 established product Y MUST HAVES: 4 or more LANS (exclud. Dual to 8 Attached FDDI) dual attach Y dual homing & cleave FDDI Station ManagemenT (SMT) Y at level 6.2+ FDDI Management Information Y Base with sets 802.3 Ethernet & Token/wRing ) Y (T/R) Open Shortest Path First IP Y routing protocol route ip & at least 1 other Y (ATalk ipx,ubi xns,ATalk planned) bridge rest of protocols Y learning bridge Y address & protocol filtering Y to 5 Y Y Y end of March Y (T/R 4) I & II Y to 4 Y Y Y end of March Y Y 3500 ? to 6 Y Y Y X Y X (developing ) X (rest planned) Y Y Y Y Y spanning tree Y Y Y Y Y Y Y II Y 6 Y Y Y Y X (T/R 3/4 Y Sumitomo 3Com coming site OSI OSI 9x5 on t 3RISCs/i

CNX 500



Y transparent (translating) Y Y bridge 30 day evaluation Y Y ANTC or Univ New Hamp X Y compliance checkout AppleTalk phase 2 special x DESIRABLES: load Apple update-based routing 3/4 '92 x protocol (AURP) 802.1d source routing x Y hot insertion of ports x x field upgradable optical bypass Y Y option optional dual power supply x x power consumption (watts) 2.5 amps 300 Memory 2 megs 2 stand, 8 recomm. planned Performance: 528 bits/sec) performance: ? port performance: ? ports (agg.) performance: ? forwarding performance: line rate internal bus (M FDDI <-> Eth PPS 1 FDDI <-> Eth PPS 4 FDDI <-> FDDI FDDI filtering 320 8 - 9 K ? ? 90 - 100 K

Y Y Y Y 3/4 '92 Y x Y Y ? 2 data/2 instr stand. busless (800)

Y Y Y n/a n/a Y x Y consumes slot x 60 max no option

Y Y Y Y mid summer Y Y Y x to 500 BTU 12 meg standard


800 not

30 K n/a line speed (160K) near line speed / /15% ? ? 20% ed dis plan $US19.2

tested yet

performance: Ethernet filtering 9 - 10 K line rate Pricing (including educ. discount) Price for 1 SAS + 4 Ethernets 28668 Monthly maintenance Costs $298 Warranty (hardware/software) 90 day

$33/29.4/no 25/21/21 K

$24,480 $190 1 year

Bid $350/171/no $275/123/22 not set yet Bid 12 / 3(?) 6 12 / 3(?) 1 year

Additional Features: later single mode

months 2 FDDIs

months National chip

OSI routing


some OSI to 3 FDDI 1st gen firmware FDDI downloads

RISC 2nd generation FDDI

FDDI Router : Table 4 of 4

UBI ASM 5361

established product MUST HAVE: 4 or more LANS (exclud. Dual Attached FDDI) dual attach dual homing FDDI Station ManagemenT (SMT) at level 6.2+ FDDI Management Information Base with sets 802.3 Ethernet & Token/wRing Open Shortest Path First IP routing protocol route ip & at least 1 other ipx,ubi xns,ATalk bridge rest of protocols learning bridge address & protocol filtering transparent (translating) bridge 30 day evaluation ANTC or Univ New Hamp compliance checkout now 9, midYear 36 Y Y Y June/Jul y sets Y Y Y Y Y Y Y

WellFlee t Link Concentr Backbone Back. Node ator Link Concent. Node Node Node X (March X (May & & upgrade) upgrade) to 12 to 48 to 12 to 48 Y Y Y summer sets Y Y Y Y Y Y summer sets Y Y Y Y Y Y summer sets Y Y Y Y Y Y summer sets Y Y Y

Y Y Y Y Y Y Y Y extensiv extensiv extensiv extensiv e e e e Y Y Y Y Y Univ New Hamp Y (bridge I) Y Y June for software Y Y ? 5 recommen ded

Y Y Y Y X ANTC Univ New Univ New Univ New for 2/2 Hamp Hamp Hamp '92 AppleTalk phase 2 Y Y Y Y DESIREABLES: (bridge (bridge (bridge I) I) I) Apple update-based routing by end Y Y Y protocol (AURP) of '92 802.1d source routing 4/4 '92 Y Y Y hot insertion of ports Y June for June for June for software software software field upgradable optical bypass Y Y Y Y option optional dual power supply Y x Y x power consumption (watts) 400 175 385 1120 Memory no 5 5 5 option recommen recommen recommen ded ded ded

Performance: internal bus (M bits/sec) performance: FDDI <-> Eth PPS 1 port performance: FDDI <-> Eth PPS 4 ports (agg.) performance: FDDI <-> FDDI forwarding performance: FDDI filtering performance: Ethernet filtering Pricing (includ educ discount) Price for 1 SAS + 4 Ethernets Maintenance Costs / month Warranty (hardware/software) Additional Features:

320 ? ? ? 500 K wire





Eth wire Eth wire Eth wire Eth wire 60 K 20 K 190 K 20 K 150 K 75 K 480 K 75 K

? ? 1 year single mode to 6 FDDI dual RISCs each FDDI

? ? ? ? wire wire wire wire 20 - 30% 20 - 30% 20 - 30% 20 - 30% educ dis educ dis educ educ disc disc 26468 34857 45784(8 86620(8 Ethernet Ethernet ?) s?) 154/54 203/54 267/54 505/54 1 year 1 year 1 year 1 year OSI OSI 68040 no demo'd demo'd single failure point single single to 4 to 13 soon soon FDDIs FDDIs graphica graphica l user l user int int

Phase 1 to Phase 4 Cost Analysis
The implementation of a campus wide FDDI Backbone Network involves the commitment of substantial financial resources on the part of the University. As described in the FDDI Layout on Campus, a phased implementation is proposed. The development plan calls for Phase 1 to be completed in 1992, Phase 2 and Phase 3 to be completed in 1993, and Phase 4 in 1994. Although ambitious, this is also a practical schedule. The following summary tables enumerate the expected capital and operating commitments that will be incurred in implementing each phase. It should be noted that these are estimates representing a best judgment of likely expenditures, not the actual amounts, which can only be ascertained during the appropriate purchasing and installation processes.

Capital Costs
Listed by building, this estimate includes the purchase and installation cost of the supporting backbone fiber optic cable plant, connectors, patch panels, appropriate FDDI Concentrator or FDDI Router equipment, and an uninterruptible power supply. This places an FDDI Backbone connection in the basement of the indicated building.

Operating Cost

Listed by building, this estimate includes the anticipated annual costs of maintaining the FDDI Concentrator or FDDI Router, both hardware and software, and providing maintenance service for the uninterruptible power supply.

Budget Parameters
The following rates have been used for the purpose of budget planning:

                                  Capital    Operating
Fiber Optic Strand / meter        $1         -
FDDI Concentrator                 $25K       $2.4K
FDDI Router                       $35K       $2.4K
Uninterruptible Power Supply      $1.8K      $500
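The budget parameters can be combined into a simple per-building estimating rule. The sketch below applies the plan's planning rates; the sample building and its cable length are invented for illustration:

```python
# Per-building capital/operating estimate from the plan's budget rates.
# Rates are the document's planning figures; the example building at the
# bottom is hypothetical.

CABLE_PER_STRAND_METRE = 1.00        # capital, $ per strand per metre
EQUIPMENT = {                        # (capital, annual operating), $
    "concentrator": (25_000, 2_400),
    "router":       (35_000, 2_400),
}
UPS = (1_800, 500)                   # uninterruptible power supply

def building_estimate(cable_metres, device, strands=24):
    """Return (capital, operating) planning estimate for one building."""
    equip_cap, equip_op = EQUIPMENT[device]
    ups_cap, ups_op = UPS
    capital = cable_metres * strands * CABLE_PER_STRAND_METRE + equip_cap + ups_cap
    operating = equip_op + ups_op
    return capital, operating

# Hypothetical building 200 m from the nearest ring node, fed by a router:
cap, op = building_estimate(200, "router")
print(cap, op)   # 200*24*$1 + $35,000 + $1,800 = $41,600 ; $2,400 + $500 = $2,900
```

Buildings whose fiber is already "In Place" would simply pass a cable length of zero, leaving only the equipment and UPS components, which is consistent with the shape of the per-building figures in the tables that follow.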

Proposed in Phase 1:

Building          Node    Ring Node    Fibre optic cable    Capital       Operating
                  Letter  Location     length in meters     Costs         Costs
Second floor GSB  A       GSB          0                    $162,800.00   $5,600.00
Cameron Library   D       CAB          180                  $6,505.00     $2,450.00
Chemistry         D       CAB          In Place             $0.00         $0.00
Electrical Eng    D       CAB          193                  $5,888.00     $50.00
Civil Eng         D       CAB          In Place             $0.00         $0.00
Mechanical        B       Mechanical   Ring Node Location   $7,950.00     $200.00
Business          C       Bus. Tunnel  Ring Node Location   $7,950.00     $200.00
CAB               D       CAB          Ring Node Location   $39,450.00    $3,100.00
Phase 1 Totals =                                            $230,543.00   $11,600.00

Proposed in Phase 2:

Building                Node    Ring Node    Fibre optic cable
                        Letter  Location     length in meters
Agriculture Forestry    A       GSB          185
Chem/Mineral Eng        B       Mechanical   350
Assiniboia Hall         B       Mechanical   In Place
Bio Sciences            B       Mechanical   In Place
Nuclear Physics         B       Mechanical   In Place
Earth Sciences          C       Bus. Tunnel
Business                C       Bus. Tunnel  50
Rutherford              C       Bus. Tunnel  200
Law Centre              C       Bus. Tunnel
CAB                     D       CAB          Ring Node Location
Administration          E       SUB          153
University Hall         E       SUB          In Place
SUB                     E       SUB          Ring Node Location
Van Vliet Centre        E       SUB
Education North/South           Base Med.
Heritage Medical                Base Med.
Base Med.                       Base Med.    Ring Node Location
UAH Educ. & Dev. Cntr           Base Med.
WMHSC                           2080 Tunnel
Extension Centre                2080 Tunnel  In Place
Corbett Hall                    2080 Tunnel  In Place
Clinical Science                2080 Tunnel  100
Station 2080                    2080 Tunnel  Ring Node Location
Phase 2 Totals =                                            $971,625.00   $68,250.00

Proposed in Phase 3:

Building                  Ring Node    Fibre optic cable    Capital
                          Location     length in meters     Costs
Heating Plant             Heating Plt  Ring Node Location
Materials Management      Heating Plt  260
Lister Hall               Heating Plt  410
Printing Services         GSB          175
Hydraulics Lab            GSB          150
Structural Eng Lab        GSB          235
Athabasca Hall            Mechanical   255
CFER                      Mechanical   250
Physics                   Mechanical   275
Humanities                Bus. Tunnel  200
HUB Mall                  Bus. Tunnel  250
Fine Arts                 Bus. Tunnel                       $41,800.00
Home Economics            Bus. Tunnel  418                  $11,020.00
Arts/Convocation Hall     CAB          167                  $7,020.00
Dentistry-Pharmacy        SUB          404                  $40,360.00
Industrial Design Studio  SUB          138                  $6,540.00
Universiade Pavillion     SUB          303                  $38,760.00
Education Parkade         SUB          241                  $37,640.00
Newton Research           Base Med.    12                   $34,120.00
Second floor GSB          GSB          Ring Node Location   $32,550.00
Phase 3 Totals =                                            $584,420.00

Proposed in Phase 4:

Building                Ring Node    Fibre optic cable
                        Location     length in meters
RCMS                    GSB          150
Mechanical Engineering  Mechanical   Ring Node Location
Pembina Hall            Mechanical   320

Bus. Tunnel


$2,450. 00 $50.00 $50.00 $2,450. 00 $50.00 $2,450. 00 $2,450. 00 $2,450. 00 $2,400. 00 $28,000 .00 $50.00 $2,400. 00 $50.00 $50.00 $50.00 $2,400. 00 $50.00 $50.00


Temporary Lab Mechanical V-Wing Mechanical

Business Bus. Tunnel Ring Node Tunnel Location Tory (Turtle) Bus. Tunnel 150 Alberta Cultural Herit. St. Stephen's College Garneau Trailer Complex University Health Serv Power Plant North South Lab CAB St. Joseph's College R-C Blood Centre Rehaab Med Trailers Kelsey Hall Bus. Tunnel Bus. Tunnel Bus. Tunnel Bus. Tunnel CAB CAB CAB SUB 2080 Tunnel 2080 Tunnel Lister Hall 400 493 750 600 174 63 Ring Node Location 100 130 350 100 100

$12,300 $50.00 .00 $16,300 $50.00 .00 $13,900 .00 $7,100. 00 $5,340. 00 $31,200 .00 $1,000. 00 $6,380. 00 $9,900. 00 $5,900. 00 $5,900. $50.00 $50.00 $50.00 $2,400. 00 $50.00 $50.00 $50.00 $50.00 $50.00

MacKenzie Hall Lister Hall

4 4 4 4 4 4 4 4 4


Henday Hall Aberhart Nurses Res. Aberhart Services Aberhart Centre Mewburn Veterans Cent Cross Cancer Institute Services Building Nuclear Mag Resonance Heating Plant

Lister Hall Heating Plt Heating Plt Heating Plt Heating Plt Heating Plt Heating Plt Heating Plt Heating Plt

00 $5,900. 00 370 $10,220 .00 420 $11,020 .00 450 $11,500 .00 350 $9,900. 00 360 $10,060 .00 300 $9,100. 00 470 $11,820 .00 Ring Node $62,350 Location .00 Phase 4 $366,41 Totals= 0.00 100 Splice /Connector Equipment Required = Testing Diagnostic/ Equipment Required = Network Management Equipment Required = Spares = $7,000. 00 $40,000 .00 $42,000 .00

$50.00 $50.00 $50.00 $50.00 $50.00 $50.00 $50.00 $50.00 $4,800. 00 $13,200 .00

$55,000 .00 Project $229,70 Contingency 0.00 = Total for $2,526, $121,05 the Project 698.00 0.00 =
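The capital summary can be cross-footed: the four phase totals plus the shared equipment, spares and contingency items reconcile exactly with the project total. A minimal sketch (the variable names are ours; the dollar figures are taken from the tables above):

```python
# Cross-footing the capital summary (values in dollars, from the cost tables;
# the dictionary layout is ours, not part of the plan).
phase_capital = {1: 230_543, 2: 971_625, 3: 584_420, 4: 366_410}
shared_items = {
    "splice/connector equipment": 7_000,
    "testing/diagnostic equipment": 40_000,
    "network management equipment": 42_000,
    "spares": 55_000,
    "project contingency": 229_700,
}
total = sum(phase_capital.values()) + sum(shared_items.values())
print(total)  # 2526698 -- the "Total for the Project" capital figure
```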

Phase 1 Implementation Outline

Implement a Phase 1 fiber service between GSB & CAB:

Task | From | To
Select & Test Router Hardware | 92.04 | 92.05
Select & Test Concentrator Hardware | 92.04 | 92.05
Tender for Fiber Supplier | 92.04 | 92.05
Order Fiber | 92.05 | 92.06
Order Routers | 92.06 | 92.07
Order Concentrator | 92.06 | 92.07
Install Fiber (Physical Plant) | 92.07 | 92.07
Install Connector Terminations | 92.07 | 92.08
Ensure cable plant meets specification | 92.08 | 92.08
Test FDDI equipment in local environs | 92.08 | 92.08
Attach FDDI equipment to Phase 1 cable | 92.08 | 92.08
Test Phase 1 implementation | 92.09 | 92.10
Publish information on Connection Costs & Rates [2] | 92.08 | 92.08
Publish information on How to Connect [3] | 92.08 | 92.08
Advise Phase 1 clients of impending conversion | 92.09 | 92.09
Convert Phase 1 to Production, attaching GSB to the Production Backbone Network (Chemistry, Mathematics, Electrical Engineering, Cameron Library, Civil Engineering) | 92.10 | 92.12
Review all aspects of Phase 1 | 92.08 |
Determine Phase 2 Technology [4] | 92.12 |



Capital Funding

The provision of a high-capacity fiber optic backbone represents a significant financial investment, estimated at $3.25 million to provide backbone connectivity to the campus buildings. In recognition of the fiscal constraints facing the University, an effective balance has been struck between a phased implementation that is cognizant of the campus need, the available financial and staffing resources, and the financial capacity of the institution. Through careful stewardship of past expenditures, UCS has accumulated several funds in anticipation of major capital enterprises. A more recent University policy requiring UCS to convert $1M of operating base budget to capital over a five-year term provides a further source of funding. The following itemizes the various sources of funds and the lien that will be placed against their respective accounts.

Operating to Capital Conversion

A financial plan initiated by the President and the Board of Governors requires UCS to convert $1M of base operating budget to base capital budget over a 5-year period. Under this plan $400K is to be converted by April 1st, 1992, with the remaining balance scheduled as $200K conversions on April 1st of 1993, 1994 and 1995.

From this source, UCS will allocate $400K during the 1992 fiscal year, followed by $500K during the 1993 fiscal year, $600K in the 1994 fiscal year and $425K in the 1995 fiscal year. This represents a major commitment by UCS to the fiber optic backbone project over the 4-year term.

Network Reserve

This account was established in recognition of expenditures related to the continued operation of the communications infrastructure across the campus. The various connection charges for campus connectivity form the source of funds for the account, while the provision of the communications hardware represents the expenditures. The difference between revenue and expenditure forms a reserve specifically intended for improvements to the central campus connectivity service. As of December 1991 the account holds some $656K. From this source, UCS will seek $400K during fiscal year 1992, followed by $100K during fiscal year 1994. The omission of a lien during fiscal year 1993 is intentional: it is anticipated that the complexities of converting existing local area networks to the backbone may require one-time funding that would be appropriately and correctly levied against the reserve.

Long Term Reserve

This reserve is intended to assist in financing major central computing expenditures. Through persistently frugal management UCS has accumulated $600K+. However, major expenditures aside from the fiber optic backbone network are anticipated within the planning period. From this source, UCS will allocate $100K during fiscal year 1992, followed by $400K during fiscal year 1993. This schedule should avoid compromising central mainframe and software upgrades or purchases, while allowing the network to be implemented in a speedy and efficient manner.

Annual Capital Allocation

UCS will seek an allocation of $100K for fiscal year 1992. A similar application for $100K will be made in each of the fiscal years 1993 and 1994.
Notwithstanding that each application will be justified on the merits of campus backbone connectivity, annual allocations of the order mentioned can redress the $231K of funds transferred to the central administration from the Network Reserve Account in 1989.

Outline Plan

The following table outlines the planned provision of $3.25M across a 4-year period for the implementation of the fiber optic backbone network. Although current estimates place the cost at $3.25M, it is anticipated, and supportive industry evidence is emerging, that prices for the technology will fall as the technology comes into more common use.

These possible future benefits will be evaluated as the plan progresses.

Funding Source | 1992/93 | 1993/94 | 1994/95 | 1995/96
Operating to Capital Conversion | $400K | $500K | $600K | $425K
Network Reserve | $400K | - | $100K | -
Long Term Reserve | $100K | $400K | - | -
Annual Capital Allocation | $100K | $100K | $100K | -
Total | $1M | $1M | $800K | $425K
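Each column of the funding plan cross-foots to its Total row, and the four annual totals sum to the $3,225K ($3.25M) provision. A minimal sketch (amounts in $K; the list layout is ours, the figures are from the table above):

```python
# Cross-footing the capital funding table (amounts in $K per fiscal year,
# 1992/93 through 1995/96; figures are from the plan, layout is ours).
sources = {
    "Operating to Capital Conversion": [400, 500, 600, 425],
    "Network Reserve":                 [400,   0, 100,   0],
    "Long Term Reserve":               [100, 400,   0,   0],
    "Annual Capital Allocation":       [100, 100, 100,   0],
}
yearly_totals = [sum(year) for year in zip(*sources.values())]
print(yearly_totals, sum(yearly_totals))  # [1000, 1000, 800, 425] 3225
```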

Operating Funding

The continuing changes in work patterns in all campus constituencies, the opportunities presented by technological advancements available from the worldwide industry, and the pressures upon the University to achieve even greater levels of efficiency all contribute to the evolution and displacement of current hardware, software and services. A balance is often difficult to achieve between retaining the old, to further capitalize on previous investment, and venturing into the new, with all the attendant challenges of change. Notwithstanding the difficulties, steady and even progress is required for the University to maintain its stature of excellence and respect as an institution of learning. The obligation upon the service providers is to demonstrate a well-managed evolution. The premise throughout is that while new operating expenditures are introduced, those expenditures must be offset by the savings accrued in decommissioning the older, less efficient technology.

In 1989 Network Systems Limited ceased new development and feature enhancement for their HYPERbus line. The product could continue to be purchased through to 1991, and their engineering support organization would continue maintenance and service through to early 1994. Within this scenario CNS Data Communications & Networks undertook to publicize the business situation and to prepare a statement of direction outlining the short and long term alternatives; further, alternative methods of connection would be promoted as opportunities arose, and contingency plans would be prepared for limited HYPERbus support beyond the vendor's dissolution date.

The FDDI Backbone Network provides connectivity for the user base converting from the 327x terminal working environment to a Local Area Network working environment. These conversions must be encouraged, given the difficulty of CNS having to self-maintain the HYPERbus in the future. In this context it is reasonable to divert operating funding from HYPERbus to the FDDI Backbone Network, while always being cognizant that any diversion must not compromise the currently operating HYPERbus service.

| 1992/93 | 1993/94 | 1994/95 | 1995/96
HYPERbus Maintenance | $152K | $152K | $152K | $152K
FDDI Phase 1 Operating | $11.6K | $11.6K | $11.6K | $11.6K
FDDI Phase 2 Operating | - | $68.2K | $68.2K | $68.2K
FDDI Phase 3 Operating | - | - | $28.0K | $28.0K
FDDI Phase 4 Operating | - | - | - | $13.2K
Total FDDI Operating | $11.6K | $79.8K | $107.8K | $121.0K
Estimated HYPERbus Maintenance | $140.4K | $70.0K | $60.0K | $60.0K
Balance | $0 | $2.2K | ($15.8K) | ($29.0K)

Although challenging, the Operating expenditures for fiscal years 1992/1993 and 1993/1994 are manageable within the existing budget. Starting in fiscal year 1994/1995 there is a significant shortfall that requires both the further reduction of the HYPERbus maintenance costs (which may not be practical) and an attendant application to the University administration for increased base funding to cover the shortfall in expenditures.
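The Balance row follows from the $152K HYPERbus maintenance budget less the combined FDDI operating costs and the residual HYPERbus maintenance estimate for each year. A sketch of the arithmetic (amounts in $K; computed in tenths of $K so the figures stay exact):

```python
# Yearly balance = HYPERbus maintenance budget - total FDDI operating
#                  - estimated residual HYPERbus maintenance (all in $K).
budget = 152.0
fddi_operating = [11.6, 79.8, 107.8, 121.0]    # cumulative Phases 1-4
hyperbus_estimate = [140.4, 70.0, 60.0, 60.0]
balance = [(round(budget * 10) - round(f * 10) - round(h * 10)) / 10
           for f, h in zip(fddi_operating, hyperbus_estimate)]
print(balance)  # [0.0, 2.2, -15.8, -29.0] -- deficits from 1994/95 onward
```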

Staff Resources

The implementation of the FDDI Backbone Network, and the conversion of the existing Ethernet Backbone clients to the FDDI Backbone Network, represent net new obligations placed on the existing staff resources of CNS Data Communications & Networks. Even though the FDDI technology is expected to be more reliable, and therefore less staff-intensive for support and maintenance, it will be used more extensively and it is certainly more complex. Some relief is expected through the progressive relinquishing of HYPERbus connections. In embarking upon a new and leading-edge technological endeavor, it is incumbent upon CNS Data Communications & Networks to focus the attention of the best and most appropriate staff upon the FDDI implementation. To do this, the identified technical specialists must be relieved to a large extent of their current duties.

To achieve this, the pragmatic solution would be to request two 2-year term appointments. Beyond that time, it is felt that the network can be managed and operated at the existing staff levels. However, it would be premature at this time to request the indicated term appointments without first being very certain that the required resources could not be leveraged from the existing staff base. That Computing & Network Services must at the same time reduce staff levels to fulfill the Operating to Capital Conversion and the expected Base Budget Reduction is a dichotomy that must be respected, especially as the FDDI Backbone Network is the significant beneficiary of the capital funding resulting from the operating to capital conversion to date. A careful analysis and internal reorganization of CNS Data Communications & Networks staff, function and service is planned to begin in May 1992. If the required resources cannot be leveraged from the existing staff base, then a formal request for assistance will be placed with the Vice President (Student & Academic Services) by August 1992.


Networking Advisory Committee

The exploding demand for interoperability of communication services across diverse and distributed computer platforms, both locally and worldwide, requires new vehicles of partnership, information sharing and planning with the users of communication technology on the campus. To achieve cost-effective solutions in an arena of rapidly changing communication technology and services, the Networking Advisory Committee is committed to forging a participatory leadership and knowledge-sharing base in the use and application of networking, and to the orderly planning and integration of the new FDDI communication capability. There exists a diversity of requirements across the campus that can be coupled with what are invariably expensive technological opportunities within the communications industry. The strategic and tactical communication decisions require a platform of dialog for the analysis of need, the formation of consensus in objectives and the review of achievements. An enduring advisory committee, comprising representatives from the key user constituencies and operating under the umbrella of Vice Presidential support, provides the forum for the efficient and effective navigation of the issues and opportunities.

Terms of Reference

Reporting to the Vice President (Student and Academic Services), the Networking Advisory Committee will assess the needs of the existing and future networking infrastructures appropriate to the campus from a strategic perspective, and will participate and advise in the translation of these requirements into functional, security-sensitive and adaptable networking plans for the university community.

Appointments: On invitation from the Chairperson of the Networking Advisory Committee, subsequent to the approval of the Vice President (Student & Academic Services) and the Director of Computing and Network Services.

Meetings: As required by the business of the day, but not less frequently than semi-annually (June and December), to review achievements from the previous period and plans for the next period.

Agenda: Items are included on request to the Chairperson, accompanied by appropriate supporting documents for circulation prior to the meeting.

Minutes: A synopsis of the business conducted at a meeting, distributed within one month of the meeting.


Member | Department | Phone | E-Mail
*Brian Silzer, Chair | Registrar | 3723 | bsilzer@vm.ucs.
*Monica Beltrametti | Computing and Network Services | 4767 | monica_beltrametti@q
*Will English | Computing and Network Services | 2460 | wenglish@vm.ucs.
Jim Heilik | University Libraries | 5282 | jheilik (PROFS)
Craig Montgomerie | Educational Administration | 4906 | craig_montgomerie@ad
Bob Hayes | Chemical Engineering | 3571 | userhaye@mts.ucs.
*Steve Sutphen | Computing Science | 476 |
Keith Switzer | Computing and Network Services | 2460 | userkjs@mts.mts.
Kevin Moodie | Physical Plant | 4261 | kmoodie@vm.ucs.
Peter Winters | Business | 4120 | userprwp@ualta.mts
Doug Owram | History | 0858 | dowram@vm.ucs.
Phil Haswell | Electrical Engineering | 1486 | haswell@ee
*Bob Busch | Research | 5320 | rbusch@vm.ucs.

* Executive Membership

Glossary

adapter: A device (single attachment or dual attachment) used to connect end-user nodes to the FDDI network; each contains an interface to a specific type of workstation or system. adaptive routing: A form of routing in which messages are forwarded through the network along the most cost-effective path currently available and are automatically rerouted if required by changes in the network topology (for example, if a circuit becomes disabled). American National Standards Institute: An organization that coordinates, develops, and publishes standards for use in the United States. ANSI: See American National Standards Institute asynchronous transmission: The transmission of data according to token-holding rules. Data transmission is initiated by the token holder if the token holding timer has not expired. attenuation: The amount of power (or light) that is lost as the light travels from the transmitter through the medium to the receiver. The difference between transmitted and received power. Expressed in decibels (dB). backbone: A network configuration that connects various LANs together into an integrated network, as for example, in a campus environment. bandwidth: A measure of the amount of traffic the media can handle at one time. In digital communications, describes the

amount of data that can be transmitted over the line in bits per second. beacon: A specialized frame used by media access control to announce to other stations that the ring is broken. bridge: An intelligent, protocol-independent, store-and-forward device that operates as a Data Link layer relay. Used to connect similar or dissimilar LANs, thus permitting the creation of extended LANs. Building: Buildings located on the University of Alberta main campus excluding those buildings identified in FDDI Layout on Campus as not being accessible from the Utility Corridor System. building backbone subsystem: Provides the link between the building and campus backbone. This subsystem consists of an intermediate crossconnect located in an equipment room and the cables that connect it to the telecommunications crossconnect. bypass: The ability of a station to be optically or electronically isolated from the network while maintaining the integrity of the ring. campus backbone subsystem: The cabling and crossconnects between clusters of buildings within a site. One building contains the main crossconnect. Campus Building: see Building. carrier-sense multiple access with collision detect: The channel access method used by Ethernet and ISO 8802-3 LANs. Each station waits for an idle channel before transmitting and detects overlapping transmission by other stations. CCITT (Comite Consultatif International Telegraphique et Telephonique): See International Telegraph and Telephone Consultative Committee CFM: See configuration management claim process: A technique used to determine which station will initialize the FDDI ring. CMT: See connection management concentrator: An FDDI node that provides additional attachment points for stations that are not connected directly to the dual ring. The concentrator is the focal point of the dual ring of trees topology.
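The attenuation entry above can be made concrete with a short worked example (the power figures here are illustrative, not from the plan):

```python
import math

# Attenuation in decibels between transmitted and received optical power,
# per the glossary definition above. The example figures are illustrative only.
def attenuation_db(p_transmit_mw: float, p_receive_mw: float) -> float:
    return 10 * math.log10(p_transmit_mw / p_receive_mw)

# A link that delivers half of the transmitted power has lost about 3 dB.
print(round(attenuation_db(1.0, 0.5), 2))  # 3.01
```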
configuration management: That portion of connection management that provides for the configuration of PHY and MAC entities within a node. connection management: That portion of the Station Management function that controls insertion, removal, and connection of the PHY and MAC entities within a station.

counter-rotating ring: An arrangement where two signal paths, whose direction is opposite to each other, exist in a ring topology. crossconnect: Patch cable and passive hardware that is used to administer the connection of cables at a central or remote location. CSMA/CD: See carrier-sense multiple access with collision detect DAC: See dual attachment concentrator DAS: See dual attachment station Data Link layer: Layer 2 of the OSI model that defines frame construction, addressing, error detection, and other services to higher layers. decode: The act of recovering the original information from an encoded signal. destination address filtering: A feature of bridges where only messages intended for nodes on the extended LAN are forwarded. Differential Manchester encoding: A signaling method that encodes clock and data information into bit symbols. Each bit symbol is divided into two halves, where the second half is the inverse of the first half. A zero is represented by a polarity change at the start of the bit time; a one is represented by no polarity change at the start of the bit time. downstream: A term that refers to the relative position of two stations in a ring. A station is downstream of its neighbor if it receives the token after its neighbor receives the token. dual attachment concentrator: A concentrator that offers two connections to the FDDI network capable of accommodating the FDDI dual (counter-rotating) ring, and additional ports for connection of other concentrators or FDDI stations. dual attachment station: An FDDI station that offers two connections to the FDDI network capable of accommodating the FDDI dual (counter-rotating) ring. dual homing: A method of cabling concentrators and stations that permits an alternate or backup path to the dual ring in case the primary connection fails. Can be used in a tree or ring of trees configuration.

dual ring of trees: A topology of concentrators and nodes that cascade from concentrators on a dual ring. ECM: See entity coordination management EIA: See Electronic Industries Association

Electronic Industries Association: A standards organization specializing in the electrical and functional characteristics of interface equipment. encapsulating bridge: A proprietary hardware device that encapsulates packets in specialized frames. encode: The act of changing data into a series of electrical or optical pulses that can travel efficiently over a medium. entity: An active element within an Open Systems Interconnection layer or sublayer. entity coordination management: That portion of connection management that provides for controlling bypass relays and signaling to PCM that the medium is available, and for coordinating trace functions. extended LAN: A collection of local area networks interconnected by protocol independent store-and-forward devices (bridges). fiber: A dielectric waveguide that guides light. FDDI: See Fiber Distributed Data Interface Fiber Distributed Data Interface: A set of ANSI/ISO standards that define a high-bandwidth (100 Mb/s) LAN that uses a timed token protocol and fiber optic cable as the transmission medium. fiber optic cable: A jacketed fiber or bundle of fibers. fiber optics: A technology whereby signals in the form of pulses of light are transmitted over an optical waveguide medium through the use of light emitting transmitters and light detecting receivers. fragmentation: A process in which large frames from one network are broken up into smaller frames that are compatible with the frame size requirements of the network to which they will be forwarded. fragment: In FDDI, pieces of a frame left on the ring; caused by a station stripping a frame from the ring. frame: A data message that includes the source address, destination address, data, frame check sequence, and control information. graded index: A characteristic of fiber optic cable in which the core refractive index is varied so that it is high at the center and matches the refractive index of the cladding at the core-cladding boundary. 
header: Control information attached to the front of an Ethernet frame by an encapsulating bridge. The header, in conjunction with the trailer, surround the frame prior to the bridge forwarding it to an FDDI network.

ICC: See intermediate crossconnect IEEE: See Institute of Electrical and Electronics Engineers index profile: The refractive index of a fiber optic cable as a function of its distance from the core center. Institute of Electrical and Electronics Engineers: An information exchange organization. As part of its various functions, it coordinates, develops, and publishes network standards for use in the United States following ANSI rules. interconnect: A panel-mounted fiber optic coupler or wallbox-mounted fiber optic coupler used to join two cables with a single pair of connectors. intermediate crossconnect: An element in the EIA/TIA 568 Commercial Building Wiring standard. Consists of the active, passive, and support components that connect the interbuilding cabling and the intrabuilding cabling for a building. intermediate system: An OSI term for a system that originates and terminates traffic, and that also forwards traffic to other systems. Also referred to as IS. International Organization for Standardization: An international agency that is responsible for developing international standards for information exchange. International Telegraph and Telephone Consultative Committee: An international consultative committee that sets international telecommunications and data communications usage standards. interoperability: The ability of all system elements to exchange information between single vendor and multivendor equipment. Also called open communications. ISO: See International Organization for Standardization LAN: See local area network link-loss budget: The total amount of loss that can be introduced and still have the optical system work. LLC: See Logical Link Control local area network: A data communications network that spans a limited geographical area. The network provides high-bandwidth communication over coaxial cable, twisted-pair, fiber, or microwave media. It is usually owned by the user. CNS supports Ethernet, Token Ring and AppleTalk LANs.
Logical Link Control: Part of the Data Link layer of the OSI model. It defines the transmission of a frame of data between two stations with no intermediate switching nodes. logical ring: The circular path a token follows in an FDDI

network made up of all the connected MAC sublayers. The physical topology can be a dual ring of trees, a tree, or a ring. logical topology: See logical ring MAC: See Media Access Control MAC-bridge: A term used to describe any Data Link layer bridge. main crossconnect: An element in the EIA/TIA 568 Commercial Building Wiring standard. Consists of the active, passive, and support components that connect the interbuilding backbone cables between intermediate crossconnects. Manchester encoding: A signaling method by which clock and data bit information can be combined into a single, self-synchronizable data stream. A transition takes place in the middle of each bit time. A low-to-high transition represents a one; a high-to-low transition represents a zero. MCC: See main crossconnect Media Access Control: The Data Link layer sublayer responsible for scheduling, transmitting, and receiving data on a shared medium local area network (for example, FDDI). media interface connector: An optical fiber connector pair that links the fiber media to the FDDI node or another cable. The MIC consists of two halves. The MIC plug terminates an optical fiber cable. The MIC receptacle is associated with the FDDI node. MIC: See media interface connector Network Connection (Standard): Allows campus Local Area Networks that operate at data rates up to 16 megabits per second to attach to the campus backbone. Network Connection (Special): Allows connections to the campus backbone network of LANs or devices that exceed 16 megabits per second. Network layer: Layer 3 of the OSI model that permits communications between network nodes in an open network. node: Any device connected to a network (for example, station, concentrator, bridge). nonreturn to zero: A data transmission technique in which a polarity level, high or low, represents a logical 1 or 0.
nonreturn to zero invert on ones: A data transmission technique in which a polarity transition from low to high, or high to low, represents a logical 1. No polarity transition represents a 0. NRZ: See nonreturn to zero
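The NRZI scheme defined above can be sketched in a few lines of code (an illustration of the definition only, not FDDI's complete 4B/5B signaling):

```python
# Illustrative NRZI encoder: a logical 1 is sent as a polarity transition,
# a logical 0 as no transition (see the definition above).
def nrzi_encode(bits, initial_level=0):
    levels, level = [], initial_level
    for bit in bits:
        if bit == 1:
            level ^= 1        # transition -> logical 1
        levels.append(level)  # no transition -> logical 0
    return levels

print(nrzi_encode([1, 0, 1, 1, 0]))  # [1, 1, 0, 1, 1]
```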

NRZI: See nonreturn to zero invert on ones Open Systems Interconnection: Internationally accepted framework of standards for intersystem communication. A seven layer model, developed by the International Organization for Standardization, that covers all aspects of information exchange between two systems. optical receiver: An optoelectronic circuit that converts an incoming optical signal to an electrical signal; typically a photodetector. optical transmitter: An optoelectronic circuit that converts an electrical signal to an optical signal; typically a light emitting diode or laser diode. OSI: See Open Systems Interconnection packet: A group of bits, including data and control elements, that are transmitted together. The control elements include a source address, destination address, frame control and status indicators, and frame check sequence. The data and control elements and error-control information are arranged in a specified format. PCM: See physical connection management peer-to-peer: Assigning of communications tasks so that data transmission between logical groups or layers in a network architecture is accomplished between entities in the same sublayer of the OSI model. PHY: See Physical Layer Protocol physical connection: In FDDI, the full-duplex physical layer association between adjacent PHYs in an FDDI ring. physical connection management: That portion of connection management that manages the physical connection between adjacent PHYs. This includes the signaling of connection type, link confidence testing, and the enforcement of connection rules. Physical layer: Layer 1 of the OSI model that defines and handles the electrical and physical connections between systems. The physical layer can also encode data into a form that is compatible with the medium. Physical Layer Medium Dependent (PMD): The Physical Layer sublayer that defines the physical requirements for nodes that attach to the FDDI ring.
Items defined by the PMD include transmit and receive power levels, connector requirements and fiber optic cable requirements. Physical Layer Protocol (PHY): The Physical Layer sublayer that defines the media independent portion of the Physical Layer in FDDI. Items defined by the PHY include symbols, line states, data encode/decode process, and the data recovery process.

physical link: The path, via PMD and attaching cable, from the transmit logic of one PHY to the receive logic of an adjacent PHY in an FDDI ring.
physical topology: The actual arrangement of cables and hardware that make up the network.
PMD: See Physical Layer Medium Dependent
power budget: The difference between transmit power and receiver sensitivity, including any safety margins.
power penalty: The total loss introduced by planned-for splices in the fiber link. Typically, extra splices are planned but not immediately implemented.
propagation delay: The time it takes for a signal to travel across the network.
protocols: A set of operating rules and procedures, usually governing peer-to-peer communications over a network.
protocol filtering: A feature by which some bridges can be programmed to always forward or always reject transmissions originated under specified protocols.
receive: The act of a station accepting a frame, token, or control sequence from the ring.
refractive index: The measure of the speed of light in a material, relative to the speed of light in a vacuum.
repeat: The act of a station receiving a frame or token from an upstream station, retiming it, and placing it onto the ring for its downstream neighbor. The repeating station may examine, copy to a buffer, or modify control bits in the frame as appropriate.
repeater: A level 1 hardware device that restores the amplitude, waveform, and timing of signals before transmission onto another network segment.
ring: A connection of two or more stations in a circular logical topology. Information is passed sequentially between active stations, each one in turn examining or copying the data and finally returning it to the originating station, which removes it from the network.
Ring of Trees: An FDDI ring of trees consists of a backbone ring with FDDI concentrators (the bases of the trees) inserted on the ring at strategic locations. Point-to-point fibers (the branches of the trees) are run from these backbone concentrators to FDDI routers.
Ring Node Location: A point on the FDDI backbone ring at which some of the strands of fiber in the cable are cut and terminated on a patch panel. Concentrators or routers can then be connected to the ring by attaching them to the patch panel with patch cables.
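To illustrate how the power budget and power penalty definitions above fit together, here is a minimal sketch of a fiber link loss calculation. The function names are illustrative, and the numeric values are assumptions for the example (−20 dBm transmit power and −31 dBm sensitivity are figures commonly cited for FDDI multimode PMD, not values taken from this plan):

```python
def power_budget(tx_power_dbm, rx_sensitivity_dbm):
    """Power budget: difference between transmit power and receiver sensitivity."""
    return tx_power_dbm - rx_sensitivity_dbm

def link_margin(budget_db, fiber_km, loss_db_per_km, splices, splice_loss_db):
    """Margin remaining after fiber attenuation and the power penalty
    (loss allowed for planned splices, implemented or not)."""
    penalty = splices * splice_loss_db
    return budget_db - fiber_km * loss_db_per_km - penalty

budget = power_budget(-20.0, -31.0)   # 11.0 dB available
margin = link_margin(budget, fiber_km=2.0, loss_db_per_km=1.5,
                     splices=4, splice_loss_db=0.3)
print(budget, margin)                 # 11.0 dB budget, 6.8 dB margin remaining
```

A positive remaining margin indicates the receiver will still see enough optical power even after the planned-for splices are eventually made.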

router: A level 3 hardware device that uses layer 3 protocols to control network communication between stations and to forward messages to end stations or other routers.
SAC: See single attachment concentrator
SAS: See single attachment station
sensitivity: The minimum power of an incoming optical signal necessary for a receiver to be able to read the signal.
ships-in-the-night: Refers to routing algorithms that operate independently of each other.
Simple Network Management Protocol: A high-level, standards-based protocol for network management, usually used in TCP/IP networks.
single attachment concentrator: A concentrator that offers one attachment to the FDDI network and extra ports for the attachment of stations or other concentrators.
single attachment station: An FDDI station that offers one attachment to the FDDI ring.
SMT: See Station Management
SNMP: See Simple Network Management Protocol
source address filtering: A feature of some bridges whereby messages from designated source addresses are either always forwarded or always rejected.
spanning tree: A method of creating a loop-free logical topology on an extended LAN. Formation of a spanning tree topology for transmission of messages across bridges is based on the industry-standard spanning tree algorithm defined in IEEE 802.1d.
station: An addressable node on an FDDI ring capable of transmitting, receiving, and repeating data. A station has one instance of SMT, at least one instance of PHY and PMD, and an optional MAC entity.
Station Management: The entity within a station on the FDDI ring that monitors station activity and exercises overall control of station activity.
step index: A characteristic of fiber optic cable in which the refractive index of the core material is uniform. A sudden change (or step) of the refractive index exists at the core-cladding boundary.
stuck beacon: The condition in which a station is locked into sending continuous beacon frames.
symbol: The smallest signaling element used by the MAC sublayer. The symbol set consists of sixteen data symbols and sixteen nondata symbols. Each symbol corresponds to a specific sequence of code bits (code group) to be transmitted by the Physical Layer.
target token rotation time: The value used by the MAC receiver to time the operations of the MAC layer. The TTRT value varies, depending on whether or not the ring is operational.
TCC: See telecommunications closet
telecommunications closet: An element in the EIA/TIA 568 Commercial Building Wiring standard. The location for cross-connects and the active, passive, and support components that provide the connection between the building backbone cabling and the horizontal wiring.
Telecommunications Industries Association: TIA was formed from the Electronics Industry Association (EIA) and the United States Telecommunications Buyers Association. The TIA fiber optic committees develop and publish testing standards and specifications for fiber optic components and systems.
TIA: See Telecommunications Industries Association
timed-token protocol: The rules defining how the target token rotation time is set, the length of time a station can hold the token, and how the ring is initialized.
token: A bit pattern consisting of a unique symbol sequence that circulates around the ring following a data transmission. The token grants stations the right to transmit.
token holding timer: A timer that controls the amount of time a station may hold the token in order to transmit asynchronous frames.
token passing: A method whereby each node, in turn, receives and passes on the right to use the channel. The nodes are usually configured in a logical ring.
token rotation timer: A clock that times the period between the receipt of tokens.
trace: A diagnostic process used to recover from a stuck-beacon condition. The fault is localized to the beaconing MAC and its upstream neighbor MAC.
trailer: See header
translating bridge: A nonproprietary MAC layer device used to connect similar and dissimilar LANs according to IEEE 802.1d rules.
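The timed-token entries above (target token rotation time, token holding timer, token rotation timer) interact in a simple way that a short sketch can make concrete. This is a simplified illustration only, not the full FDDI MAC behavior, and the function name and millisecond values are assumptions for the example:

```python
def async_holding_time(ttrt, observed_rotation):
    """A station may transmit asynchronous frames only while the token is
    'early': the holding allowance is the target token rotation time minus
    the rotation time actually measured by the token rotation timer
    (never negative -- a late token permits no asynchronous traffic)."""
    return max(0.0, ttrt - observed_rotation)

print(async_holding_time(8.0, 5.5))   # token arrived 2.5 ms early -> may hold 2.5 ms
print(async_holding_time(8.0, 9.0))   # token arrived late -> 0.0, no asynchronous sending
```

This is the mechanism by which the timed-token protocol bounds the ring's total rotation time: stations that find the token late give up their asynchronous sending opportunity, pulling the rotation back toward the target.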
transmit: The act of a station generating a frame, token, or control sequence and placing it on the ring for receipt by the next (downstream) station.
TRT: See token rotation timer

TTRT: See target token rotation time
TVX: See valid transmission timer
upstream: A term that refers to the relative position of two stations in a ring. A station is upstream of its neighbor if it receives the token before its neighbor receives the token.
Utility Corridor: A system of tunnels interconnecting buildings on the University campus.
valid transmission timer: A timer that times the period between valid transmissions on the ring; used to detect excessive ring noise, token loss, and other faults.
WAN: See wide area network
wide area network: A network spanning a large geographical area that provides communications among devices on a regional, national, or international basis.
window: As it relates to a fiber optic cable, a window is a wavelength region of relatively high transmittance, surrounded by regions of low transmittance.
wiring concentrator: See concentrator
workgroup: A network configuration characterized by a small number of attached devices spread over a limited geographical area.
_______________________________
1 A Standard Network Connection allows the attachment of a token ring or Ethernet LAN at speeds up to a maximum of 16 megabits per second. Speeds in excess of 16 megabits per second are considered a Special Network Connection and will be priced on a cost-recovery basis.
2 This formalizes the existing procedures for determining rates for Ethernet Backbone connections so that they also accommodate FDDI connections. This is not a Phase 1 contingent task.
3 This formalizes current CNS Data Communications & Networks procedures, and is not a Phase 1 contingent task.
4 This is a Phase 2 task that is shown here for completeness.