
Computing and Network Services Technology Plan: An Architecture for Distributed Processing

CAUSE INFORMATION RESOURCES LIBRARY

The attached document is provided through the CAUSE Information Resources Library. As part of the CAUSE Information Resources Program, the Library provides CAUSE members access to a collection of information related to the development, use, management, and evaluation of information resources (technology, services, and information) in higher education. Most of the documents have not been formally published and thus are not in general distribution. Statements of fact or opinion in the attached document are made on the responsibility of the author(s) alone and do not imply an opinion on the part of the CAUSE Board of Directors, officers, staff, or membership.

This document was contributed by the named organization to the CAUSE Information Resources Library. It is the intellectual property of the author(s). Permission to copy or disseminate all or part of this material is granted provided that the copies are not made or distributed for commercial advantage, that the title and organization that submitted the document appear, and that notice is given that this document was obtained from the CAUSE Information Resources Library. To copy or disseminate otherwise, or to republish in any form, requires written permission from the contributing organization.

For further information: CAUSE, 4840 Pearl East Circle, Suite 302E, Boulder, CO 80301; 303-449-4430; e-mail. To order a hard copy of this document, contact CAUSE or send e-mail to

University of Alberta Computing & Network Services Technology Plan

An Architecture for Distributed Processing

Computing and Network Services (CNS)
Room 103 General Services Bldg.
Telephone 492-9327

Abstract: This is one of a series of documents detailing the various components of a Technology Plan. This document suggests an architecture for implementing distributed information processing using client/server platforms. The content will remain under review as work continues, leading to a composite Technology Plan scheduled for completion in June 1993.

An Architecture for Distributed Processing

The development of microcomputer technology has led to the dispersal of processing as computing equipment has been distributed away from central sites. As well as gains in function, this dispersal has created needs for interoperability and portability of applications, and has led to the recognition that processing itself must become distributed. This need is being met by developments in the definition of standards and by advances in the distributed computing capabilities provided by software and networks. One of these advances is support for a concept known as client/server processing. Client/server technology is still developing and presents some challenges, but, as outlined in this section, we propose that it be the basis of a system architecture that will meet the campus needs for truly distributed processing.

The Dispersal of Processing on Campus

In the centralized computing model that prevailed through the end of the 1980s, users in the various departments across the University campus depended upon the central computing department for nearly all their administrative data processing needs. However, in some environments users experienced frustration with the centralized model. This is particularly true where the processes do not require sharing data across the campus. In such cases, central support for their projects seems to be a low priority, sometimes leading to years of waiting for the development of a new application.
By the mid-1980s the microcomputer offered users an alternative to waiting for central support. Microcomputers were cheap enough to fit a departmental budget and powerful enough to support a group of users. Consequently, a number of departments installed local area networks to interconnect their microcomputers and developed local applications. Over the last four years the short list of such departments includes:

* Budget & Statistics
* Central Stores
* Chemical Engineering
* Computing Science
* Dentistry
* Housing & Food Services
* Libraries
* Materials Management
* Medicine
* Parking
* Physical Education
* Space Physics
* Student Fees
* and others

Meanwhile, other departments continue to rely on central computing services for their processing, primarily involving the large enterprise-wide information systems.

The transition of work to the microcomputer in the mid-1980s proved both beneficial and limiting for departmental computing operations. On the one hand, microcomputers dramatically improved productivity, as they had enough memory for local intelligence, disk storage, excellent graphics and interactive capabilities. On the other hand, sharing data proved more difficult because of transport and application incompatibilities; this became especially true as microcomputers grew more numerous and applications crossed departmental boundaries and microcomputers of various types. Communicating with another department, whether standalone or connected via the network, is often difficult.

Moving from Dispersed to Distributed Processing

To address the needs of this environment, University users are increasingly insisting that their applications work together, that they interoperate, as part of an overall educational support process. As the University invests in moving applications to the new platforms, those investments must be adaptable to changing needs. Portability of data, people skills and applications is required to share information effectively, preserve skills, and reduce application development costs, development time and the resulting processing costs, while increasing functionality. To achieve these goals of interoperability and portability, the University requires an open information infrastructure. Through industry standards and guidelines, the open information infrastructure becomes the mechanism that allows the effective flow of information across the processes of the University, supporting the changing dimensions of today's working environment and the quality of the student's experience with the campus.
This infrastructure, much like that of a transportation system, provides the building blocks and protocols that not only enable continued operation, but also enable the deployment of new technology based upon the needs of a particular user. A planned and effective implementation of enterprise-wide distributed processing will provide a number of benefits:

* The productivity of information users will improve, as staff will be able to do their jobs with increased independence while still having access to more enterprise-wide computing resources and information.
* It will be easier to integrate the portfolio of existing applications. That is, departments that develop the necessary skills and facilities will be better positioned to utilize the islands of information throughout the enterprise based on common data standards.
* Information relevant to a departmental function will be quicker and cheaper to retrieve when that information is stored locally.
* Highly productive application packages with good user interfaces, operating on microcomputer platforms, can be used to process and present information. These packages include desktop publishing, spreadsheets, relational database products, printing, electronic mail, and so on.
* The development time and costs for computer applications will fall as the use of enhanced state-of-the-art tools available on microcomputer platforms increases.
* Efficient use of microcomputer processing capacity will offload processing from the System/370 Architecture computers, thereby allowing the migration from the System/370 platforms to be progressive and orderly. (Reference: The Future of the System/370 Architecture on the University of Alberta Campus.)
* Software innovations will be integrated into applications with greater ease. The emerging, significant and visible breakthroughs in information processing are being introduced at the client/server and local area network levels.
* As a result of all of the above, the University's educational support and business processes will run more efficiently.

Because enterprise-wide distributed processing is a new and developing processing methodology, it requires careful planning, appropriate hardware, software, data communications, tools, and skilled system integrators. As the methodology and the technologies it depends on mature, a number of challenges will need to be surmounted, including:

* Suitable off-the-shelf application software may not exist for particular client/server processing configurations.
* A change of mind set is required of all constituencies to effectively exploit distributed client/server technology while re-engineering the University's education support and business processes.
* Industry standards that support client/server processing are immature and still evolving, albeit at a rapid pace.
* Development tools for the design, implementation and maintenance of client/server applications are still evolving.
* The operating environment for client/server applications may not be stable, owing to the many rapid changes that could be applied across the participating cooperative platforms.
* The quality of information derived from numerous distributed databases may be compromised if the data in them is not synchronized correctly. For example, a database may contain outdated or incomplete information relative to other databases.
* The network and processors may not have the capacity, speed or memory to support a particular client/server application.
* Departments may be reluctant to accept responsibility for the administration and maintenance of distributed client/server applications.
* Technical support, training and software distribution will be more difficult to manage as more processing occurs in geographically dispersed locations.
* Effective security, backup/restore, software distribution and support infrastructures must be developed.

A distributed processing strategy that can provide service on a University-wide level must remain continually cognizant that:

* The technology is utilized for appropriate purposes.
* Protection is provided for existing investments that have been made in networking and application technology.
* A non-proprietary approach in product procurement is important.
* Growth attendant with the strategy must be non-disruptive.
* There must be a high degree of resource sharing.
* There must be a flexibility of design that will not impede future technology opportunity.
* To realize economies and interoperability, there must be greater adherence to industry communication and data processing standards.
* There must be support for diverse data types and services.
* The ultimate goal is University processes that run more efficiently.

Distributed Computing: Definitions and Models

Part of the difficulty in discussing the future computing environment on this campus is that the terminology in this area means very different things to different people. To some, distributed computing evokes the specter of each department providing its own mainframe; to others, it means the collapse of intra-campus connectivity. Thus, it is often difficult to engage in rational discourse, because words have become encumbered and overloaded with meanings that are far from their intent. In particular, it is critical to differentiate distributed computing, which is a technology model for providing computing resources, from decentralized computing, which is a management and operational model for providing those services. Distributed computing allows for a mixture of centrally provided and locally provided services; not all services are necessarily distributed.

Distributed computing refers to a technology model where computing, instead of being concentrated on a mainframe computer, is spread across more specialized systems which are often dedicated to a specific task. Some of these systems are generally referred to as servers, because they are designed to provide specific services to users of a distributed environment. Servers might include file servers, print servers, tape servers, high-performance compute servers, and others.
In a distributed computing environment, users typically sit at a desktop microcomputer (PC, Macintosh, Unix workstation), usually referred to as a client, doing some computation locally but also relying on these more specialized servers to provide additional facilities needed by the clients. Client/server computing is the most common framework for distributed computing. Underlying all of this is an interconnected network environment that provides campus, national, and international connectivity.

The future computing environment will bring qualitative changes to the user's experience of information technology. A key aspect is the integration into the user's desktop work environment of access to multiple services now provided remotely. Today, users must decide where (on which host) a function will be performed and then select what is to be done. Usually, users must leave the comfortable environments of their desktops to access these services, thus interrupting their work flow. An example is signing on to VM PROFS to retrieve mail, something that many users do several times a day. In the future computing environment, users will select functions to be performed through their desktop

environments without disrupting their work flow. Once a function is selected, users may determine where that function is to be performed, although in many cases this will not matter. Another important aspect is the shift from terminal access to using client/server software to manipulate, view, and modify data. This will allow users to take advantage of the computational power increasingly available from the desktop microcomputer. Users will not simply transport their old ways of interacting with data to their desktop platforms; they will also adopt computation-intensive, interactive methods of data analysis that often were not practical in a mainframe-based environment. For example, statistical information need no longer be displayed as tables of numbers; rather, it can be displayed as a graphical structure with which the user can interact. Such visualization software may become a mainstay of data analysis and of the teaching of statistics.

There is very little disagreement that the future of computing, both on this campus and elsewhere, will be distributed. The price of providing information technology services relative to the performance delivered is increasingly making it difficult to justify mainframe solutions. For example, the cost of providing high-quality word processing services on a mainframe compared to personal systems would be prohibitive. Since the economic trends argue compellingly in favor of a more distributed solution, the important discussion on this campus is how to implement a distributed computing model that meets as many needs as possible. Also, as more work is done at desktop microcomputers, simplicity of use and ease of integration will require that services such as printing and file access be made available to these desktop users as network-based services, and not on an isolated mainframe. Distributed computing does not require that acquisition, financing, management, operation, or support necessarily be decentralized.
It is almost axiomatic that in any distributed computing environment some services will be provided centrally and some locally. Thus, the discussion increasingly needs to be framed in terms of which services must be provided centrally versus locally, all within the context of a distributed computing environment. In practice, certain services, such as mail and conferencing, may be viewed as so critical to the infrastructure that they need to be provided centrally, even though they will be accessed in a client/server environment as opposed to a mainframe environment. Even within a given service, such as mail, combinations of management options are possible. For example, a likely scenario for this campus is that while there will be locally run mail servers serving large parts of the campus, central organizations such as CNS will continue to provide mail servers for those portions of the campus which do not provide their own. Nevertheless, CNS must provide cross-mail services between servers and access to international mail services. Similar scenarios are likely for services such as conferencing,

printing, and tapes. For environments where distributed computing is already the norm, it is not uncommon to find significant centralization of many resources and services. For example, the Budget & Statistics and Materials Management local area network installations are by most measures fully distributed computing environments, yet they are still quite centralized in terms of operations and support, albeit that the centralization is now at the department level instead of the university level. All hardware and software maintenance, and support for file, print, and compute servers, are provided by a central organization within the department, down to providing for the centralized backup of files.

The Role of the Client/Server Model in a Distributed Processing Strategy

The client/server model emerged as a technique for solving the problem of sharing enterprise-wide information. In contrast to the master/slave hierarchy prevalent in the centralized computing model, the client/server model describes a peer-to-peer relationship. The client represents the user's computing system, usually a microcomputer, while the server represents the system, of any type or size, where some or all of the data resides. The client issues a request for a file, a program, or data, and the server responds with the resource requested. The entire event is transparent to the user, who accesses the information as if it were local, even though the data may well be stored across the enterprise on a remote server. A system can function as either a client or a server, or both, depending on where the information resides, with the owner of the information acting as the server. Thus information can be shared bidirectionally across the entire network, enabling users in different locations to work together and share each other's ideas and efforts.
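The request/response exchange described above can be sketched in a few lines of modern code. This is a minimal illustration only, not part of the campus design: the resource name, port handling, and single-request server are hypothetical simplifications, and a real deployment would add authentication, error handling, and a richer protocol.

```python
# Minimal client/server sketch: the client requests a named resource
# and the server (the data owner) responds with it over the network.
import socket
import threading

FILES = {"budget.dat": b"1993 budget figures"}  # data held by the server


def serve_once(sock):
    """Accept one connection, read a resource name, send the resource."""
    conn, _ = sock.accept()
    with conn:
        name = conn.recv(1024).decode()       # the client's request
        conn.sendall(FILES.get(name, b"?"))   # the server's response


server = socket.socket()
server.bind(("127.0.0.1", 0))                 # any free local port
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=serve_once, args=(server,)).start()

client = socket.socket()
client.connect(("127.0.0.1", port))
client.sendall(b"budget.dat")                 # ask for a resource by name
data = client.recv(1024)                      # arrives as if it were local
client.close()
print(data.decode())
```

To the user at the client, the exchange is transparent; only the network address distinguishes a remote resource from a local one.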
A Client/Server Architecture for the Campus

The implementation, now proceeding, of the FDDI Backbone Network on the University campus is a fundamental cornerstone of the development of enterprise-wide client/server technology. Operating at 100 megabits per second (Mbps) for high-speed data transfer, the FDDI Backbone Network supplies the bandwidth required to support the multiple and concurrent sessions required by distributed client/server technology. The adequacy of this bandwidth will be continually monitored as communication traffic increases with the introduction of a growing number of client/server applications. With an operating distance of up to 200 kilometers, the FDDI Backbone Network addresses the severe networking capacity and reach bottlenecks that will be experienced on the University campus as:

* more users are added to the campus network,
* the computing power of smaller desktop systems grows,
* the data traffic on existing campus networks increases,
* more client/server computing facilities are installed on campus,
* the use of graphics-intensive applications increases,
* more local area networks need to be interconnected, and
* complex networks span longer distances on campus.

An architecture for client/server computing must provide enterprise-wide access to applications and information, both new and existing. This access must work across the widest range of information technologies and transparently present to the user the image of a single integrated solution. The architecture must recognize the differences between:

* information that is of enterprise interest versus information that is predominantly contained within the process of a department;
* enterprise applications of interest to all users versus applications local to a department's function; and
* respect for solutions pertinent to local conditions versus accommodation in design for enterprise participation.

In the accomplished client/server model, all users within the University would be clients, and all server microcomputers installed on existing or new departmental local area networks would act as departmental application servers to the campus. Enterprise applications would be mounted on discrete servers. This is the mature form of client/server technology that today, and for the next several years, is only demonstrable in proprietary and mainly single-vendor solutions. Our model, which avoids the proprietary implementation, connects up to eight large Enterprise Servers to the FDDI Backbone Network. To provide maximum performance, each Enterprise Server is logically located adjacent to the local area networks of the administrative units that the Server supports.
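The bandwidth-monitoring concern above can be made concrete with a back-of-envelope estimate. The usable fraction and per-session demand below are purely illustrative assumptions, not measured campus figures; the point is only the shape of the calculation that capacity monitoring would repeat with real traffic data.

```python
# Rough capacity estimate for concurrent client/server sessions
# on a 100 Mbps backbone (all inputs are assumed, illustrative values).
FDDI_BANDWIDTH_MBPS = 100   # nominal FDDI data rate
usable_fraction = 0.6       # assumed share left after protocol overhead
session_kbps = 64           # assumed average demand of one session

usable_kbps = FDDI_BANDWIDTH_MBPS * 1000 * usable_fraction
sessions = int(usable_kbps // session_kbps)
print(sessions)  # approximate concurrent sessions under these assumptions
```

As graphics-intensive applications raise the per-session figure, the supportable session count falls in direct proportion, which is why continual monitoring is proposed.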
Figure 1: Campus-wide Client/Server Model

Architecturally, the Enterprise Servers must be of a size to support multiple and concurrent computing sessions, and must have a commensurate attached information storage capacity to support enterprise-wide applications, including:

Enterprise Data, typically organized in relational tables accessed through SQL queries. It is expected these tables would hold verification data, such as the Account Number & Description constants used by many applications across the campus, or summary reference data such as demographic statistics related to processes from Human Resources, the Registrar or Budget & Statistics. The currency of the tables could be maintained actively where there was compatibility between the Enterprise Server and a Departmental Application Server operating on the local area network under the responsible data owner's department. Otherwise the tables could be refreshed periodically with an attendant conversion process.

Enterprise File Libraries, housing the programs and files needed to support users in their access to enterprise data. These libraries would include the cross-platform query tools and application development tools.

Enterprise Services, supporting cross-platform applications:

* Electronic University Form processing
* Electronic data interchange
* Electronic conferencing
* Enterprise document management
* Electronic mail
* Directory of services
* Campus Wide Information System
* Data & program archives
* Mass printing, plotting, etc.
* Data storage
* Library access
* Student services
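The verification-table and periodic-refresh pattern described under Enterprise Data can be sketched as follows. The table and column names, and the sample account codes, are illustrative assumptions; the actual enterprise database product and schema are not specified by this plan.

```python
# Sketch of an enterprise verification table (account number and
# description) held in a relational store, with a periodic refresh
# that replaces the local copy from the data owner's department.
import sqlite3

db = sqlite3.connect(":memory:")  # stand-in for the Enterprise Server store
db.execute(
    "CREATE TABLE account_codes ("
    "  account_no TEXT PRIMARY KEY,"
    "  description TEXT)"
)


def refresh(rows):
    """Periodic refresh: discard the local copy, reload the owner's data."""
    db.execute("DELETE FROM account_codes")
    db.executemany("INSERT INTO account_codes VALUES (?, ?)", rows)
    db.commit()


# Data as it might arrive from the owning department's application server.
refresh([("4100", "Operating Supplies"), ("5200", "Travel")])

# Any campus application can now verify an account code with an SQL query.
desc = db.execute(
    "SELECT description FROM account_codes WHERE account_no = ?", ("5200",)
).fetchone()[0]
print(desc)
```

Where the Enterprise Server and the departmental server are compatible, the same table could instead be maintained actively, updating rows as the owner changes them rather than reloading wholesale.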

The principal function of a client/server architecture based upon a network of Enterprise Servers is to place powerful technology near its major point of use, while providing access to that technology across the whole campus with security and integrity.