30446ipcc Paper7a Vol1 Cp3part2 PDF
32 Information Technology
communicate more easily among themselves. Each user can be assigned a mail-box on the
server. Messages to other users can then be dropped into the mail-box and read by them
when they log into the network.
(v) Organizational Benefits - LANs provide numerous benefits that include reduced costs in computer hardware, software and peripherals, and a drastic reduction in the time and cost of training or re-training manpower to use the systems. In addition, LANs help managers and executives to communicate with each other more easily and faster, without any logistical constraints. Information flow also becomes a lot smoother, with various departments having the ability to access or request information and data pertinent to them.
(vi) Data management benefits - Since data is located centrally on the server, it becomes much easier to manage it, as well as back it up. No files need to be transferred between users on floppy disks.
(vii) Software cost and up-gradation - If the organization uses licensed software, purchasing a network version can save a lot of money, since there is no need to buy multiple copies of the same software for every machine in the organization. Moreover, software upgrades are much easier, as any given package is stored centrally on the server.
3.8.3 LAN Requirements: There are certain features that every LAN should have and users
would do well to keep note of these when they decide to implement their own network. These
features essentially involve hardware and software components. Broadly, these can be
defined as follows:
(i) Compatibility - A local area network operating system must provide a layer of
compatibility at the software level so that software can be easily written and widely distributed.
A LAN operating system must be flexible, which means that it must support a large variety of
hardware. Novell NetWare is a network operating system that provides these features and has today become an industry standard.
(ii) Internetworking - Bridging of different LANs together is one of the most important
requirements of any LAN. Users should be able to access resources from all workstations on
the bridged network in a transparent way; no special commands should be required to cross the
bridge. A network operating system must be hardware independent, providing the same user
interface irrespective of the hardware.
(iii) Growth Path and Modularity - One of the most important requirements of a LAN is its
modularity. It should be easy to convert a set of PCs into a LAN, which can then grow in size simply
by adding additional workstations. If more storage is required, one should be able to add
another hard disk drive, or another server.
(iv) System Reliability and Maintenance - All computers are prone to system lockups,
power failures and other catastrophes. If a centralized processing system goes down, all users
connected to it are left without a machine to work on. Such a situation can arise even in a
distributed or local area network system. However, a LAN operating system should be
powerful enough to withstand accidents. In fact, Novell's SFT Level I and Level II include fault-tolerance as a feature.
3.8.4 Components of a LAN: A typical local area network running under Novell NetWare has
five basic components that make up the network. These are as follows:
- File Servers
- Network operating system
- Personal Computers, Workstations or Nodes
- Network Interface Cards
- Network Cabling
(i) File Server - A network file server is a computer system used for the purpose of managing the file system, servicing the network printers, handling network communications, and other functions. A server may be dedicated, in which case all of its processing power is allocated to network functions, or it may be non-dedicated, which means that a part of the server's capacity may also be used as a workstation or DOS-based system.
(ii) The Network operating system - It is loaded onto the server's hard disk along with the system management tools and user utilities. When the system is restarted, NetWare boots and takes control of the server. At this point, DOS or Windows is no longer in control of the network drive, since it is running the network operating system, NetWare; however, most DOS/Windows programs can be run as normal. No processing is done on the server, and
hence it is called a Passive Device. The choice between a dedicated and a non-dedicated network server is basically a trade-off between cost, performance, and ease of operation of the network.
The larger the network, the more important it becomes to have a high performance server.
Larger amounts of RAM are required to support disk caches and printer queues (which are created because the same hard disk and printers are shared by a number of nodes on the network).
The server should be matched with anticipated throughput as closely as possible. While most
IBM systems are satisfactory for NetWare, a Pentium system is preferable for better overall
performance of the network.
(iii) Workstations - Workstations are attached to the server through the network interface
card and the cabling. Workstations are normally intelligent systems, such as the IBM PC. The
concept of distributed processing relies on the fact that personal computers attached to the
networks perform their own processing after loading programs and data from the server.
Hence, a workstation is called an Active Device on the network. After processing, files are
stored back on the server where they can be used by other workstations.
(iv) Network Interface Card (NIC) - As discussed earlier, every device connected to a LAN
needs a Network interface card (NIC) to plug into the LAN. For example, a PC may have an
Ethernet card installed in it to connect to an Ethernet LAN.
(v) Network Cabling - Once the server, workstations and network interface cards are in
place, network cabling is used to connect everything together. The most popular types of network cable are shielded twisted-pair, co-axial and fiber optic cables.
3.8.5 Wireless LAN (WLAN): Wireless networks do not require any physical media or cables
for data transmission. Radio waves and infrared signals are used for communication. Radio waves are the most commonly used transmission medium in wireless Local Area Networks.
A Wireless Local Area Network (LAN) is a flexible data communications system implemented
as an extension to a wired LAN as shown in Fig 3.8.5.1. Using radio frequency (RF)
technology, wireless LANs transmit and receive data over the air, minimizing the need for
wired connections. With wireless LANs, users can access shared information without any plug-
in or without any physical connection with wired infrastructure. Flexibility and mobility make
wireless LANs both effective extensions and attractive alternatives to wired networks. Wireless
LANs provide all the functionality of wired LANs, without the physical constraints of the wire
itself. Wireless LAN configurations range from simple peer-to-peer topologies to complex
networks offering distributed data connectivity and roaming.
How do wireless LANs work?
Wireless LANs use electromagnetic airwaves (radio or infrared) to communicate information
from one point to another without relying on any physical connection. Radio waves are often referred to as radio carriers because they simply perform the function of delivering a signal to a remote receiver. The
data being transmitted is superimposed on the radio carrier so that it can be accurately
extracted at the receiving end.
* www.wikipedia.org
End users access the wireless LAN through wireless-LAN adapters, which are
implemented as PC cards in notebook or palmtop computers, as cards in desktop computers,
or integrated within hand-held computers. Wireless LAN adapters provide an interface
between the client network operating system (NOS) and the airwaves via an antenna.
3.9 Client / Server Technology
Recently, many organizations have been adopting a form of distributed processing called client /
server computing. It can be defined as a form of shared, or distributed, computing in which tasks
and computing power are split between servers and clients (usually workstations or personal
computers). Servers store and process data common to users across the enterprise; these data can then be accessed by client systems. In this section, we will discuss various aspects of
client/server technology. But before that, let us first look at the characteristics of the traditional
computing models and various limitations that led to the client/ server computing.
3.9.1 Limitations of the traditional computing models
(i) Mainframe architecture: With mainframe software architectures, all intelligence is within
the central host computer (processor). Users interact with the host through a dumb terminal that captures keystrokes and sends that information to the host. Centralized host-based computing models allow many users to share a single computer's applications, databases, and
peripherals. Mainframe software architectures are not tied to a hardware platform. User
interaction can be done using PCs and UNIX workstations.
A limitation of mainframe software architectures is that they do not easily support graphical user
interfaces or access to multiple databases from geographically dispersed sites. They cost literally
thousands of times more than PCs, but they certainly do not do thousands of times more work.
(ii) Personal Computers: With introduction of the PC and its operating system,
independent-computing workstations quickly became common. Disconnected, independent
personal computing models allow processing loads to be removed from a central computer.
Besides not being able to share data, disconnected personal workstation users cannot share expensive resources that mainframe system users can share, such as disk drives, printers, modems, and other peripheral computing devices. The data- and resource-sharing problems of independent PCs and workstations quickly led to the birth of the network/file server computing model, which links PCs and workstations together in a Local Area Network (LAN).
(iii) File sharing architecture: The original PC networks were based on file sharing
architectures, where the server downloads files from the shared location to the desktop
environment. The requested user job is then run in the desktop environment.
The traditional file server architecture has many disadvantages especially with the advent of
less expensive but more powerful computer hardware. The server directs the data while the
workstation processes the directed data. Essentially this is a dumb server-smart workstation
relationship. The server will send the entire file over the network even though the workstation
only requires a few records in the file to satisfy the information request. In addition, an easy-to-use graphical user interface (GUI) added to this model simply adds to the network traffic.
The implementations of client/server technology include online banking applications, internal call centre applications, applications for end-users, e-commerce online shopping pages, intranet applications, financial and inventory applications based on client/server technology, telecommunication based on Internet technologies, and many more.
3.9.4 Characteristics of Client / Server Technology: There are ten characteristics that
reflect the key features of a client / server system. These ten characteristics are as follows:
1. Client/server architecture consists of a client process and a server process that can be
distinguished from each other.
2. The client portion and the server portions can operate on separate computer platforms.
3. Either the client platform or the server platform can be upgraded without having to
upgrade the other platform.
4. The server is able to service multiple clients concurrently; in some client/server systems,
clients can access multiple servers.
5. The client/server system includes some sort of networking capability.
6. A significant portion of the application logic resides at the client end.
7. Action is usually initiated at the client end, not the server end.
8. A user-friendly graphical user interface (GUI) generally resides at the client end.
9. A structured query language (SQL) capability is characteristic of the majority of client/
server systems.
10. The database server should provide data protection and security.
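Several of these characteristics (separate client and server processes, networking capability, action initiated at the client end) can be sketched in a few lines with Python's standard socket library. The port assignment, message, and uppercasing "application logic" here are arbitrary illustrations, not part of any real system:

```python
import socket
import threading

def run_server(sock):
    """Server process: wait for one client, receive a request, send a reply."""
    conn, _addr = sock.accept()
    with conn:
        request = conn.recv(1024).decode()
        # The server-side logic here is trivially uppercasing the request.
        conn.sendall(request.upper().encode())

def demo():
    # Binding to port 0 asks the OS for any free port, keeping the sketch self-contained.
    server_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server_sock.bind(("127.0.0.1", 0))
    server_sock.listen(1)
    port = server_sock.getsockname()[1]

    t = threading.Thread(target=run_server, args=(server_sock,))
    t.start()

    # The client initiates the action (characteristic 7 above).
    with socket.create_connection(("127.0.0.1", port)) as client:
        client.sendall(b"hello server")
        reply = client.recv(1024).decode()

    t.join()
    server_sock.close()
    return reply
```

In a real deployment the two processes would run on separate computer platforms (characteristic 2); here they share one machine only to keep the example runnable.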
3.9.5 Benefits of the Client/Server Technology: Client/server systems have been hailed as bringing tremendous benefits to new users, especially users of mainframe systems. Consequently, many businesses have changed, or will change in the near future, from mainframe (or PC) systems to client/server systems. Client/server has become the IT solution of choice among the country's largest corporations. In fact, the whole transition that a change to client/server involves can benefit a company's long-run strategy, as discussed point-wise below:
- It allows work to be performed easily by the user and reduces the total cost of ownership by increasing productivity.
- It increases end users' and developers' productivity by providing them the required software and hardware resources in time.
- Less manpower is required to maintain a client/server application than mainframe systems, and the expenses of hardware and network in the client/server environment are less than those in the mainframe environment.
- Users are more productive because they have easy access to data, and because applications can be divided among many different users, efficiency is at its highest.
- Client/server applications make organizations more effective by allowing them to port applications simply and efficiently.
- It reduces the cost of the client's computer: the server stores data for the clients, so the clients do not need large amounts of disk space and less expensive network computers can be used instead.
- It reduces the cost of purchasing, installing, and upgrading software programs and applications on each client's machine: delivery and maintenance are handled from one central point, the server.
- The management's control over the organization is increased.
- It is often easier to implement a client/server system than to change a legacy application.
- It leads to new technology and the move to rapid application development, such as object-oriented technology.
- It gives long-term cost benefits for development and support.
- It is easy to add new hardware to support new systems, such as document imaging and video teleconferencing, which would not be feasible or cost-efficient in a mainframe environment.
- It can implement multiple vendor software tools for each application.
3.9.6 Components of Client Server Architecture (as shown in Fig 3.9.6.1)
3.10 Virtual Private Network (VPN)
Instead of using a dedicated, real-world connection such as a leased line, a VPN uses "virtual" connections routed through the Internet from the company's private network to the remote site or employee. There are two common types of VPN.
(a) Remote-access VPN: Also called a Virtual Private Dial-up Network (VPDN), this is a user-to-LAN
connection used by a company that has employees who need to connect to the private
network from various remote locations. Typically, a corporation that wishes to set up a large
remote-access VPN will outsource to an Enterprise Service Provider (ESP). The ESP sets
up a Network Access Server (NAS) and provides the remote users with desktop client
software for their computers. The telecommuters can then dial a toll-free number to reach the
NAS and use their VPN client software to access the corporate network.
A good example of a company that needs a remote-access VPN would be a large firm with
hundreds of sales people in the field. Remote-access VPNs permit secure, encrypted
connections between a company's private network and remote users through a third-party
service provider.
(b) Site-to-Site VPN: Through the use of dedicated equipment and large-scale encryption, a
company can connect multiple fixed sites over a public network such as the Internet. Site-to-
site VPNs can be one of two types:
Intranet-based - If a company has one or more remote locations that they wish to join in
a single private network, they can create an intranet VPN to connect LAN to LAN.
Extranet-based - When a company has a close relationship with another company (for
example, a partner, supplier or customer), they can build an extranet VPN that connects
LAN to LAN, and allows all of the various companies to work in a shared environment.
Different examples of VPN are shown in Fig 3.10.1.
* www.49-nt-reseller-hosting.com
3.12.1 Database Server: A database server is a computer program that provides database services to other programs or computers, as defined by the client/server model. Database management systems (DBMSs) frequently provide database server functionality, and some DBMSs (e.g., MySQL) rely exclusively on the client-server model for database access.
Database servers offer real potential for remote database access and distributed databases.
Because the database server only returns selected database record(s) to the client machine
(instead of large blocks of data), remote access over relatively slow telephone lines can
provide acceptable performance. In addition, a client computer can make requests of multiple
servers regardless of physical location.
In a master-slave model, database master servers are central and primary locations of data
while database slave servers are synchronized backups of the master acting as proxies.
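The point above, that a database server returns only the selected record(s) rather than large blocks of data, can be illustrated with Python's built-in sqlite3 module standing in for the DBMS; the table name and sample rows are invented for the example:

```python
import sqlite3

def selected_records():
    # An in-memory database stands in for the remote database server.
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE accounts (id INTEGER, branch TEXT, balance REAL)")
    rows = [(1, "north", 100.0), (2, "south", 250.0), (3, "north", 75.0)]
    db.executemany("INSERT INTO accounts VALUES (?, ?, ?)", rows)

    # The client's query travels to the server; only the matching rows travel
    # back, not the whole accounts "file" as in a file-sharing architecture.
    result = db.execute(
        "SELECT id, balance FROM accounts WHERE branch = ?", ("north",)
    ).fetchall()
    db.close()
    return result
```

Over a slow link, shipping two matching rows instead of the full table is exactly why the text says remote access can still give acceptable performance.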
3.12.2 Application Server: An application server is a server program that resides in the
server (computer) and provides the business logic for the application program. The server can be a part of the network, more precisely a part of a distributed network. The server
program is a program that provides its services to the client program that resides either in the
same computer or on another computer connected through the network.
Application servers are mainly used in Web-based applications that have a 3-tier architecture.
- First Tier: Front End - Browser (Thin Client) - a GUI interface lying at the client/workstation.
- Second Tier: Middle Tier - Application Server - a set of application programs.
- Third Tier: Back End - Database Server.
The application server is the second/middle tier of the three-tier architecture. In other words, application servers are now an integral part of the three-tier architecture.
* www.googlecode.com
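The three tiers described above can be sketched as three separate functions; the function names, the in-memory price table, and the 18% tax rate are purely illustrative assumptions:

```python
# Third Tier: Back End - a trivial in-memory stand-in for a database server.
PRICES = {"pen": 10.0, "book": 150.0}

def database_server(item):
    return PRICES.get(item)

# Second Tier: Middle Tier - the application server holds the business logic
# (here, adding a hypothetical 18% tax to the stored price).
def application_server(item):
    price = database_server(item)
    if price is None:
        return "item not found"
    return round(price * 1.18, 2)

# First Tier: Front End - the thin client only formats the result for display.
def browser_client(item):
    return f"{item}: {application_server(item)}"
```

Note that the client never touches the data store directly; every request flows through the middle tier, which is the defining property of the three-tier arrangement.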
3.12.3 Print Server: Print server provides shared access to printers. Most LAN operating
systems provide print service. Print service can run on a file server or on one or more
separate print server machines. Non-file server print servers can be dedicated to the task of
print service, or they can be non-dedicated workstations.
The disadvantages of using workstations as print servers are similar to the disadvantages of
using file servers as workstations. The workstation may run a little slower when print services
are being used, a user could shut down the server without warning, or an application could
lock up the server. The consequences of a lock-up or shut-down on a print server, however,
are usually less severe than the consequences of locking up a file server. The time involved in
dealing with these problems, however, can be costly.
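The core behaviour of a print server, spooling jobs arriving from many workstations and feeding them to one printer in order, can be sketched with Python's queue module; the job format is invented for the example:

```python
import queue

def print_server(jobs):
    """Spool (workstation, document) jobs in arrival order and 'print' them."""
    spool = queue.Queue()
    for workstation, document in jobs:
        spool.put((workstation, document))

    printed = []
    while not spool.empty():
        workstation, document = spool.get()
        # A real print server would drive the printer here; this sketch
        # just records the order in which jobs reach the printer.
        printed.append(f"{workstation}:{document}")
    return printed
```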
3.12.4 Transaction Server: Microsoft Transaction Server (MTS) is an integral part of
Windows NT, and is installed by default as part of the operating system in NT5. It is a service
in much the same way as Internet Information Server or the File and Print services that we
now take for granted. In other words, it is part of the system that is available in the background
whenever one of our applications requires it.
Control and configuration of MTS is via either a snap-in to the Microsoft Management
Console, or through the HTML administration pages that are included with MTS. This is very
similar to the interface provided for Internet Information Server 4, and gives an integrated
management function that is useful when building and setting up distributed applications.
What Does Transaction Server Do? To understand what MTS is and what it does, we need
to first make one very important point clear. This software should really have been named
Microsoft Component Server, not Microsoft Transaction Server. MTS is all about managing the
way applications use components, and not just about managing transactions. Yes,
transactions are a big part of many applications we write, and MTS can help to manage these, but MTS also provides a very useful service for applications that don't use transactions at all. To be able to define MTS accurately, we first need to understand what goes on inside it
in the most basic way.
3.12.5 Internet Server: An Internet server is a special computer, connected to the Internet, that shares information with users across the world. The server is built with high-quality components and is housed in a data centre to ensure better Internet connectivity. For this purpose, specific web server software has to be installed and configured according to the given platform. The various types of Internet servers are as follows.
(i) File server: A file server, one of the simplest servers, manages requests from clients for
files stored on the server's local disk. A central file server permits groups and users to share
and access data in multiple ways. Central file servers are backed up regularly and
administrators may implement disk space quotas for each user or group of users.
For example, using a certain software client, PC users are able to "mount" remote UNIX
server file systems. As a result, the remote network file system appears to be a local hard
drive on the user's PC.
(ii) Mail server: A mail server is the most efficient way to receive and store electronic mail
messages for a community of users. A central mail server runs 24 hours a day. The mail
server can also provide a global email directory for all community and school users, as well as
email gateway and relay services for all other mail servers in the district. In such a scenario,
user email boxes would be maintained on a separate email server located at each school.
Example: "Eudora" is a powerful cross-platform email client that receives incoming mail
messages from and sends outgoing mail messages to a mail server.
(iii) DNS server: Domain Name Service (DNS) is an Internet-wide distributed database
system that documents and distributes network-specific information, such as the associated IP
address for a host name, and vice versa. The host that stores this database is a name server.
The library routines that query the name server, interpret the response and return the
information to the program that requested it are resolvers.
For example, to determine the location of a remote computer on the Internet, communications
software applications (such as NCSA Telnet) use resolver library routines to query DNS for the
remote computer's IP address.
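The forward (host name to IP address) and reverse lookups that DNS provides can be sketched with a small in-memory table standing in for the distributed database; the host names are invented, and the addresses come from the 192.0.2.0/24 range reserved for documentation:

```python
# A toy zone table standing in for the distributed DNS database.
ZONE = {
    "www.example.edu": "192.0.2.10",
    "mail.example.edu": "192.0.2.25",
}
REVERSE = {ip: name for name, ip in ZONE.items()}

def resolve(host):
    """Forward lookup: host name to IP address."""
    return ZONE.get(host, "NXDOMAIN")

def reverse_resolve(ip):
    """Reverse lookup: IP address to host name."""
    return REVERSE.get(ip, "NXDOMAIN")
```

A real resolver would query a hierarchy of name servers rather than one local table, but the interface, name in, address out (and vice versa), is the same.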
(iv) Gopher server: Gopher is an Internet application that uses multiple Gopher servers to
locate images, applications, and files stored on various servers on the Internet. Gopher offers
menu choices to prompt users for information that interests them, and then establishes the
necessary network connections to obtain the resource. For example, "Veronica" is a Gopher
application that searches databases of the file contents of world-wide Gopher servers to help
locate Gopher resources.
(v) Web server: The World Wide Web (WWW) is a very popular Internet source of information.
Web browsers present information to the user in hypertext format. When the user selects a word or
phrase that a Web page's author has established as a hypertext link, the Web browser queries
another Web server or file to move to another Web page related to the link. For example, "Internet
Explorer" is a Web browser which queries Web servers on the Internet.
(vi) FTP server: File Transfer Protocol (FTP) is an Internet-wide standard for distribution of
files from one computer to another. The computer that stores files and makes them available
to others is the server. Client software is used to retrieve the files from the server. The two most common ways to transfer files are with anonymous FTP, where anyone can retrieve files from or place files on a specific site, and logged file transfers, where an individual must log into the FTP server with an ID and password.
For example, Merit Network, Inc. makes network configuration files such as Domain Name
Registration templates available for anonymous FTP on ftp.merit.edu.
(vii) News server: Usenet News is a world wide discussion system consisting of thousands of
newsgroups organized into hierarchies by subject. Users read and post articles to these
newsgroups using client software. The "news" is held for distribution and access on the news
server. Newsgroups tend to generate large amounts of Internet traffic.
There are two ways to accept Usenet News: as a "push" or "pull" feed. With a "push" feed,
news articles are "pushed" onto the news server, whether or not the users read those articles.
With a "pull" feed, the news server has all of the headers for the collection of Usenet News
articles, but does not retrieve the article itself unless it is specifically requested by a user.
For example, the newsgroup "k12.ed.comp.literacy" contains a discussion of topics dealing
with Computer Literacy in K12 schools.
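The difference between the two feed styles can be sketched as follows: a "pull" server holds every article's header but fetches a body from the upstream host only when a user actually asks for it. The newsgroup data below is invented for the example:

```python
# Article bodies held on an upstream news host (invented data).
UPSTREAM = {
    "k12.ed.comp.literacy/001": "Teaching spreadsheets in K12...",
    "k12.ed.comp.literacy/002": "Keyboard skills curriculum...",
}

class PullNewsServer:
    """Holds headers for every article but fetches bodies only on request."""

    def __init__(self, upstream):
        self.upstream = upstream
        self.headers = list(upstream)   # all headers are always available
        self.fetched = {}               # bodies retrieved so far

    def read(self, article_id):
        if article_id not in self.fetched:
            # Only now does traffic to the upstream host occur.
            self.fetched[article_id] = self.upstream[article_id]
        return self.fetched[article_id]
```

A "push" feed would correspond to copying every body into `fetched` up front, whether or not any user ever reads it.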
(viii) Chat server: Some organizations choose to run a server that will allow multiple users to
have "real-time" discussions, called "chats" on the Internet. Some chat groups are moderated;
most however are unmoderated public discussions. Further, most chat servers allow the
creation of "private" chat rooms where participants can "meet" for private discussions. The
popularity of chat rooms has grown dramatically over the past few years on the Internet;
however, the ability to talk in small groups on the Internet is not new. "Chat" is a graphical
form of an Internet service called IRC, or Internet Relay Chat. IRC was a replacement for a
UNIX command called "talk." Using talk and even IRC can be cumbersome. Chat clients, on
the other hand, are available for all platforms and are graphical in nature, opening up their
utility to the majority of Internet users.
(ix) Caching server: A caching server is employed when the number of accesses to the Internet needs to be reduced. There are many reasons to consider doing this. Basically, a
caching server sits between the client computer and the server that would normally fulfill a
client's request. Once the client's request is sent, it is intercepted by the caching server. The
caching server maintains a library of files that have been requested in the recent past by users
on the network. If the caching server has the requested information in its cache, the server
returns the information without going out to the Internet.
Storing often-used information locally is a good way to reduce overall traffic to and from the
Internet. A caching server does not restrict information flow. Instead, it makes a copy of
requested information, so that frequently requested items can be served locally, instead of
from the original Internet source. Caching servers can also be connected in a hierarchy, so if the local cache does not have the information, it can pass the request to nearby caching servers that might also contain the desired files.
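The intercept-and-serve-locally behaviour described above can be sketched in a few lines. The fetch counter shows how repeat requests stop generating Internet traffic; the URLs and the stand-in "Internet" are invented for the example:

```python
def make_caching_server(fetch_from_internet):
    """Wrap an origin-fetching function with a local cache."""
    cache = {}

    def request(url):
        if url not in cache:          # cache miss: go out to the Internet
            cache[url] = fetch_from_internet(url)
        return cache[url]             # cache hit: served locally

    return request

# A stand-in for the real Internet, counting how often it is contacted.
origin_hits = {"count": 0}

def fake_internet(url):
    origin_hits["count"] += 1
    return f"<page for {url}>"

cached_get = make_caching_server(fake_internet)
```

Three requests for the same page reach the origin only once; every later request is answered from the local library of files, exactly as described above.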
(x) Proxy server: A proxy server is designed to restrict access to information on the
Internet. If, for example, a user wants to have access to pornographic materials, a proxy
server can be configured to refuse to pass the request along to the intended Internet server.
A proxy server operates on a list of rules given to it by a System Administrator. Some proxy
software uses a list of specific forbidden sites, while other proxy software examines the content
of a page before it is served to the requester. If certain keywords are found in the requested
page, access to it is denied by the proxy server.
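Both filtering styles mentioned above, a forbidden-site list and keyword inspection of the requested page, can be sketched as simple rule checks; the site names and keywords are invented examples of an administrator's rule list:

```python
# Rules an administrator might supply (invented examples).
FORBIDDEN_SITES = {"badsite.example"}
FORBIDDEN_KEYWORDS = {"gambling", "malware"}

def proxy_decision(site, page_content):
    """Return 'allow' or 'deny' for a request, per the administrator's rules."""
    if site in FORBIDDEN_SITES:
        return "deny"                 # blocked by the site list
    words = page_content.lower().split()
    if any(word in FORBIDDEN_KEYWORDS for word in words):
        return "deny"                 # blocked by content inspection
    return "allow"
```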
3.13 Network Tier Architectures
The Tier: A tier is a distinct part of hardware or software. Fig. 3.13.1 shows the most common
tier systems.
* www.googlecode.com
The two main advantages for client applications are clear: quicker
development through the reuse of pre-built business-logic components and a shorter test
phase, because the server-components have already been tested.
- Dynamic load balancing: If bottlenecks in terms of performance occur, the server process can be moved to other servers at runtime.
- Change management: It is easy, and faster, to exchange a component on the server than to furnish numerous PCs with new program versions. For example, in the case of VAT, it is quite easy to run the new version of a tax object in such a way that the clients automatically work with the version from the exact date that it has to be run. It is, however, compulsory that interfaces remain stable and that old client versions are still compatible. In addition, such components require a high standard of quality control, because low-quality components can, at worst, endanger the functions of a whole set of client applications or, at best, irritate the system's operator.
Disadvantages
- It creates an increased need for network traffic management, server load balancing, and fault tolerance.
- Current tools are relatively immature and more complex.
- Maintenance tools are currently inadequate for maintaining server libraries. This is a potential obstacle to simplifying maintenance and promoting code reuse throughout the organization.
3.13.4 Multi-tier Architecture: Multi-tier architecture (often referred to as n-tier architecture)
is a client-server architecture in which an application is executed by more than one distinct
software agent. For example, an application that uses middleware to service data requests
between a user and a database employs multi-tier architecture. The most widespread use of "multi-tier architecture" refers to three-tier architecture.
The multi-tier model is shown below in Fig 3.13.4.1.
The client program has only User Interface (UI) code that talks, via a network, to the "middle
tier" on which the business and database logic sits. In turn the middle tier talks, via a network,
to the database. In practice the middle tier can be placed, if necessary, on the same machine
as the database.
In either architecture the data "traffic" is highest between database logic and database server
(illustrated by a thicker arrow). This means that the network infrastructure that connects the
database logic with the database server needs to be very high bandwidth; i.e. expensive. With
traditional client/server architecture it is easy to create a scenario where no existing network
technology would be enough to cope.
The advantages of a multi-tier architecture are:
- Enforced separation of UI and business logic;
- A low-bandwidth network; and
- Business logic that sits on a small number (maybe just one) of centralized machines.
3.14 Data Centre
A data center is a centralized repository for the storage, management and dissemination of
data and information. Data centers can be defined as highly secure, fault-resistant facilities,
hosting customer equipment that connects to telecommunications networks. Often referred to
as an Internet hotel/ server farm, data farm, data warehouse, corporate data center, Internet
Service Provider (ISP) or wireless application service provider (WASP), the purpose of a data
center is to provide space and bandwidth connectivity for servers in a reliable, secure and
scalable environment. These data centers are also referred to as public data centers because
they are open to customers. Captive, or enterprise, data centers are usually reserved for the
sole and exclusive use of the parent company, but essentially serve the same purpose. These
facilities can accommodate thousands of servers, switches, routers and racks, storage arrays
and other associated telecom equipment.
A data center also provides certain facilities, like housing websites, providing data serving and
other services for companies. This kind of data center may contain a network operations
center (NOC), which is a restricted access area containing automated systems that constantly
monitor server activity, web traffic, network performance and report even slight irregularities to
engineers, so that they can spot potential problems before they happen. The primary goal of
a data center is to deploy the requisite state-of-the-art redundant infrastructure and systems
so as to maximize availability and prevent or mitigate any potential downtime for customers.
3.14.1 Types and Tiers: According to the varied computing needs of the businesses they
serve, data centers fall into the following two main categories:
(i) Private Data Center: A private data center (also called an enterprise data center) is
managed by the organization's own IT department, and it provides the applications, storage,
able to withstand natural disasters and calamities, like fire and power failures. Recovery
sites must be well maintained as it is here that everything in the data center is replicated
for failure recovery.
(iii) Availability of Data: The goal of a data center is to maximize the availability of data, and
to minimize potential downtime. To do this, redundancy has to be built into all the
mission critical infrastructure of the data center, such as connectivity, electrical supply,
security and surveillance, air conditioning and fire suppression.
(iv) Electrical and power systems: A data center should provide the highest power
availability with uninterruptible power supply (UPS) systems.
(v) Security: Physical security and systems security are critical to operations. Thus, it
should provide both types of security measures to ensure the security of equipment and
data placed at the data center. Physical security can be achieved through security
guards, proximity cards and PINs for door access, biometric access controls, and
24 x 365 CCTV surveillance and recording. System security or data security
should be addressed at multiple levels.
Perimeter security: This is to manage both internal and external threats. It
consists of firewalls, intrusion detection and content inspection; host security;
anti-virus; and access control and administrative tools.
Access management: This is for both applications and operating systems that host
these critical applications.
(vi) Backup Systems: A data center should automatically provide backup data
services to all shared hosting solutions. In addition, these services should also
be optionally available to all dedicated and co-location solutions. Full backups
may be taken for full system restoration, and off-server and off-site backup
services should also be available.
(vii) Continuous monitoring 24x7: 24x7x365 monitoring must be in place in data
centers for the security of the data. Such monitoring covers all the hardware,
including routers, switches, UPS systems and servers. In addition, the network
operations and all the critical services related to the Internet and associated
technologies, namely FTP, HTTP and SMTP, should also be monitored.
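A minimal monitoring loop of the kind described above can be sketched as follows. The probe functions here are stand-ins (a real data center would test live FTP/HTTP/SMTP endpoints); only the collect-and-alert shape is being illustrated.

```python
# Minimal sketch of a 24x7 service monitor. The probes below are
# hypothetical placeholders, not real reachability checks.

def check_services(checks):
    """Run each named probe and collect the services that report failure."""
    alerts = []
    for name, probe in checks.items():
        try:
            ok = probe()
        except Exception:
            ok = False              # a crashing probe counts as a failure
        if not ok:
            alerts.append(name)     # flag for the operations team
    return alerts

# Illustrative probes standing in for FTP/HTTP/SMTP reachability tests.
checks = {
    "HTTP": lambda: True,
    "SMTP": lambda: False,          # simulated outage
    "FTP":  lambda: True,
}
print(check_services(checks))       # ['SMTP']
```

In practice such a loop would run continuously, probing hardware and services on a schedule and escalating any alert to engineers.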
(viii) Environment Cooling: Cooling infrastructure is a significant part of a data center.
The complex cooling infrastructure creates the optimal computing environment,
ensuring the longevity of the servers installed within and the vitality of the
organization they support. The latest cooling techniques should be used to address
the challenges associated with latest-generation servers and, of course, to reduce
the energy bill. Quad-compressor / dual-blower redundant systems provide a
virtually dust-free and particle-free computing environment, with temperature,
humidity, and air-quality regulation.
3.14.5 Constituents of a Data Center: To keep equipment running reliably, even under the
worst circumstances, the data center is built with the following carefully engineered support
infrastructures:
Network connectivity with various levels of physical (optical fiber and copper) and service
(both last mile and international bandwidth) provider redundancy,
Dual DG sets and dual UPS,
HVAC systems for temperature control,
Fire extinguishing systems,
Physical security systems: swipe card/ biometric entry systems, CCTV, guards and so on,
Raised flooring,
Network equipment,
Network management software,
Multiple optical fiber connectivity,
Segregating the public and private network for security of data by installing firewalls and
intrusion detection systems (IDS).
3.14.6 Data Center Protection Challenges and Best Practice Solutions: Large enterprise
IT managers understand the criticality of data protection challenges and are looking for
ways to mitigate the cost, risk and complexity of data protection throughout their
enterprises, including data centers, disaster recovery sites and branch
locations. Data growth is driving them to re-examine their current data protection
environment. Some of the top challenges faced by large enterprise IT managers and the
best practices for overcoming them are as follows:
(i) Control skyrocketing data growth: Data growth is the biggest data center hardware
infrastructure challenge for large enterprises. However, there are several types of data
deduplication technologies that help in reducing data storage needs by eliminating
redundant data. The redundant data is replaced with a pointer to the unique data copy,
which lowers storage space requirements and saves money on disk expenditures. The
more efficient use of disk space also allows for longer disk retention periods, which
provides better Recovery Time Objectives (RTO) for a longer time and reduces the need
for tape backups. Data deduplication also reduces the data that must be sent across a
WAN for remote backups, network replication, and disaster recovery.
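The pointer-to-unique-copy idea behind deduplication can be sketched in a few lines. This is a simplified, content-hash-based sketch (the chunking and function names are illustrative): each chunk is stored once, and repeats become pointers to the stored copy.

```python
# Hedged sketch of hash-based data deduplication: redundant chunks are
# replaced with a pointer (the hash) to a single stored copy.
import hashlib

def deduplicate(chunks):
    """Return a unique-chunk store plus a pointer list rebuilding the stream."""
    store, pointers = {}, []
    for chunk in chunks:
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in store:
            store[digest] = chunk   # first copy: keep the data
        pointers.append(digest)     # every copy: keep only a pointer
    return store, pointers

data = [b"header", b"payload", b"payload", b"header"]
store, pointers = deduplicate(data)
print(len(data), "chunks stored as", len(store))   # 4 chunks stored as 2
```

Because only unique chunks are kept, both disk usage and the data sent across a WAN for remote backup or replication shrink, which is exactly the saving the paragraph above describes.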
(ii) System performance and scalability: Many enterprises not only have had to add
numerous disk-based backup target systems to stay ahead of growing data volumes but
also to deal with an aging infrastructure or, in some cases, product obsolescence.
Each of these storage systems not only consumes more space, power and cooling in the
data center; it also adds significant administration time, increases complexity and the
risk of human error, and reduces deduplication efficiency.
To avoid data center sprawl in the data protection environment, IT managers should
look ahead 3-5 years and choose a data protection target system that will scale to
accommodate the performance and capacity they will need without adding new system
images. It also saves thousands of dollars in administration time by eliminating the
need to load balance and tune new systems needed for scaling. Further, by
consolidating data protection in one system, this strategy enables significantly more
efficient deduplication.
(iii) Network congestion and connectivity architecture: The new generation of servers
with multicore processors demands significantly high input/output (I/O), and if these
servers are virtualized, this requirement goes up further, which is one of the biggest
data center infrastructure challenges. Traditional LAN switches are not designed to
meet this sudden upsurge in network demand. Increased reliance on the WAN can be
another trigger for network-related challenges as users consolidate their IT
systems, especially as individual users increasingly work remotely or go
mobile. Vendors should help their customers to be strategic with their network
infrastructure rather than continue to take a silo approach.
(iv) IT administration and staff time at premium: Data protection IT administrators have
more data to protect and more complex data protection standards to meet while staying
within a limited budget. They need to invest in systems that automate disk-subsystem
management, reduce complexity, and provide effective dashboards and reporting.
Minimum requirements for large enterprise data protection platforms include:
Automatic load balancing and tuning. As enterprise IT managers add capacity or
performance, the system should bring it online and perform load balancing and
tuning automatically.
Automatic system monitoring and phone home functionality. Choose systems
that preemptively identify potential hardware issues and notify administrators
before a problem arises. The system should also automatically send system
efficiency data to the vendor for ongoing analysis.
Provide dashboards and reporting. Choose a system that enables IT managers to
monitor the status of all data protection operations (backup, deduplication,
replication and restore) throughout the enterprise from centralized dashboards. The
system should also provide detailed reporting that enables administrators to track
and project future capacity and performance needs.
(v) Inadequate Disaster Recovery plans: Large enterprises that have been using
physical tape backup systems and disparate disk-based solutions in branch offices are
particularly vulnerable to downtime and data loss in the event of a disaster. Enterprise
IT managers should consider the use of a consistent platform that enables IT staff to
manage remote-office backup, deduplication, replication and restore operations from a
data center headquarters. It also minimizes the burden on IT administrators in branch
systems or network. They usually work their way into a network from the Internet or
dialup access servers.
Internal threats - Typically, these threats originate from individuals who have
authorized access to the network. These users either have an account on a server
or physical access to the network. An internal threat may come from a discontented
former or current employee or contractor. It has been seen that the majority of
security incidents originate from internal threats.
Vulnerabilities: Vulnerability is an inherent weakness in the design, configuration, or
implementation of a network or system that renders it susceptible to a threat.
The following factors are responsible for the occurrence of vulnerabilities in software:
Software Bugs - Software bugs are so common that users have developed
techniques to work around their consequences; bugs that make saving work
necessary every half an hour, or that crash the computer every so often, are
considered a normal part of computing. For example, buffer overflows, failure to
handle exceptional conditions, access validation errors and input validation errors
are some of the common software flaws.
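An input validation error of the kind listed above can be illustrated with a tiny, hypothetical example (the function names and the banking scenario are invented for illustration): a function that trusts its input does the wrong thing, while its validated counterpart rejects bad values.

```python
# Illustrative input-validation flaw; names are hypothetical, not from
# any real code base.

def withdraw_unsafe(balance, amount):
    """Buggy: never checks the input, so a negative amount ADDS money."""
    return balance - amount

def withdraw_safe(balance, amount):
    """Validated: rejects amounts outside the sensible range."""
    if amount <= 0 or amount > balance:
        raise ValueError("invalid withdrawal amount")
    return balance - amount

print(withdraw_unsafe(100, -50))    # 150 -- the validation bug pays out
print(withdraw_safe(100, 40))       # 60
```

The unsafe version is the shape of flaw an attacker probes for: any code path where externally supplied data is used without being checked first.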
Timing Windows - This problem may occur when a temporary file is exploited by an
intruder to gain access to the file, overwrite important data, and use the file as a
gateway for advancing further into the system.
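The timing-window (check-then-use) problem can be sketched as follows. This is a simplified illustration with invented function names: the gap between checking a file and using it is where an intruder can swap the file; the safer shape validates the handle actually opened.

```python
# Sketch of the check-then-use ("timing window") pattern; the gap between
# the check and the use is where an intruder can replace the file.
import os
import tempfile

def read_if_small_unsafe(path, limit=1024):
    size = os.path.getsize(path)        # CHECK: file looks safe...
    # ...timing window: an attacker could replace `path` right here...
    with open(path, "rb") as f:         # USE: may now read a different file
        return f.read(limit)

def read_if_small_safe(path, limit=1024):
    """Safer shape: open once, then validate the handle you actually hold."""
    with open(path, "rb") as f:
        data = f.read(limit + 1)
        if len(data) > limit:
            raise ValueError("file too large")
        return data

with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"hello")
print(read_if_small_safe(tmp.name))     # b'hello'
os.unlink(tmp.name)
```

The point is structural: decisions should be made on the resource actually held (the open file handle), not on a separate earlier check of the same name.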
Insecure default configurations - Insecure default configurations occur when
vendors use known default passwords to make it as easy as possible for
consumers to set up new systems. Unfortunately, most intruders know these
passwords and can access systems effortlessly.
Bad Protocols - Some protocols, or the standards by which information is
exchanged over the Internet, lack any security at all. For example, unsolicited
commercial email, commonly referred to as "spam," is the irritating result of poor
protocol programming.
Trusting Untrustworthy information- This is usually a problem that affects routers,
or those computers that connect one network to another. When routers are not
programmed to verify that they are receiving information from a unique host,
bogus routers can gain access to systems and do damage.
End users - Generally, users of computer systems are not professionals and are
not always security conscious. For example, when the number of passwords a
user has increases, the user may start writing them down, in the worst case in
places where they are easy to find. In addition to this kind of negligence toward
security procedures, users make human errors, for example saving confidential
files to places where they are not properly protected.
(vii) Controls adjustment: This involves the adjustment of controls, i.e. determining whether
over some time period any control can be designed, implemented and operated such that the
cost of the control is lower than the reduction in the expected losses. The reduction in the
expected losses is the difference between the expected losses with (i) the existing set of
controls and (ii) the improved set of controls.
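The cost test above can be worked through with hypothetical figures (all amounts here are invented for illustration): a control is worthwhile only if its cost is lower than the reduction in expected losses it brings.

```python
# Worked sketch of the controls-adjustment test, with invented figures.

def control_is_worthwhile(cost, loss_existing, loss_improved):
    """True when the cost of the control is lower than the loss reduction."""
    reduction = loss_existing - loss_improved
    return cost < reduction

# Hypothetical expected annual losses: 500,000 with existing controls,
# 350,000 with the improved set; the improved control costs 100,000.
print(control_is_worthwhile(100_000, 500_000, 350_000))   # True
print(control_is_worthwhile(200_000, 500_000, 350_000))   # False
```

Here the loss reduction is 150,000, so a control costing 100,000 is justified while one costing 200,000 is not.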
(viii) Report generation outlining the levels of security to be provided for individual
systems, end users, etc.: This is the last step, which involves generating a report documenting
the findings of the review, specifically recommending new asset-safeguarding techniques
that should be implemented and existing asset-safeguarding mechanisms that should be
eliminated or rectified, and also recommending the assignment of the levels of security to be
provided for individual end users and systems.
3.15.4 IDS Components
Intrusion Detection Systems are generally made up of the following major components:
Sensors: These are deployed in a network or on a device to collect data. They take input
from various sources, including network packets, log files, and system call traces. Input is
collected, organized, and then forwarded to one or more analyzers.
Analyzers: Analyzers in IDS collect data forwarded by sensors and then determine if an
intrusion has actually occurred. Output from analyzers should include evidence supporting the
intrusion report. The analyzers may also provide recommendations and guidance on
mitigation steps.
User interface: The user interface of the IDS provides the end users a view and way to
interact with the system. Through the interface, the user can control and configure the
system. Many user interfaces can generate reports as well.
Honeypot: In a fully deployed IDS, some administrators may choose to install a honeypot,
essentially a system component set up as bait or decoy for intruders. Honeypots can be used
as early warning systems on an attack, decoys from critical systems, and data collection
sources for attack analysis. A honeypot should only be deployed when the organization has
the resources to maintain it. Many IDS vendors maintain honeypots for research purposes
and to develop new intrusion signatures. A honeypot left unmanaged may become a
significant liability because attackers may use a compromised honeypot to attack other
systems.
3.15.5 Network Security Techniques
1. Firewall: A firewall is a device that forms a barrier between a secure and an open
environment, where the latter environment is usually considered hostile, for example the
Internet. It acts as a system, or combination of systems, that enforces a boundary between
two or more networks. Access controls are a common form of controls encountered in the
boundary subsystem; they restrict the use of system resources to authorized users, limit the
actions authorized users can take with these resources, and ensure that users obtain
only authentic system resources.
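The boundary enforcement a firewall performs can be sketched as a first-match rule table. The rules, addresses and field names below are illustrative only; real firewalls match on many more attributes (protocol, state, interface), but the principle is the same.

```python
# Minimal sketch of a rule-based packet filter at the network boundary.
# Rules and addresses are invented examples.

RULES = [
    # (source prefix, destination port, action)
    ("10.0.",  None, "allow"),    # internal network may reach anything
    (None,       80, "allow"),    # anyone may reach the web server
    (None,     None, "deny"),     # default: deny everything else
]

def filter_packet(src_ip, dst_port):
    """Return the action of the first rule that matches the packet."""
    for prefix, port, action in RULES:
        if prefix is not None and not src_ip.startswith(prefix):
            continue
        if port is not None and dst_port != port:
            continue
        return action
    return "deny"

print(filter_packet("10.0.3.7", 22))      # allow (internal host)
print(filter_packet("203.0.113.9", 80))   # allow (web traffic)
print(filter_packet("203.0.113.9", 22))   # deny  (blocked by default)
```

The final catch-all rule implements the usual "default deny" posture: traffic not explicitly permitted is refused at the boundary.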
Current systems are designed to allow users to share their resources. This is done by having
a single system simulate the operations of several systems, where each simulated system
works as a virtual machine, allowing more efficient use of resources by lowering the idle
capacity of the real system. Here, a major design problem is to ensure that each virtual
system operates as if it were totally unaware of the operations of the other virtual systems.
Besides, increased scope exists for unintentional or deliberate damage to system resources
by users' actions. The chain runs from resource sharing through virtual systems, to the need
for isolation, to the need for access controls. Access controls associate authentic users with
the resources they are permitted to access and the action privileges they have with reference
to those resources. They act as a part of the operating system.
2. IDS Technologies: An Intrusion Detection System (IDS) is a device or software
application that monitors network or system activities for malicious activities or policy
violations and produces reports to a Management Station. The goal of intrusion detection is to
monitor network assets to detect anomalous behavior and misuse. Primary IDS technologies
are defined as follows:
Network Intrusion Detection (NID): Network Intrusion Detection System is placed on
a network to analyze traffic in search of unwanted or malicious events on the wire
between hosts. Typically referred to as "packet-sniffers", network intrusion detection
devices intercept packets traveling along various communication mediums and protocols,
usually TCP/IP. Once captured, the packets are analyzed in a number of different ways.
Some NID devices will simply compare the packet to a signature database consisting of
known attacks and malicious packet "fingerprints", while others will look for anomalous
packet activity that might indicate malicious behavior. In either case, NID should be
regarded primarily as a perimeter defense.
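The signature-database comparison that some NID devices perform can be sketched as follows. The "signatures" here are invented stand-ins, not real attack fingerprints; the point is only the match-captured-payload-against-known-patterns shape.

```python
# Sketch of signature-based network intrusion detection: captured packet
# payloads are compared against known-malicious fingerprints.
# The signature patterns below are invented examples.

SIGNATURES = {
    "sql-injection":  b"' OR 1=1",
    "path-traversal": b"../../etc/passwd",
}

def inspect_packet(payload):
    """Return the names of any signatures found in the payload."""
    return [name for name, pattern in SIGNATURES.items() if pattern in payload]

packets = [
    b"GET /index.html HTTP/1.1",
    b"GET /login?user=' OR 1=1-- HTTP/1.1",
]
for p in packets:
    print(inspect_packet(p))
# [] then ['sql-injection']
```

Anomaly-based NID devices take the complementary approach: instead of matching known fingerprints, they model normal traffic and flag statistical deviations from it.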
Host-based Intrusion Detection (HID): Host-based Intrusion Detection systems are
designed to monitor, detect, and respond to user and system activity and attacks on a
given host. Some more robust tools also offer audit policy management and
centralization, supply data forensics, statistical analysis and evidentiary support, and in
certain instances provide some measure of access control. The difference between host-
based and network-based intrusion detection is that NID deals with data transmitted from
host to host while HID is concerned with what occurs on the hosts themselves.
Host-based intrusion detection is best suited to combat internal threats because of its
ability to monitor and respond to specific user actions and file accesses on the host. The
majority of computer threats come from within organizations, from many different
sources; disgruntled employees and corporate spies are just two examples.
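One common host-based technique is file integrity checking: current file hashes are compared against a trusted baseline taken earlier, flagging any change on the host itself. The sketch below uses an in-memory dict of file contents for illustration; a real tool would read files from disk.

```python
# Host-based sketch: file-integrity checking against a trusted baseline.
# The file names and contents below are illustrative.
import hashlib

def snapshot(files):
    """Record a baseline hash of each monitored file's contents."""
    return {name: hashlib.sha256(data).hexdigest()
            for name, data in files.items()}

def detect_changes(baseline, files):
    """Report files whose current hash no longer matches the baseline."""
    current = snapshot(files)
    return sorted(name for name in baseline
                  if current.get(name) != baseline[name])

files = {"/etc/passwd": b"root:x:0:0", "/bin/login": b"\x7fELF..."}
baseline = snapshot(files)
files["/bin/login"] = b"\x7fELF-tampered"     # simulated tampering
print(detect_changes(baseline, files))        # ['/bin/login']
```

Because the check runs on the host and looks at what actually changed there, it catches exactly the insider activity that network-based detection, watching only the wire, can miss.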
Hybrid Intrusion Detection: Hybrid Intrusion Detection systems offer management of
and alert notification from both network and host-based intrusion detection devices.
Hybrid solutions provide the logical complement to NID and HID - central intrusion
detection management.