
A relational database is a collection of data items organized as a set of formally described tables from which data can be accessed or reassembled in many different ways without having to reorganize the database tables. The relational database was invented by E. F. Codd at IBM in 1970.

A relational database is a set of tables containing data fitted into predefined categories. Each
table (which is sometimes called a relation) contains one or more data categories in columns.
Each row contains a unique instance of data for the categories defined by the columns. For
example, a typical business order entry database would include a table that described a customer
with columns for name, address, phone number, and so forth. Another table would describe an
order: product, customer, date, sales price, and so forth. A user of the database could obtain a
view of the database that fitted the user's needs. For example, a branch office manager might like
a view or report on all customers that had bought products after a certain date. A financial
services manager in the same company could, from the same tables, obtain a report on accounts
that needed to be paid.
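To make the example concrete, here is a minimal sketch of such a customer and order table pair, together with the branch manager's "view" query, written in Python with the standard sqlite3 module. The table and column names are invented for illustration; they are not from any particular system.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("""CREATE TABLE customer (
        id      INTEGER PRIMARY KEY,
        name    TEXT,
        address TEXT,
        phone   TEXT)""")
    conn.execute("""CREATE TABLE orders (
        id          INTEGER PRIMARY KEY,
        product     TEXT,
        customer_id INTEGER REFERENCES customer(id),
        order_date  TEXT,
        sales_price REAL)""")
    conn.execute("INSERT INTO customer VALUES (1, 'Acme Ltd', '12 High St', '555-0100')")
    conn.execute("INSERT INTO orders VALUES (1, 'Widget', 1, '2024-03-01', 19.99)")

    # The branch manager's view: customers that bought products after a certain date.
    rows = conn.execute("""SELECT DISTINCT c.name
                           FROM customer c JOIN orders o ON o.customer_id = c.id
                           WHERE o.order_date > '2024-01-01'""").fetchall()
    print(rows)  # [('Acme Ltd',)]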

When creating a relational database, you can define the domain of possible values in a data column and further constraints that may apply to those values. For example, a domain of possible customers could allow up to ten customer names, but one table could be constrained to allow only three of these customer names to be specified.

The definition of a relational database results in a table of metadata or formal descriptions of the
tables, columns, domains, and constraints.

Entity. A person, organization, object type, or concept about which information is stored. ... An entity type typically corresponds to one or several related tables in a database.

Attribute. A characteristic or trait of an entity type that describes the entity; for example, the Person entity type has the Date of Birth attribute.

In general, an attribute is a characteristic. In a database management system (DBMS), an attribute refers to a database component, such as a table. It also may refer to a database field. Attributes describe the instances in the rows of a database.

An attribute is a characteristic of an entity object or view object, implemented as a JavaBean property of the object class. An attribute can correspond to a database column, or be independent of a column.

A data structure is a specialized format for organizing and storing data. General data structure
types include the array, the file, the record, the table, the tree, and so on. Any data structure is
designed to organize data to suit a specific purpose so that it can be accessed and worked with
in appropriate ways. In computer programming, a data structure may be selected or designed
to store data for the purpose of working on it with various algorithms.

'DROP' is an SQL keyword used to delete objects in an SQL database. It is most commonly used to delete a table, a database, or a certain column in a table.

Delete a database: DROP DATABASE database_name
Delete a table: DROP TABLE table_name
Delete a column: ALTER TABLE table_name DROP COLUMN column_name

Chapter 6

Database Management

6.1 Hierarchy of Data [Figure 6.1][Slide 6-4]

Data are the principal resources of an organization. Data stored in computer systems form a
hierarchy extending from a single bit to a database, the major record-keeping entity of a firm.
Each higher rung of this hierarchy is organized from the components below it.

Data are logically organized into:

1. Bits (characters)

2. Fields

3. Records

4. Files

5. Databases

Bit (Character) - a bit is the smallest unit of data representation (value of a bit may be a 0 or 1).
Eight bits make a byte which can represent a character or a special symbol in a character code.

Field - a field consists of a grouping of characters. A data field represents an attribute (a characteristic or quality) of some entity (object, person, place, or event).

Record - a record represents a collection of attributes that describe a real-world entity. A record
consists of fields, with each field describing an attribute of the entity.

File - a group of related records. Files are frequently classified by the application for which they
are primarily used (employee file). A primary key in a file is the field (or fields) whose value
identifies a record among others in a data file.

Database - is an integrated collection of logically related records or files. A database consolidates records previously stored in separate files into a common pool of data records that provides data for many applications. The data is managed by systems software called database management systems (DBMS). The data stored in a database is independent of the application programs using it and of the types of secondary storage devices on which it is stored.
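As a rough illustration, the levels of this hierarchy map naturally onto ordinary Python values (all names below are made up for the example):

    # Field: a grouping of characters representing one attribute.
    field_name = "Smith"

    # Record: a collection of fields describing one entity.
    record = {"emp_id": 101, "name": "Smith", "dept": "Sales"}

    # File: a group of related records (an employee file).
    employee_file = [record,
                     {"emp_id": 102, "name": "Jones", "dept": "HR"}]

    # Database: an integrated collection of logically related files.
    database = {"employees": employee_file}

    # "emp_id" acts as the primary key: its value identifies a record among others.
    by_key = {r["emp_id"]: r for r in database["employees"]}
    print(by_key[102]["name"])  # Jones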

6.2 File Environment and its Limitations

There are three principal methods of organizing files, of which only two provide the direct access
necessary in on-line systems.

File Organization [Figure 6.2 & 6.3]

Data files are organized so as to facilitate access to records and to ensure their efficient storage.
A tradeoff between these two requirements generally exists: if rapid access is required, more
storage is required to make it possible.

Access to a record for reading it is the essential operation on data. There are two types of access:

1. Sequential access - is performed when records are accessed in the order they are stored.
Sequential access is the main access mode only in batch systems, where files are used and
updated at regular intervals.

2. Direct access - on-line processing requires direct access, whereby a record can be accessed
without accessing the records between it and the beginning of the file. The primary key serves to
identify the needed record.
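A small Python sketch contrasts the two access modes (the records are made up, and in a real system they would live on disk rather than in memory):

    records = [{"key": k, "value": f"rec-{k}"} for k in (5, 9, 12, 30)]

    # Sequential access: scan the records in stored order until the key matches.
    def sequential_find(key):
        for rec in records:          # touches every record before the target
            if rec["key"] == key:
                return rec

    # Direct access: an index maps the primary key straight to the record.
    index = {rec["key"]: rec for rec in records}

    def direct_find(key):
        return index[key]            # one lookup, no scanning

    assert sequential_find(12) == direct_find(12)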

There are three methods of file organization: [Table 6.1]

1. Sequential organization

2. Indexed-sequential organization

3. Direct organization

Sequential Organization

In sequential organization records are physically stored in a specified order according to a key
field in each record.

Advantages of sequential access:

1. It is fast and efficient when dealing with large volumes of data that need to be processed
periodically (batch system).

Disadvantages of sequential access:


1. Requires that all new transactions be sorted into the proper sequence for sequential access
processing.

2. Locating, storing, modifying, deleting, or adding records in the file requires rearranging the
file.

3. This method is too slow to handle applications requiring immediate updating or responses.

Indexed-Sequential Organization

In the indexed-sequential files method, records are physically stored in sequential order on a
magnetic disk or other direct access storage device based on the key field of each record. Each
file contains an index that references one or more key fields of each data record to its storage
location address.

Direct Organization

Direct file organization provides the fastest direct access to records. When using direct access
methods, records do not have to be arranged in any particular sequence on storage media.
Characteristics of the direct access method include:

1. Computers must keep track of the storage location of each record using a variety of direct
organization methods so that data can be retrieved when needed.

2. New transactions' data do not have to be sorted.

3. Processing that requires immediate responses or updating is easily performed.

6.3 Database Environment [Figure 6.6][Slide 6-5]

A database is an organized collection of interrelated data that serves a number of applications in an enterprise. The database stores not only the values of the attributes of various entities but also the relationships between these entities. A database is managed by a database management system (DBMS), systems software that provides assistance in managing databases shared by many users.

A DBMS:

1. Helps organize data for effective access by a variety of users with different access needs and
for efficient storage.

2. It makes it possible to create, access, maintain, and control databases.

3. Through a DBMS, data can be integrated and presented on demand.

Advantages of a database management approach:


1. Avoiding uncontrolled data redundancy and preventing inconsistency

2. Program-data independence

3. Flexible access to shared data

4. Advantages of centralized control of data

6.4 Levels of Data Definition in Databases [Figure 6.7]

The user view of a DBMS becomes the basis for the data modeling steps where the relationships
between data elements are identified. These data models define the logical relationships among
the data elements needed to support a basic business process. A DBMS serves as a logical
framework (schema, subschema, and physical) on which to base the physical design of databases
and the development of application programs to support the business processes of the
organization. A DBMS enables us to define a database on three levels:

1. Schema - is an overall logical view of the relationships between data in a database.

2. Subschema - is a logical view of data relationships needed to support specific end user
application programs that will access the database.

3. Physical - looks at how data is physically arranged, stored, and accessed on the magnetic disks
and other secondary storage devices of a computer system.

A DBMS provides the language, called data definition language (DDL), for defining the
database objects on the three levels. It also provides a language for manipulating the data, called
the data manipulation language (DML), which makes it possible to access records, change
values of attributes, and delete or insert records.
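A brief sketch of the DDL/DML split, using Python's standard sqlite3 module and SQLite's SQL dialect (the table and column names are invented):

    import sqlite3

    db = sqlite3.connect(":memory:")

    # DDL: defining a database object.
    db.execute("CREATE TABLE employee (id INTEGER PRIMARY KEY, name TEXT, dept TEXT)")

    # DML: inserting a record, changing the value of an attribute, deleting a record.
    db.execute("INSERT INTO employee VALUES (1, 'Smith', 'Sales')")
    db.execute("UPDATE employee SET dept = 'Marketing' WHERE id = 1")
    db.execute("DELETE FROM employee WHERE id = 1")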

6.5 Data Models or How to Represent Relationships between Data

A data model is a method for organizing databases on the logical level, the level of the schema
and subschemas. The main concern in such a model is how to represent relationships among
database records. The relationships among the many individual records in databases are based on
one of several logical data structures or models. DBMS are designed to provide end users with
quick, easy access to information stored in databases. Three principal models include:

1. Hierarchical Structure

2. Network Structure

3. Relational Structure

Hierarchical:
Early mainframe DBMS packages used the hierarchical structure, in which:

1. Relationships between records form a hierarchy or tree-like structure.

2. Records are dependent and arranged in multilevel structures, consisting of one root record &
any number of subordinate levels.

3. Relationships among the records are one-to-many, since each data element is related only to
one element above it.

4. Data element or record at the highest level of the hierarchy is called the root element. Any data
element can be accessed by moving progressively downward from the root and along the
branches of the tree until the desired record is located.

Network Structure:

The network structure:

1. Can represent more complex logical relationships, and is still used by many mainframe DBMS
packages.

2. Allows many-to-many relationships among records. That is, the network model can access a data element by following one of several paths, because any data element or record can be related to any number of other data elements.

Relational Structure:

The relational structure:

1. Most popular of the three database structures.

2. Used by most microcomputer DBMS packages, as well as many minicomputer and mainframe
systems.

3. Data elements within the database are stored in the form of simple tables. Tables are related if
they contain common fields.

4. DBMS packages based on the relational model can link data elements from various tables to
provide information to users.

Evaluation of Database Structures

Hierarchical Data Structure

Advantages:
- Ease with which data can be stored and retrieved in structured, routine types of transactions.
- Ease with which data can be extracted for reporting purposes.
- Routine types of transaction processing are fast and efficient.

Disadvantages:
- Hierarchical one-to-many relationships must be specified in advance, and are not flexible.
- Cannot easily handle ad hoc requests for information.
- Modifying a hierarchical database structure is complex.
- Great deal of redundancy.
- Requires knowledge of a programming language.

Network Structure

Advantages:
- More flexible than the hierarchical model.
- Ability to provide sophisticated logical relationships among the records.

Disadvantages:
- Network many-to-many relationships must be specified in advance.
- User is limited to retrieving data that can be accessed using the established links between records; cannot easily handle ad hoc requests for information.
- Requires knowledge of a programming language.

Relational Structure

Advantages:
- Flexible in that it can handle ad hoc information requests.
- Easy for programmers to work with; end users can use this model with little effort or training.
- Easier to maintain than the hierarchical and network models.

Disadvantages:
- Cannot process large amounts of business transactions as quickly and efficiently as the hierarchical and network models.

6.6 Relational Databases [Figure 6.11, 6.13]

A relational database is a collection of tables. Such a database is relatively easy for end users to
understand. Relational databases afford flexibility across the data and are easy to understand and
modify.

Three basic operations can be performed on a relational database to derive the data a user needs:

1. Select, which selects from a specified table the rows that satisfy a given condition.

2. Project, which selects from a given table the specified attribute values.

3. Join, which builds a new table from two specified tables.

The power of the relational model derives from the join operation. It is precisely because records
are related to one another through a join operation, rather than through links, that we do not need
a predefined access path. The join operation is also highly time-consuming, requiring access to
many records stored on disk in order to find the needed records.
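In SQL terms, the three operations look like the following minimal sketch, again using Python's sqlite3 with invented customer/orders tables:

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT, city TEXT)")
    db.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, product TEXT)")
    db.execute("INSERT INTO customer VALUES (1, 'Acme', 'Pune'), (2, 'Bharat', 'Delhi')")
    db.execute("INSERT INTO orders VALUES (10, 1, 'Widget')")

    # Select: the rows of a table that satisfy a given condition.
    db.execute("SELECT * FROM customer WHERE city = 'Pune'")

    # Project: only the specified attribute (column) values.
    db.execute("SELECT name FROM customer")

    # Join: a new table built from two tables through a common field,
    # with no predefined access path between the records.
    rows = db.execute("""SELECT c.name, o.product
                         FROM customer c JOIN orders o ON o.customer_id = c.id""").fetchall()
    print(rows)  # [('Acme', 'Widget')]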

6.7 SQL - A Relational Query Language

Structured Query Language (SQL) has become an international standard access language for defining and manipulating data in databases. It is the data definition and management language of most well-known DBMSs, including some nonrelational ones. SQL may be used as an independent query language to define the objects in a database, enter the data into the database, and access the data. The so-called embedded SQL is also provided for programming in procedural languages ("host" languages), such as C, COBOL, or PL/I, in order to access a database from an application program. In the end-user environment, SQL is generally hidden by more user-friendly interfaces.

The principal facilities of SQL include:

1. Data definition

2. Data manipulation

6.8 Designing a Relational Database

Database design progresses from the design of the logical levels of the schema and the
subschema to the design of the physical level.

The aim of logical design, also known as data modeling, is to design the schema of the database
and all the necessary subschemas. A relational database will consist of tables (relations), each of
which describes only the attributes of a particular class of entities. Logical design begins with
identifying the entity classes to be represented in the database and establishing relationships
between pairs of these entities. A relationship is simply an interaction between the entities
represented by the data. This relationship will be important for accessing the data. Frequently,
entity-relationship (E-R) diagrams are used to perform data modeling.

Normalization is the simplification of the logical view of data in relational databases. Each table
is normalized, which means that all its fields will contain single data elements, all its records will
be distinct, and each table will describe only a single class of entities. The objective of
normalization is to prevent replication of data, with all its negative consequences.
After the logical design comes the physical design of the database. All fields are specified as to
their length and the nature of the data (numeric, characters, and so on). A principal objective of
physical design is to minimize the number of time-consuming disk accesses that will be
necessary in order to answer typical database queries. Frequently, indexes are provided to ensure
fast access for such queries.
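Creating an index is the typical physical-design step for such queries. A minimal sqlite3 sketch (the table and index names are invented):

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, order_date TEXT)")
    db.execute("CREATE INDEX idx_orders_date ON orders(order_date)")

    # The query planner can now satisfy date-range queries from the index
    # instead of scanning every row on disk.
    plan = db.execute("EXPLAIN QUERY PLAN SELECT * FROM orders "
                      "WHERE order_date > '2024-01-01'").fetchall()
    print(plan)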

6.9 The Data Dictionary

A data dictionary is a software module and database containing descriptions and definitions
concerning the structure, data elements, interrelationships, and other characteristics of an
organization's database.

Data dictionaries store the following information about the data maintained in databases:

1. Schema, subschemas, and physical schema

2. Which applications and users may retrieve the specific data and which applications and users
are able to modify the data

3. Cross-reference information, such as which programs use what data and which users receive
what reports

4. Where individual data elements originate, and who is responsible for maintaining the data

5. What the standard naming conventions are for database entities.

6. What the integrity rules are for the data.

7. Where the data are stored in geographically distributed databases.

A data dictionary:

1. Contains all the data definitions, and the information necessary to identify data ownership

2. Ensures security and privacy of the data, as well as the information used during the
development and maintenance of applications which rely on the database.

6.10 Managing the Data Resource of an Organization

The use of database technology enables organizations to control their data as a resource; however, it does not automatically produce organizational control of data.

Components of Information Resource Management [Figure 6.17]


Both organizational actions and technological means are necessary to:

1. Ensure that a firm systematically accumulates data in its databases

2. Maintain the data over time

3. Provide the appropriate access to the data to the appropriate employees.

The principal components of this information resource management are:

1. Organizational processes

- Information Planning and data modeling

2. Enabling technologies

- DBMS and a Data Dictionary

3. Organizational functions

- data administration and database administration

Data Administration and Database Administration [Figure 6.18]

The functional units responsible for managing the data are:

1. Data administrator (DA)

2. Database administrator (DBA)

Data administrator - the person who has the central responsibility for an organization's data.

Responsibilities include:

1. Establishing the policies and specific procedures for collecting, validating, sharing, and
inventorying data to be stored in databases and for making information accessible to the
members of the organization and, possibly, to persons outside of it.

2. Data administration is a policy-making function, and the DA should have access to senior corporate management.

3. Key person involved in the strategic planning of the data resource.

4. Often defines the principal data entities, their attributes, and the relationships among them.

Database Administrator - is a specialist responsible for maintaining standards for the development, maintenance, and security of an organization's databases.

Responsibilities include:

1. Creating the databases and carrying out the policies laid down by the data administrator.

2. In large organizations, the DBA function is actually performed by a group of professionals. In a small firm, a programmer/analyst may perform the DBA function, while one of the managers acts as the DA.

3. Schema and subschemas of the database are most often defined by the DBA, who has the
requisite technical knowledge. They also define the physical layout of the databases, with a view
toward optimizing system performance for the expected pattern of database usage.

Joint responsibilities of the DA and DBA:

1. Maintaining the data dictionary

2. Standardizing names and other aspects of data definition

3. Providing backup

4. Providing security for the data stored in a database, and ensuring privacy based on this security

5. Establishing a disaster recovery plan for the databases

6.11 Developmental Trends in Database Management

Three important trends in database management include:

1. Distributed databases

2. Data warehousing

3. Rich databases (includes object-oriented databases)

Distributed Databases [Figure 6.19][Slide 6-8]

Distributed databases are databases that are spread across several physical locations. In distributed
databases, the data are placed where they are used most often, but the entire database is available
to each authorized user. These are databases of local work groups (LAN), and departments at
regional offices (WAN), branch offices, manufacturing plants, and other work sites. These
databases can include segments of both common operational and common user databases, as well
as data generated and used only at a user's own site.

Data Warehouse Databases [Figure 6.20]

A data warehouse stores data from current and previous years that has been extracted from the
various operational and management databases of an organization. It is a central source of data
that has been standardized and integrated so it can be used by managers and other end user
professionals from throughout an organization. The objective of a corporate data warehouse is to
continually select data from the operational databases, transform the data into a uniform format,
and open the warehouse to the end users through a friendly and consistent interface.

Data warehouses are also used for data mining - automated discovery of potentially significant
relationships among various categories of data.

Systems supporting a data warehouse consist of three components:

1. Extract and Prepare Data

- the first subsystem extracts the data from the operational systems, many of them older legacy systems, and "scrubs" it by removing errors and inconsistencies.

2. Store Data in the Warehouse

- the second support component is actually the DBMS that will manage the warehouse data.

3. Provide Access and Analysis Capabilities

- the third subsystem is made up of the query tools that help users access the data and includes
the OLAP and other DSS tools supporting data analysis.

Object-oriented and other Rich Databases

With the vastly expanded capabilities of information technology, the content of the databases is
becoming richer. Traditional databases have been oriented toward largely numerical data or short
fragments of text, organized into well-structured records. As the processing and storage
capabilities of computer systems expand and as the telecommunications capacities grow, it is
possible to support knowledge work more fully with rich data. These include:

1. Geographic information systems

2. Object-oriented databases

3. Hypertext and hypermedia databases

4. Image databases and text databases

 
TCP
Transmission Control Protocol (TCP) is a connection-oriented transport protocol. Connection-oriented transport protocols provide reliable transport: if a segment is dropped, the sender can detect the drop and retransmit the dropped segment. Specifically, a receiver acknowledges the segments it receives; based on those acknowledgments, a sender can determine which segments were successfully received.

TCP operates at the transport layer of the OSI model.


The TCP three-way handshake:

1. The initiating host sends a message called a SYN to the target host.

2. The target host opens a connection for the request and sends back an acknowledgment message called an ACK (or SYN-ACK).

3. The host that originated the request sends back another acknowledgment, saying that it has
received the ACK message and that the session is ready to be used to transfer data.
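A minimal Python sketch: the connect() call on a TCP socket is what triggers this SYN / SYN-ACK / ACK exchange (the host and port are placeholders for any reachable server):

    import socket

    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.connect(("example.com", 80))   # three-way handshake happens here
        s.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
        print(s.recv(200))               # data flows only after the handshake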
UDP
User Datagram Protocol (UDP) is a connectionless transport protocol. Connectionless transport protocols provide unreliable transport: if a segment is dropped, the sender is unaware of the drop, and no retransmission occurs.

UDP operates at the transport layer of the OSI model.
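The UDP counterpart of the sketch above: no handshake is performed and delivery is not guaranteed (the address below is a placeholder from the documentation range):

    import socket

    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(b"hello", ("192.0.2.1", 5000))  # no connection is established
        # If this datagram is dropped, the sender is never told.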


FTP
File Transfer Protocol (FTP)

Works at the Application layer

FTP provides for the uploading and downloading of files from a remote host running FTP server
software. As well as uploading and downloading files, FTP enables you to view the contents of
folders on an FTP server and rename and delete files and directories if you have the necessary
permissions.

One of the big problems associated with FTP is that it is considered insecure. Even though
simple authentication methods are associated with FTP, it is still susceptible to relatively simple
hacking approaches. In addition, FTP transmits data between sender and receiver in an
unencrypted format.
Commonly Used FTP Commands
ls Lists the files in the current directory on the remote system

cd Changes the working directory on the remote host


lcd Changes the working directory on the local host

put Uploads a single file to the remote host

get Downloads a single file from the remote host

mput Uploads multiple files to the remote host

mget Downloads multiple files from the remote host

binary Switches transfers into binary mode

ascii Switches transfers into ASCII mode (the default)
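Several of these commands map directly onto Python's standard ftplib module. A minimal sketch, assuming a reachable server and valid credentials (the host, user name, password, and file names are all placeholders):

    from ftplib import FTP

    ftp = FTP("ftp.example.com")
    ftp.login("user", "password")      # simple (unencrypted) authentication
    print(ftp.nlst())                  # ls: list files in the current directory
    ftp.cwd("/pub")                    # cd: change the remote working directory
    with open("readme.txt", "wb") as f:
        ftp.retrbinary("RETR readme.txt", f.write)   # get, in binary mode
    ftp.quit()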


SFTP
Secure File Transfer Protocol

A protocol that transfers files between clients securely. Based on Secure Shell (SSH) technology, it provides robust authentication between sender and receiver. It also provides encryption capabilities, which means that even if packets are copied from the network, their contents remain hidden from prying eyes.
TFTP
Trivial File Transfer Protocol

A variation on FTP is TFTP, which is also a file transfer mechanism. However, TFTP does not have the security capability or the level of functionality that FTP has. TFTP is most often associated with simple downloads, such as transferring firmware to a device such as a router and booting diskless workstations.

Another feature that TFTP does not offer is directory navigation.

TFTP is an application layer protocol that uses UDP, which is a connectionless transport layer
protocol. For this reason, TFTP is called a connectionless file transfer method.
SMTP
Simple Mail Transfer Protocol

SMTP is a protocol that defines how mail messages are sent between hosts. SMTP uses TCP connections to guarantee error-free delivery of messages. SMTP is not overly sophisticated and requires that the destination host always be available.

SMTP can be used to both send and receive mail. Post Office Protocol version 3 (POP3) and
Internet Message Access Protocol version 4 (IMAP4) can be used only to receive mail.
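A short sketch of that division of labour with Python's standard smtplib and poplib modules (the server names, addresses, and credentials are placeholders):

    import poplib
    import smtplib
    from email.message import EmailMessage

    msg = EmailMessage()
    msg["From"], msg["To"], msg["Subject"] = "a@example.com", "b@example.com", "Hi"
    msg.set_content("Sent over SMTP.")

    with smtplib.SMTP("mail.example.com") as smtp:
        smtp.send_message(msg)          # SMTP: sending between hosts

    pop = poplib.POP3("mail.example.com")
    pop.user("b@example.com")
    pop.pass_("secret")                 # note: the password travels in clear text
    print(pop.stat())                   # (message count, mailbox size in bytes)
    pop.quit()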
HTTP
Hypertext Transfer Protocol

HTTP is the protocol that enables text, graphics, multimedia, and other material to be
downloaded from an HTTP server. HTTP defines what actions can be requested by clients and
how servers should answer those requests.

HTTP is a connection-oriented protocol that uses TCP as a transport protocol.


HTTPS
Hypertext Transfer Protocol Secure

One of the downsides of using HTTP is that HTTP requests are sent in clear text. For some applications, such as e-commerce, this method of exchanging information is unsuitable; a more secure method is needed. The solution is HTTPS, which uses a system known as Secure Socket Layer (SSL) to encrypt the information sent between the client and host.
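With Python's standard urllib, the only visible difference between the two is the URL scheme; with https:// the library negotiates the SSL/TLS handshake before any request data is exchanged (the URL is a placeholder):

    from urllib.request import urlopen

    resp = urlopen("https://example.com/")    # encrypted: handshake first, then the request
    print(resp.status, resp.headers["Content-Type"])
    body = resp.read(100)                     # response bytes, decrypted locally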
POP3
Post Office Protocol Version 3

A mechanism for downloading, or pulling, email from a server. Such mechanisms are necessary because although mail is transported around the network via SMTP, users cannot always read it immediately, so it must be stored in a central location. From this location, it needs to be downloaded or retrieved, which is what POP3 enables you to do.

One of the problems with POP3 is that the password used to access a mailbox is transmitted
across the network in clear text. This means that if people want to, they could determine your
POP3 password with relative ease.
IMAP4
Internet Message Access Protocol Version 4

A mechanism for downloading, or pulling, email from a server. Such mechanisms are necessary because although mail is transported around the network via SMTP, users cannot always read it immediately, so it must be stored in a central location. From this location, it needs to be downloaded or retrieved, which is what IMAP4 enables you to do.

IMAP4 offers an advantage over POP3. It uses a more sophisticated authentication system,
which makes it more difficult for people to determine a password.
Telnet
Telnet is a virtual terminal protocol. It enables sessions to be opened on a remote host, and then
commands can be executed on that remote host. For many years, Telnet was the method by
which clients accessed multiuser systems such as mainframes and minicomputers. It also was the
connection method of choice for UNIX systems. Today, Telnet is still
commonly used to access routers and other managed network devices.

One of the problems with Telnet is that it is not secure. As a result, remote session functionality
is now almost always achieved by using alternatives such as SSH.
SSH
Secure Shell (SSH) is a secure alternative to Telnet. SSH provides security by encrypting data as it travels between systems, which makes it difficult for hackers using packet sniffers and other traffic-detection systems to read that data. SSH also provides more robust authentication systems than Telnet.

Two versions of SSH are available: SSH1 and SSH2. Of the two, SSH2 is considered more
secure. The two versions are incompatible. If you use an SSH client program, the server
implementation of SSH that you connect to must be the same version. Although SSH, like
Telnet, is associated primarily with UNIX and Linux systems, implementations of SSH are
available for all commonly used computing platforms, including Windows and Macintosh. As
discussed earlier, SSH is the foundational technology for Secure File Transfer Protocol (SFTP).
ICMP
Internet Control Message Protocol

ICMP is a protocol that works with the IP layer to provide error checking and reporting functionality. In effect, ICMP is a tool that IP uses in its quest to provide best-effort delivery.

ICMP can be used for a number of functions. Its most common function is probably the widely
used and incredibly useful ping utility, which can send a stream of ICMP echo requests to a
remote host.

ICMP also can return error messages such as Destination unreachable and Time exceeded. (The
former message is reported when a destination cannot be contacted and the latter when the Time
To Live [TTL] of a datagram has been exceeded.)

ICMP performs source quench. In a source quench scenario, the receiving host cannot handle the
influx of data at the same rate as the data is sent. To slow down the sending host, the receiving
host sends ICMP source quench messages, telling the sender to slow down. This action prevents
packets from dropping and having to be re-sent.
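Crafting ICMP echo packets directly requires raw sockets (and administrator rights), so a common shortcut in Python is to drive the system's ping utility instead (the -c flag is the Unix form; Windows uses -n, and the target address is a placeholder):

    import subprocess

    result = subprocess.run(["ping", "-c", "4", "192.0.2.1"],
                            capture_output=True, text=True)
    print(result.stdout)   # echo replies, or errors such as "Destination unreachable"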
ARP
Address Resolution Protocol (ARP)

ARP is responsible for resolving IP addresses to Media Access Control (MAC) addresses. When
a system attempts to contact another host, IP first determines whether the other host is on the
same network it is on by looking at the IP address. If IP determines that the destination is on the
local network, it consults the ARP cache to see whether it has a
corresponding entry. The ARP cache is a table on the local system that stores mappings between
data link layer addresses (the MAC address or physical address) and network layer addresses (IP
addresses).
RARP
Reverse Address Resolution Protocol (RARP)

Performs the same function as ARP, but in reverse. In other words, it resolves MAC addresses to
IP addresses.
RARP makes it possible for applications or systems to learn their own IP address from a router
or Domain Name Service (DNS) server. Such a resolution is useful for tasks such as performing
reverse lookups in DNS.
NTP
Network Time Protocol

NTP is the part of the TCP/IP protocol suite that facilitates the communication of time between
systems. The idea is that
one system configured as a time provider transmits time information to other systems that can be
both time receivers and time providers for other systems.
NNTP
Network News Transfer Protocol

NNTP is a protocol associated with posting and retrieving messages to and from newsgroups. A newsgroup is a discussion forum hosted on a remote system. By using NNTP client software, like that included with many common email clients, users can post, reply to, and retrieve messages.
Although web-based discussion forums are slowly replacing newsgroups, demand for newsgroup
access remains high.

The distinction between web-based discussion forums and NNTP newsgroups is that newsgroup messages are retrieved from the server to be read. In contrast, on a web-based discussion forum, the messages are not downloaded. They are simply viewed from a remote location.
SCP
Secure Copy Protocol

Secure Copy Protocol (SCP) is another protocol based on SSH technology. SCP provides a
secure means to copy files between systems on a network. By using SSH technology, it encrypts
data as it travels across the network, thereby
securing it from eavesdropping. It is intended as a more secure substitute for Remote Copy
Protocol (RCP). SCP is available as a command-line utility, or as part of application software for
most commonly used computing platforms.
LDAP
Lightweight Directory Access Protocol

Lightweight Directory Access Protocol (LDAP) is a protocol that provides a mechanism to access and query directory services systems. In the context of the Network+ exam, these directory services systems are most likely to be Novell Directory Services (NDS) and Microsoft's Active Directory. Although LDAP supports command-line queries executed directly against the directory database, most LDAP interactions are via utilities such as an authentication program (network logon) or locating a resource in the directory through a search utility.
IGMP
Internet Group Management Protocol

The protocol within the TCP/IP protocol suite that manages multicast groups. It enables, for
example, one computer on the Internet to target content to a specific group of computers
that will receive content from the sending system.

IGMP is used to register devices into a multicast group, as well as to discover what other devices
on the network are members of the same multicast group. Common applications for multicasting
include groups of routers on an internetwork
and videoconferencing clients.
TLS
Transport Layer Security

A security protocol designed to ensure privacy between communicating client/server applications. When a server and client communicate, TLS ensures that no one can eavesdrop and intercept or otherwise tamper with the data message. TLS is the successor to SSL.

TLS record protocol: Uses a reliable transport protocol such as TCP and ensures that the
connection made between systems is private using data encryption.

TLS handshake protocol: Used for authentication between the client and server.
SIP
Session Initiation Protocol

An application layer protocol designed to establish and maintain multimedia sessions, such as
Internet telephony calls. This means that SIP can create communication sessions for such
features as audio/videoconferencing, online gaming, and person-to-person conversations over the
Internet. SIP does not operate alone; it uses TCP or UDP as a transport protocol.
RTP
The Real-time Transport Protocol (RTP) is the Internet-standard protocol for the transport of real-time data, including audio and video.
RTP can use either TCP or UDP as a transport mechanism. However, UDP is used more
often because applications using RTP are less sensitive to packet loss but typically are sensitive
to delays. UDP, then, is a faster protocol because packet delivery is not guaranteed. RTP is often
used with VoIP. VoIP data packets live in RTP packets, which are inside UDP-IP packets.

The data part supports applications with real-time properties such as continuous media (such as
audio and video), including timing reconstruction, loss detection, security, and content
identification.

The control part (RTCP) supports real-time conferencing of groups of any size within an
internet.
DHCP
Dynamic Host Configuration Protocol (DHCP) enables ranges of IP addresses, known as scopes, to be defined on a system running a DHCP server application. When another system configured as a DHCP client is initialized, it asks the server for an address. If all things are as they should be, the server assigns an address from the scope to the client for a predetermined amount of time, known as the lease.

In addition to an IP address and the subnet mask, the DHCP server can supply many other pieces of information, although exactly what can be provided depends on the DHCP server implementation. In addition to the address information, the default gateway is often supplied, along with DNS information.
DHCP lease
The lease is the length of time for which the client can use the assigned IP address.

At various points during the lease (normally the 50 percent and 85 percent points), the client
attempts to renew the lease from the server. If the server cannot perform a renewal, the lease
expires at 100 percent, and the client stops using the address.
DHCP Scope
The range of IP addresses available to assign to clients.
DHCP Reservation
In addition to having DHCP supply a random address from the scope, you can configure it to
supply a specific address to a client. Such an arrangement is known as a reservation.
Reservations are a means by which you can still use DHCP for a system but at the same time
guarantee that it always has the same IP address. DHCP can also be configured for exclusions. In
this scenario, certain IP addresses are not given out to client systems.
DHCP Process
1. DHCPDISCOVER packet: The client sends a broadcast looking for a DHCP server.

2. DHCPOFFER packet: The server offers an address.

3. DHCPREQUEST packet: The client then asks to use the offered address.

4. DHCPACK packet: The server assigns the address and sends an acknowledgment to the requesting client.

These communications are sent as broadcasts.


DHCP advantages
First, administrators do not need to manually configure each system. Second, human error such
as the assignment of duplicate IP addresses is eliminated. Third, DHCP removes the need to
reconfigure systems if they move from one subnet to another, or if you decide to make a
wholesale change in the IP addressing structure.
DHCP disadvantages
DHCP traffic is broadcast-based and thus generates network traffic, albeit a small amount. Also, the DHCP server software must be installed and configured on a server, which can place additional processor load (again, minimal) on that system.
SNMP
Simple Network Management Protocol

Provides a method to monitor and control network devices; manage configurations, statistics collection, performance, and security; and report network management information to a management console.

Neither SNMPv1 nor SNMPv2 is secure.

SNMPv3 is an enhanced SNMP service offering both encryption and authentication services.
SNMP agent
A software component that enables a device to communicate
with, and be contacted by, an SNMP management system.
SNMP trap
An SNMP utility that sends an alarm to notify the administrator
that something within the network activity differs from the
established threshold, as defined by the administrator.
NMS (Network Management System)
An application that acts as a central management point for network management. Most NMS
systems use SNMP to communicate with network devices.
MIB
Management Information Base

A data set that defines the criteria that can be retrieved and set on a device using SNMP
SNMP Communities
SNMP communities are logical groupings of systems. When a system is configured as part of a
community, it communicates only with other devices that have the same community name. In
addition, it accepts Get, Get Next, or Set commands only from an SNMP manager with a
community name it recognizes.

IP Addresses And Subnet Masks


Addressing for intranets (and the Internet) explained
TCP/IP is the networking protocol of the Internet, and by extension of intranets. For
TCP/IP to work, your network interfaces need to be assigned IP addresses. Note that
we said network interfaces and not computers. The IP addresses are assigned to
interfaces and not to computers. So, one computer can have more than one IP address.
For example, if you have two network cards on your computer, then each of them can
have a different IP address, either static or dynamic (more about that in a minute).
Similarly, if you have a proxy server running, then the machine on which it is installed
should have a static IP address. Now, suppose the same machine has to establish a dial-up
link to the Internet through, say, VSNL. Then the dial-up adapter would be assigned a
different, dynamic address.
What's an IP address?
An IP address is a number that represents a device like a network card uniquely on the
Internet or on your company's intranet. This number is actually a binary one, but for
convenience it's normally written as four decimal numbers. For instance, a typical IP
address would be something like 192.168.1.1. The four constituent numbers together
represent the network that the computer is on and the computer (interface) itself. Let us
first look at the network address part.
The IP addresses for networks on the Internet are allocated by the InterNIC. If you have
an Internet connection (a registered domain and a permanent link to the Internet, and
not just a dial-up connection), then you would be allocated a network address by the
agency that registered you, like the InterNIC. Let us assume this to be 192.6.132.0, a
class C network. Then all the machines on this network would have the same network
address. And the last 0 will be replaced by a number from 1 to 254 for the node
address. So, nodes will have addresses 192.6.132.1, 192.6.132.2, and so on up to
192.6.132.254. It would be worth mentioning here that IP address calculations and
concepts make sense only when done in binary.

Types of networks and corresponding IP addresses
Depending on the size of the network, IP-based networks are divided into three classes.

Class A - Class A networks are mega monster networks with up to 2^24 nodes (16 million plus). Class A networks have their network addresses from 1.0.0.0 to 126.0.0.0, with the zeros being replaced by node addresses.

Class B - Class B networks are smaller networks in comparison; they can have only about 65,000 nodes! Network addresses for these range from 128.0.0.0 to 191.0.0.0. Here the last two zeros get replaced by the node addresses.

Class C - These are the baby networks that can have only 254 nodes at the maximum. The network IP addresses for these range from 192.0.0.0 to 223.0.0.0.

For a given network address, the last node address is the broadcast address. For example, for the class C network with address 192.168.1.0, the address 192.168.1.255 is the broadcast address, used to transmit to all nodes in that network. So, this address, along with the network address itself, should not be used as a node address. If you want your network to be permanently on the Internet, then you need to be allocated a network address by the InterNIC. Most of the network addresses now available for allocation are class C addresses. There are also other classes of networks: class D, used for multicasting, and class E, reserved for experimental purposes.

Introducing subnet masks
In an IP network, every machine on the same physical network sees all the data packets
sent out on the network. As the number of computers on a network grows, network
traffic will grow many fold, bringing down performance drastically. In such a situation,
you would divide your network into different subnetworks and minimize the traffic across
the different subnetworks. Interconnectivity between the different subnets would be
provided by routers, which will only transmit data meant for another subnet across itself.
To divide a given network address into two or more subnets, you use subnet masks.
The default subnet mask for class A networks is 255.0.0.0, for class B it is 255.255.0.0, and for class C it is 255.255.255.0; each signifies a network without subnets.
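Python's standard ipaddress module applies these masks directly; a small sketch using an address from the private ranges discussed below:

    import ipaddress

    net = ipaddress.ip_network("192.168.1.0/255.255.255.0")
    print(net)                    # 192.168.1.0/24: a class C sized network, no subnets
    print(net.num_addresses)      # 256 (254 usable hosts plus network and broadcast)
    print(net.broadcast_address)  # 192.168.1.255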

Which class of network? Which IP address?
The InterNIC has allocated (RFC 1597, Address Allocation for Private Internets) particular blocks of network addresses for use in intranets. These IP addresses don't conflict with those of existing Internet hosts and will not be handed out for use on the Internet.
The address blocks are:
Class A: 10.0.0.0
Class B: From 172.16.0.0 to 172.31.0.0
Class C: From 192.168.0.0 to 192.168.255.0

Computers on networks using the above IP addresses will be treated as private ones
and they can communicate only within the company intranet. However, they can still
access the outside world using proxy servers. This adds to the security of your intranet.
So, your intranet should always use addresses from these reserved groups only.

Now, which IP address class should you use for your intranet? The answer depends on the number of hosts that are going to be connected to the intranet. Any machine connected to the network, whether server or client, is called a host. Without subnetting, you can have the following configurations.

No. of machines to be connected    Class of network    Network addresses
254 or less                        C                   192.168.0.0 to 192.168.255.0
255 to 65,534                      B                   172.16.0.0 to 172.31.0.0
65,535 to 16,777,214               A                   10.0.0.0

Thus, if you have a class C network that is not permanently connected to the Internet, your network address can be any one from 192.168.1.0 to 192.168.255.0, and without subnetting, you can have 254 hosts with addresses 192.168.1.1 to 192.168.1.254, if you have selected 192.168.1.0 as your network address; 192.168.1.255 is the broadcast address and 192.168.1.0 is the network address for this network.
Dynamic IP addressing vs static IP addressing
In assigning IP addresses to machines, you have two choices. You can either go around typing in the individual address on each machine, or you can set up one machine to assign IP addresses to the others. The second approach, called dynamic addressing, is preferred for three reasons. First, it makes the job of administering the network, such as adding new clients and avoiding IP clashes, a lot easier. Second, since only those machines that are switched on will need an IP address, you could potentially have more machines on your network with dynamic addressing than you could with static addressing. Finally, mobile computing has become an everyday reality, and notebook computers are likely to move from one network to another or from one subnet to another. In such a situation, if you have static IP addressing, you have to reconfigure the machine every time you move it, something that is eminently avoidable.
You do dynamic addressing with DHCP (Dynamic Host Configuration Protocol). To
make DHCP work on your network you have to set up a DHCP server.

Calculation of IP addresses and subnet masks is no job for the binary challenged. A
handy tool which will do all this for you is the IP Subnet calculator, a freeware tool from
the Net3 Group. It is available on the PCQ July 97 CD-ROM.

An example in subnet design


Warning: You can safely ignore this section and use the IP subnet calculator
instead. Remember that all this is done in binary. If you are curious as to what
happens behind the scenes, here it goes.

We will consider a class C network being subnetted.


First of all, you have to decide how many subnets you want to have. This can be along functional lines, like different subnets for accounts, sales, marketing, and so on. You also
need to know the number of hosts that the largest subnet is to support. And remember
to keep future needs in mind.

Assume that the network address chosen for your intranet is 192.168.1.0, and that you want seven subnets, with the largest one having 20 hosts. Since you are dealing with binary numbers, subnets can be created only in blocks of powers of two. That is, you can have two subnets, four, eight, 16, and so on. In this case you choose eight subnets, which will also give you one free subnet for future use. Your IP address is a 32-bit binary number. Out of this, the first 24 bits (8 x 3) have already gone for the network address. Now you have to set aside the next three (8 = 2^3) for subnetting. That leaves you with 32 - 24 - 3 = 5 bits for host addresses. With five bits you can have 2^5 = 32 individual IP addresses for the hosts. Of these, two (all 1s and all 0s) cannot be assigned to hosts. The all-0s host number identifies the base network or the subnet, while the all-1s host number identifies the broadcast address of the network or subnetwork. So, you can have a maximum of 30 hosts on each subnet.

If you want more than 30 hosts on a subnet, what would you do? Reduce the number of subnets or go for a higher class of network. Remember that the maximum number of hosts on a class C network is 254 (after subtracting the broadcast address and the network address), and with every subnet, you are reducing that number by two: (8 x 30) + (7 x 2) = 240 + 14 = 254.

Now we come to the binary numbers.


Network address = 192.168.1.0 = 11000000.10101000.00000001.00000000
Default subnet mask for class C = 255.255.255.0 = 11111111.11111111.11111111.00000000
Adding 8 subnets (3 subnet bits) = 11111111.11111111.11111111.11100000

Converting this to decimal, the required subnet mask is 255.255.255.224 (11100000 in binary is 224 in decimal notation).
The subnets are numbered 0 to 7. Each subnet is defined by replacing the three most significant bits (the first three from the left) of the last octet in the network address with the binary representation of the subnet number. Thus,
Subnet 0 will be 11000000.10101000.00000001.00000000 = 192.168.1.0
Subnet 1 will be 11000000.10101000.00000001.00100000 = 192.168.1.32
Subnet 2 will be 11000000.10101000.00000001.01000000 = 192.168.1.64
Subnet 3 will be 11000000.10101000.00000001.01100000 = 192.168.1.96
Subnet 4 will be 11000000.10101000.00000001.10000000 = 192.168.1.128
Subnet 5 will be 11000000.10101000.00000001.10100000 = 192.168.1.160
Subnet 6 will be 11000000.10101000.00000001.11000000 = 192.168.1.192
Subnet 7 will be 11000000.10101000.00000001.11100000 = 192.168.1.224

A quick check on your calculations is that the fourth octet (in decimal) of all subnets will be a multiple of the fourth octet (in decimal) of subnet 1. As originally defined, subnets with all 0s and all 1s (subnets 0 and 7 in this case) were not to be used. But today's routers can overcome this limitation.

Now we come to the host addresses for each of the subnets. Hosts are numbered from 1 onwards, as against subnets, which as we saw are numbered from 0 onwards. In this case, we have 30 hosts in each subnet, and they will be numbered from 1 to 30. To arrive at the host IP address, replace the host portion of the relevant subnet address (the last five digits of the fourth octet in this case) with the binary equivalent of the host number. Thus, the IP address of host number 3 on subnet 1 will be 11000000.10101000.00000001.00100011 = 192.168.1.35, and that for host number 30 in subnet 6 will be 11000000.10101000.00000001.11011110 = 192.168.1.222. The broadcast address for subnet 4 is 11000000.10101000.00000001.10011111 = 192.168.1.159, which is one less than the subnet address of subnet 5.
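Python's ipaddress module reproduces this whole worked example, which makes it a handy stand-in for the subnet calculator mentioned earlier:

    import ipaddress

    net = ipaddress.ip_network("192.168.1.0/24")
    subnets = list(net.subnets(prefixlen_diff=3))   # 3 subnet bits -> 8 subnets
    print(subnets[1])                               # 192.168.1.32/27 (mask 255.255.255.224)
    print(list(subnets[1].hosts())[2])              # host 3 on subnet 1: 192.168.1.35
    print(subnets[4].broadcast_address)             # 192.168.1.159, as computed above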

The OSI Model's Seven Layers Defined and Functions Explained

Summary
The Open Systems Interconnect (OSI) model has seven layers. This article describes and
explains them, beginning with the 'lowest' in the hierarchy (the physical) and proceeding to the
'highest' (the application). The layers are stacked this way:

 Application
 Presentation
 Session
 Transport
 Network
 Data Link
 Physical

PHYSICAL LAYER
The physical layer, the lowest layer of the OSI model, is concerned with the transmission and
reception of the unstructured raw bit stream over a physical medium. It describes the
electrical/optical, mechanical, and functional interfaces to the physical medium, and carries the
signals for all of the higher layers. It provides:

 Data encoding: modifies the simple digital signal pattern (1s and 0s) used by the PC to
better accommodate the characteristics of the physical medium, and to aid in bit and
frame synchronization. It determines:
o What signal state represents a binary 1
o How the receiving station knows when a "bit-time" starts
o How the receiving station delimits a frame
 Physical medium attachment, accommodating various possibilities in the medium:
o Will an external transceiver (MAU) be used to connect to the medium?
o How many pins do the connectors have and what is each pin used for?
 Transmission technique: determines whether the encoded bits will be transmitted by
baseband (digital) or broadband (analog) signaling.
 Physical medium transmission: transmits bits as electrical or optical signals appropriate
for the physical medium, and determines:
o What physical medium options can be used
o How many volts/db should be used to represent a given signal state, using a
given physical medium
DATA LINK LAYER
The data link layer provides error-free transfer of data frames from one node to another over
the physical layer, allowing layers above it to assume virtually error-free transmission over the
link. To do this, the data link layer provides:

 Link establishment and termination: establishes and terminates the logical link between
two nodes.
 Frame traffic control: tells the transmitting node to "back-off" when no frame buffers
are available.
 Frame sequencing: transmits/receives frames sequentially.
 Frame acknowledgment: provides/expects frame acknowledgments. Detects and
recovers from errors that occur in the physical layer by retransmitting
non-acknowledged frames and handling duplicate frame receipt.
 Frame delimiting: creates and recognizes frame boundaries.
 Frame error checking: checks received frames for integrity.
 Media access management: determines when the node "has the right" to use the
physical medium.

NETWORK LAYER
The network layer controls the operation of the subnet, deciding which physical path the data
should take based on network conditions, priority of service, and other factors. It provides:

 Routing: routes frames among networks.


 Subnet traffic control: routers (network layer intermediate systems) can instruct a
sending station to "throttle back" its frame transmission when the router's buffer fills
up.
 Frame fragmentation: if it determines that a downstream router's maximum
transmission unit (MTU) size is less than the frame size, a router can fragment a frame
for transmission and re-assembly at the destination station.
 Logical-physical address mapping: translates logical addresses, or names, into physical
addresses.
 Subnet usage accounting: has accounting functions to keep track of frames forwarded
by subnet intermediate systems, to produce billing information.

Communications Subnet
The network layer software must build headers so that the network layer software residing in
the subnet intermediate systems can recognize them and use them to route data to the
destination address.
This layer relieves the upper layers of the need to know anything about the data transmission
and intermediate switching technologies used to connect systems. It establishes, maintains and
terminates connections across the intervening communications facility (one or several
intermediate systems in the communication subnet).

In the network layer and the layers below, peer protocols exist between a node and its
immediate neighbor, but the neighbor may be a node through which data is routed, not the
destination station. The source and destination stations may be separated by many
intermediate systems.

TRANSPORT LAYER
The transport layer ensures that messages are delivered error-free, in sequence, and with no
losses or duplications. It relieves the higher layer protocols from any concern with the transfer
of data between them and their peers.

The size and complexity of a transport protocol depends on the type of service it can get from
the network layer. For a reliable network layer with virtual circuit capability, a minimal
transport layer is required. If the network layer is unreliable and/or only supports datagrams,
the transport protocol should include extensive error detection and recovery.

The transport layer provides:

 Message segmentation: accepts a message from the (session) layer above it, splits the
message into smaller units (if not already small enough), and passes the smaller units
down to the network layer. The transport layer at the destination station reassembles
the message.
 Message acknowledgment: provides reliable end-to-end message delivery with
acknowledgments.
 Message traffic control: tells the transmitting station to "back-off" when no message
buffers are available.
 Session multiplexing: multiplexes several message streams, or sessions onto one logical
link and keeps track of which messages belong to which sessions (see session layer).
Typically, the transport layer can accept relatively large messages, but there are strict message
size limits imposed by the network (or lower) layer. Consequently, the transport layer must
break up the messages into smaller units, or frames, prepending a header to each frame.

The transport layer header information must then include control information, such as message
start and message end flags, to enable the transport layer on the other end to recognize
message boundaries. In addition, if the lower layers do not maintain sequence, the transport
header must contain sequence information to enable the transport layer on the receiving end
to get the pieces back together in the right order before handing the received message up to
the layer above.
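
Here is a toy Python sketch of segmentation, sequencing, and message boundaries. The header
fields (a sequence number and an end-of-message flag) are invented for illustration; real
transport protocols such as TCP define their own header formats.

import random

SEGMENT_SIZE = 8  # deliberately tiny so the example produces several segments

def segment(message):
    chunks = [message[i:i + SEGMENT_SIZE]
              for i in range(0, len(message), SEGMENT_SIZE)]
    # Each segment carries (sequence number, end-of-message flag, data).
    return [(seq, seq == len(chunks) - 1, chunk)
            for seq, chunk in enumerate(chunks)]

def reassemble(segments):
    ordered = sorted(segments)           # restore sequence order
    assert ordered[-1][1], "end-of-message segment missing"
    return b"".join(chunk for _, _, chunk in ordered)

segs = segment(b"a message larger than one segment")
random.shuffle(segs)                     # simulate out-of-order arrival
assert reassemble(segs) == b"a message larger than one segment"
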
End-to-end layers
Unlike the lower "subnet" layers whose protocol is between immediately adjacent nodes, the
transport layer and the layers above are true "source to destination" or end-to-end layers, and
are not concerned with the details of the underlying communications facility. Transport layer
software (and software above it) on the source station carries on a conversation with similar
software on the destination station by using message headers and control messages.
SESSION LAYER
The session layer allows session establishment between processes running on different
stations. It provides:

 Session establishment, maintenance and termination: allows two application processes
on different machines to establish, use and terminate a connection, called a session.
 Session support: performs the functions that allow these processes to communicate
over the network, performing security, name recognition, logging, and so on.
PRESENTATION LAYER
The presentation layer formats the data to be presented to the application layer. It can be
viewed as the translator for the network. This layer may translate data from a format used by
the application layer into a common format at the sending station, then translate the common
format to a format known to the application layer at the receiving station.
The presentation layer provides:
 Character code translation: for example, ASCII to EBCDIC (see the sketch after this list).
 Data conversion: bit order, CR-CR/LF, integer-floating point, and so on.
 Data compression: reduces the number of bits that need to be transmitted on the
network.
 Data encryption: encrypts data for security purposes; for example, password encryption.
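
Two of these jobs can be demonstrated with Python's standard library alone. The "cp500" codec
used here is just one common EBCDIC code page, chosen for illustration.

import zlib

text = "HELLO"
ebcdic = text.encode("cp500")              # character code translation (to EBCDIC)
assert ebcdic != text.encode("ascii")      # the byte values differ from ASCII
assert ebcdic.decode("cp500") == text      # translate back at the receiving side

compressed = zlib.compress(b"a" * 1000)    # data compression
print(len(compressed), "bytes instead of 1000")
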
APPLICATION LAYER

The application layer serves as the window for users and application processes to access
network services. This layer contains a variety of commonly needed functions:
 Resource sharing and device redirection
 Remote file access
 Remote printer access
 Inter-process communication
 Network management
 Directory services
 Electronic messaging (such as mail)
 Network virtual terminals
DHCP (UDP ports 67 and 68)

In most client-server applications, the port number of a server is a well-known number, while the client
uses a currently available port number. DHCP is different. Here, both the client and the server use a
well-known port: UDP port 67 for the DHCP server, and UDP port 68 for the DHCP client.

If you're planning on pursuing a field in networking or just looking to expand your networking
knowledge then this article is for you. TCP/IP utilities are essential -- not only will they help you on your
networking exams but you'll be able to diagnose most TCP/IP problems and begin working on solutions.

The top 7 tools that I will talk about today include: Ping, Tracert, ARP, Netstat, Nbtstat,
NSLookup, and IPconfig. These tools will help you to check the status of your network and
allow you to troubleshoot and test connectivity to remote hosts.

You use these utilities at the command prompt, which you reach by clicking Start, going to Run
and typing cmd.

Here are the top 7 TCP/IP utilities and their functions.

1. Ping

The PING utility tests connectivity between two hosts. PING uses a special protocol called the
Internet Control Message Protocol (ICMP) to determine whether the remote machine (website,
server, etc.) can receive the test packet and reply.

Ping is also a great way to verify whether you have TCP/IP installed and your network card is
working.

We'll start by Pinging the loopback address (127.0.0.1) to verify that TCP/IP is installed and
configured correctly on the local computer.

Type: PING 127.0.0.1

A successful reply tells you that TCP/IP and your network card are working.

To test out connectivity to a website all you have to do is type: ping espn.com

The results should tell you if the connection was successful or if you had any lost packets.

Packet loss describes a condition in which data packets appear to be transmitted correctly at one
end of a connection, but never arrive at the other. Why? Well, there are a few possibilities.
The network connection might be poor and packets get damaged in transit or the packet was
dropped at a router because of internet congestion. Some Internet Web servers may be
configured to disregard ping requests for security purposes.

Note the IP address of espn.com -- 199.181.132.250. You can also ping this address and get the
same result.

However, Ping is not just used to test websites. It can also test connectivity to various servers:
DNS, DHCP, your Print server, etc. As you get more into networking you'll realize just how
handy the Ping utility can be.
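
If you would rather script this check than type it by hand, one simple approach is to call the
ping command from Python. The "-n 1" flag (one echo request) is the Windows form; on
Linux/macOS it would be "-c 1".

import subprocess

def host_is_reachable(host):
    # Exit code 0 means the host answered the echo request.
    result = subprocess.run(["ping", "-n", "1", host], capture_output=True)
    return result.returncode == 0

print(host_is_reachable("127.0.0.1"))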

2. Tracert

Tracert is very similar to Ping, except that Tracert identifies the path taken at each hop,
rather than just the time it takes for each packet to return (as Ping does).

If I have trouble connecting to a remote host I will use Tracert to see where that connection fails.
Any information sent from a source computer must travel through many computers / servers /
routers (they're all the same thing, essentially) before it reaches a destination.
It may not be your computer but something that is down along the way. It can also tell you if
communication is slow because a link has gone down between you and the destination.

If you know there are normally 4 routers but Tracert returns 8 responses, you know your packets
are taking an indirect route due to a link being down.

3. ARP

The ARP utility helps diagnose problems associated with the Address Resolution Protocol
(ARP).

TCP/IP hosts use ARP to determine the physical (MAC) address that corresponds with a specific
IP address. Type arp with the -a option to display IP addresses that have recently been resolved
to MAC addresses.

4. Netstat

Netstat (Network Statistics) displays network connections (both incoming and outgoing), routing
tables, and a number of network interface statistics.

It is an important part of the Network+ exam, but it's also a helpful tool for finding problems and
determining the amount of traffic on the network as a performance measurement.
Netstat -s provides statistics about incoming and outgoing traffic.
5. Nbtstat

Nbtstat (NetBIOS over TCP/IP) enables you to check information about NetBIOS names.

It lets us view the NetBIOS name cache (nbtstat -c), which shows NetBIOS names and the
corresponding IP addresses the host has resolved, the names resolved by broadcast or via
WINS (nbtstat -r), and the names registered by the local system (nbtstat -n).
6. NSLookup

NSLookup provides a command-line utility for diagnosing DNS problems. In its most basic
usage, NSLookup returns the IP address for a given host name.
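
The same basic lookups can be done from Python's standard library, which asks the resolver
your system is configured to use; espn.com appears here only to match the earlier example.

import socket

print(socket.gethostbyname("espn.com"))      # host name -> IP address
# Reverse lookup; this raises an error if no PTR record exists for the address.
print(socket.gethostbyaddr("127.0.0.1")[0])  # IP address -> host name
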
7. IPConfig

IPConfig is not strictly one of the TCP/IP utilities, but it is useful for showing current TCP/IP settings.

The IPConfig command line utility will show detailed information about the network you are
connected to. It also helps with reconfiguration of your IP address through release and renew.

Let's say you want to know what your IP address is -- ipconfig is what you type in the
command prompt.
ipconfig will give a quick view of your IP address, your subnet mask and default gateway.

ipconfig /all will give you more detailed information.


Through ipconfig /all we can find the DNS servers, whether DHCP is enabled, and the MAC
address, along with other helpful information. All good things to know if we have trouble getting
connected to the internet.

Other IPConfig tools that are helpful include ipconfig /release and ipconfig /renew. But before I
get into these, let's discuss how we actually get an IP address.

There are two ways to obtain an IP address. One way is to have a static IP address which we
manually assign. The second one is to have a dynamic IP address obtained through a DHCP
server.

If you were to right click on Network Connections, go to Properties, right click on Local Area
Connection, scroll down to Internet Protocol (TCP/IP), and select Properties -- you'll see two
options:

 Obtain an IP address automatically
 Use the following IP address
Unless you know your static IP address you'll want to stick to the option for automatically
obtaining the IP address. If you have it set to automatic your computer will be issued an IP
through a DHCP server.

And just in case you're wondering, Dynamic Host Configuration Protocol (DHCP) is a network
protocol that enables a server to automatically assign an IP address to a computer from a defined
range of numbers (i.e., a scope) configured for a given network.

In layman's terms: I have a cable modem at home and I have that modem connected to a wireless
router that issues IP addresses to anyone who connects to that router. That is DHCP
issuing out IP addresses.

Your company probably has a server dedicated to this. Understanding this is definitely important
for any networking exam.

Let's look at what happens when we release our IP address.

I've just lost internet connection and my IP address is 0.0.0.0. If I type ipconfig /renew, this
option re-establishes TCP/IP connections on all network adapters and I can resume my internet
surfing.
Note: ipconfig /release and ipconfig /renew won't work if you manually assigned your IP address.

That's about as far as these utilities go. Again, not only are they important for any Network exam,
they are essential tools used in the field for troubleshooting and diagnosing network problems.

Each layer explained



by Bradley Mitchell

Updated March 06, 2018

Open Systems Interconnection (OSI) Model

The Open Systems Interconnection (OSI) model defines a networking framework to implement
protocols in layers, with control passed from one layer to the next. It is primarily used today as a
teaching tool. It conceptually divides computer network architecture into 7 layers in a logical
progression. The lower layers deal with electrical signals, chunks of binary data, and routing of
these data across networks. Higher levels cover network requests and responses, representation
of data, and network protocols as seen from a user's point of view. 

The OSI model was originally conceived as a standard architecture for building network systems
and indeed, many popular network technologies today reflect the layered design of OSI.


Physical Layer

At Layer 1, the Physical layer of the OSI model is responsible for ultimate transmission of digital
data bits from the Physical layer of the sending (source) device over network communications
media to the Physical layer of the receiving (destination) device. Examples of Layer 1
technologies include Ethernet cables and Token Ring networks. Additionally, hubs and other
repeaters are standard network devices that function at the Physical layer, as are cable
connectors.

At the Physical layer, data are transmitted using the type of signaling supported by the physical
medium: electric voltages, radio frequencies, or pulses of infrared or ordinary light.


Data Link Layer

When obtaining data from the Physical layer, the Data Link layer checks for physical
transmission errors and packages bits into data "frames". The Data Link layer also manages
physical addressing schemes such as MAC addresses for Ethernet networks, controlling how the
various network devices access the physical medium. Because the Data Link layer is the single
most complex layer in the OSI model, it is often divided into two parts, the "Media Access
Control" sublayer and the "Logical Link Control" sublayer.


Network Layer
The Network layer adds the concept of routing above the Data Link layer. When data arrives at
the Network layer, the source and destination addresses contained inside each frame are
examined to determine if the data has reached its final destination. If the data has reached the
final destination, Layer 3 formats the data into packets to be delivered up to the Transport layer.
Otherwise, the Network layer updates the destination address and pushes the frame back down to
the lower layers.

To support routing, the Network layer maintains logical addresses such as IP addresses for
devices on the network. The Network layer also manages the mapping between these logical
addresses and physical addresses. In IP networking, this mapping is accomplished through the
Address Resolution Protocol (ARP).


Transport Layer

The Transport Layer delivers data across network connections. TCP is the most common
example of a Transport Layer 4 network protocol. Different transport protocols may support a
range of optional capabilities including error recovery, flow control, and support for re-
transmission.


Session Layer

The Session Layer manages the sequence and flow of events that initiate and tear down network
connections. At Layer 5, it is built to support multiple types of connections that can be created
dynamically and run over individual networks.


Presentation Layer
The Presentation layer has the simplest function of any layer in the OSI model. At Layer 6, it
handles syntax processing of message data such as format conversions and encryption /
decryption needed to support the Application layer above it.


Application Layer

The Application layer supplies network services to end-user applications. Network services are
typically protocols that work with the user's data. For example, in a Web browser application, the
Application layer protocol HTTP packages the data needed to send and receive Web page
content. Layer 7 provides data to (and obtains data from) the Presentation layer.

OSI Protocols

Definition - What does OSI Protocols mean?


OSI protocols are a family of standards for information exchange. They were developed and
designed by the International Organization for Standardization (ISO). The ISO model, introduced
in 1977, consists of seven different layers. This model has been criticized because of its
technicality and limited features.

Each layer of the ISO model has its own protocols and functions. The OSI protocol stack was
later adapted into the TCP/IP stack. Some protocols that use only the data link and network
layers of the OSI model remain popular in some networks.

Techopedia explains OSI Protocols


The OSI protocol stack works in a hierarchical fashion, from the hardware physical layer to the
software application layer. There are a total of seven layers. Data and information are received
by each layer from an upper layer. After the required processing, this layer then passes the
information on to the next lower layer. A header is added to the forwarded message for the
convenience of the next layer. Each header consists of information such as source and
destination addresses, protocol used, sequence number and other flow-control related data.

The following are the OSI protocols used in the seven layers of the OSI Model:
1. Layer 1, the Physical Layer: This layer deals with the hardware of networks such as cabling. The
major protocols used by this layer include Bluetooth, PON, OTN, DSL, IEEE 802.11, IEEE 802.3,
I.431 and TIA-449.
2. Layer 2, the Data Link Layer: This layer receives data from the physical layer and assembles it
into frames (framing). The protocols used by the Data Link Layer include:
ARP, CSLIP, HDLC, IEEE 802.3, PPP, X.25, SLIP, ATM, SDLC and PLIP.
3. Layer 3, the Network Layer: This is the most important layer of the OSI model, which performs
real-time processing and transfers data from node to node. Routers and switches are the
devices used for this layer. The network layer supports the following protocols: Internet Protocol
(IPv4), Internet Protocol (IPv6), IPX, AppleTalk, ICMP, IPSec and IGMP.
4. Layer 4, the Transport Layer: The transport layer works in two communication
modes: connection-oriented and connectionless. This layer transmits data from the source to the
destination node. It uses the most important protocols of the OSI protocol family, which are:
Transmission Control Protocol (TCP), UDP, SPX, DCCP and SCTP.
5. Layer 5, the Session Layer: The session layer creates a session between the source and the
destination nodes and terminates sessions on completion of the communication process. The
protocols used are: PPTP, SAP, L2TP and NetBIOS.
6. Layer 6, the Presentation Layer: The functions of encryption and decryption are defined on this
layer. It converts data formats into a format readable by the application layer. The following are
the presentation layer protocols: XDR, TLS, SSL and MIME.
7. Layer 7, the Application Layer: This layer works at the user end to interact with user
applications. QoS (quality of service), file transfer and email are the major popular services of
the application layer. This layer uses the following protocols: HTTP, SMTP, DHCP, FTP, Telnet,
SNMP and SMPP.

HTTP, DHCP, DNS, FTP, SMTP, Proxy and Client Server Architecture.



HTTP

HTTP is a request/response standard between a client and a server. A client is the end-user, the
server is the web site. The client making an HTTP request – using a web browser, spider, or
other end-user tool – is referred to as the user agent. The responding server – which stores or
creates resources such as HTML files and images – is called the origin server. In between the
user agent and origin server may be several intermediaries, such as proxies, gateways, and
tunnels. HTTP is not constrained to using TCP/IP and its supporting layers, although this is its
most popular application on the Internet. Indeed HTTP can be “implemented on top of any other
protocol on the Internet, or on other networks. HTTP only presumes a reliable transport; any
protocol that provides such guarantees can be used.”

Typically, an HTTP client initiates a request. It establishes a Transmission Control Protocol
(TCP) connection to a particular port on a host (port 80 by default). An HTTP server listening
on that port waits for the client to send a request message. Upon receiving the request, the server
sends back a status line, such as “HTTP/1.1 200 OK”, and a message of its own, the body of
which is perhaps the requested file, an error message, or some other information.

HTTP uses TCP rather than UDP because much data must be sent for a webpage, and TCP
provides transmission control, presents the data in order, and provides error correction. See the
difference between TCP and UDP.
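
You can watch the request/response exchange directly by speaking HTTP over a plain TCP
socket from Python. This is a bare-bones sketch with no error handling; example.com is a
placeholder origin server.

import socket

host = "example.com"                    # placeholder origin server
request = ("GET / HTTP/1.1\r\n"
           "Host: " + host + "\r\n"
           "Connection: close\r\n\r\n")

with socket.create_connection((host, 80)) as sock:   # TCP, port 80 by default
    sock.sendall(request.encode("ascii"))
    response = b""
    while chunk := sock.recv(4096):
        response += chunk

print(response.split(b"\r\n")[0])       # status line, e.g. b"HTTP/1.1 200 OK"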

Resources to be accessed by HTTP are identified using Uniform Resource Identifiers (URIs) (or,
more specifically, Uniform Resource Locators (URLs)) using the http: or https: URI schemes.

HTTPS

(Hypertext Transfer Protocol over Secure Socket Layer) is a URI scheme used to indicate a
secure HTTP connection. It is syntactically identical to the http:// scheme normally used for
accessing resources using HTTP. Using an https: URL indicates that HTTP is to be used, but
with a different default TCP port (443) and an additional encryption/authentication layer between
HTTP and TCP. This system was designed by Netscape Communications Corporation to
provide authentication and encrypted communication and is widely used on the World Wide
Web for security-sensitive communication such as payment transactions and corporate logons.

DHCP

The Dynamic Host Configuration Protocol (DHCP) automates the assignment of IP addresses,
subnet masks, default gateway, and other IP parameters. [1]

When a DHCP-configured client (be it a computer or any other network aware device) connects
to a network, the DHCP client sends a broadcast query requesting necessary information from a
DHCP server. The DHCP server manages a pool of IP addresses and information about client
configuration parameters such as the default gateway, the domain name, the DNS servers, other
servers such as time servers, and so forth. Upon receipt of a valid request the server will assign
the computer an IP address, a lease (the length of time for which the allocation is valid), and
other TCP/IP configuration parameters, such as the subnet mask and the default gateway. The
query is typically initiated immediately after booting and must be completed before the client can
initiate IP-based communication with other hosts.

DHCP provides three modes for allocating IP addresses. The best-known mode is dynamic, in
which the client is provided a “lease” on an IP address for a period of time. Depending on the
stability of the network, this could range from hours (a wireless network at an airport) to months
(for desktops in a wired lab). At any time before the lease expires, the DHCP client can request
renewal of the lease on the current IP address. A properly-functioning client will use the renewal
mechanism to maintain the same IP address throughout its connection to a single network,
otherwise it may risk losing its lease while still connected, thus disrupting network connectivity
while it renegotiates with the server for its original or a new IP address.
The two other modes for allocation of IP addresses are automatic (also known as DHCP
Reservation), in which the address is permanently assigned to a client, and manual, in which the
address is selected by the client (manually by the user or any other means) and the DHCP
protocol messages are used to inform the server that the address has been allocated.

The automatic and manual methods are generally used when finer-grained control over IP
addresses is required (typical of tight firewall setups), although typically a firewall will allow
access to the range of IP addresses that can be dynamically allocated by the DHCP server.

Depending on implementation, the DHCP server has three methods of allocating IP addresses:

Dynamic Allocation: A network administrator assigns a range of IP addresses to DHCP, and
each client computer on the LAN has its IP software configured to request an IP address from the
DHCP server during network initialization. The request-and-grant process uses a lease concept
with a controllable time period, allowing the DHCP server to reclaim (and then reallocate) IP
addresses that are not renewed (dynamic re-use of IP addresses).

Automatic Allocation: The DHCP server permanently assigns a free IP address to a requesting
client from the range defined by the administrator.

Manual Allocation: The DHCP server allocates an IP address based on a table with MAC
address – IP address pairs manually filled in by the server administrator. Only requesting clients
with a MAC address listed in this table will be allocated an IP address.

Some DHCP server software can manage hosts by more than one of the above methods. For
example, the known hosts on the network can be assigned an IP address based on their MAC
address (manual allocation) whereas “guest” computers (such as laptops via Wi-Fi) are allocated
a temporary address out of a pool compatible with the network to which they’re attached
(dynamic allocation).
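
The lease concept behind dynamic allocation can be sketched in a few lines of Python. The
address range and lease time here are invented for illustration.

import time

POOL = ["192.168.1." + str(n) for n in range(100, 110)]  # assumed scope
LEASE_SECONDS = 3600
leases = {}   # MAC address -> (IP address, lease expiry time)

def allocate(mac):
    now = time.time()
    # Reclaim expired leases so their addresses can be re-used.
    for m, (ip, expiry) in list(leases.items()):
        if expiry < now:
            POOL.append(ip)
            del leases[m]
    if mac in leases:
        ip, _ = leases[mac]               # renewal keeps the same address
    else:
        ip = POOL.pop(0)                  # grant a free address from the pool
    leases[mac] = (ip, now + LEASE_SECONDS)
    return ip

print(allocate("aa:bb:cc:dd:ee:ff"))      # e.g. 192.168.1.100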

DNS

The Domain Name System (DNS) associates various sorts of information with domain names;
most importantly, it serves as the “phone book” for the Internet by translating human-readable
computer hostnames, e.g. www.example.com, into the IP addresses, e.g. 208.77.188.166,
that networking equipment needs to deliver information. It also stores other information such as
the list of mail exchange servers that accept email for a given domain. In providing a worldwide
keyword-based redirection service, the Domain Name System is an essential component of
contemporary Internet use.
The most basic task of DNS is to translate hostnames to IP addresses. In very simple terms, it can
be compared to a phone book. DNS also has other important uses.

Above all, DNS makes it possible to assign Internet names to organizations (or concerns they
represent), independently of the physical routing hierarchy represented by the numerical IP
address. Because of this, hyperlinks and Internet contact information can remain the same,
whatever the current IP routing arrangements may be, and can take a human-readable form (such
as “example.com”), which is easier to remember than the IP address 208.77.188.166. People take
advantage of this when they recite meaningful URLs and e-mail addresses without caring how
the machine will actually locate them.

The Domain Name System distributes the responsibility for assigning domain names and
mapping them to IP networks by allowing an authoritative server for each domain to keep track
of its own changes, avoiding the need for a central registrar to be continually consulted and
updated.

Parts of a domain name

A domain name usually consists of two or more parts (technically labels), separated by dots. For
example, example.com.

The rightmost label conveys the top-level domain (for example, the address
www.example.com has the top-level domain com).

Each label to the left specifies a subdivision, or subdomain, of the domain above it.
Note: “subdomain” expresses relative dependence, not absolute dependence. For example:
example.com comprises a subdomain of the com domain, and www.example.com
comprises a subdomain of the domain example.com. In theory, this subdivision can go down to
127 levels deep. Each label can contain up to 63 characters. The whole domain name does not
exceed a total length of 255 characters. In practice, some domain registries may have shorter
limits.

A hostname refers to a domain name that has one or more associated IP addresses; i.e., the
www.example.com and example.com domains are both hostnames; however, the com
domain is not.

DNS servers

The Domain Name System consists of a hierarchical set of DNS servers. Each domain or
subdomain has one or more authoritative DNS servers that publish information about that
domain and the name servers of any domains “beneath” it. The hierarchy of authoritative DNS
servers matches the hierarchy of domains. At the top of the hierarchy stand the root nameservers:
the servers to query when looking up (resolving) a top-level domain name (TLD).

DNS resolvers
A resolver looks up the resource record information associated with nodes. A resolver knows
how to communicate with name servers by sending DNS queries and heeding DNS responses.

A DNS query may be either a recursive query or a non-recursive query:

A non-recursive query is one where the DNS server may provide a partial answer to the query
(or give an error). DNS servers must support non-recursive queries.

A recursive query is one where the DNS server will fully answer the query (or give an error).
DNS servers are not required to support recursive queries.

The resolver (or another DNS server acting recursively on behalf of the resolver) negotiates use
of recursive service using bits in the query headers.

Resolving usually entails iterating through several name servers to find the needed information.
However, some resolvers function simplistically and can only communicate with a single name
server. These simple resolvers rely on a recursive query to a recursive name server to perform
the work of finding information for them.
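
From a program, address lookups go through the system's configured (typically recursive)
resolver. The standard library covers address lookups; for other record types, such as the MX
records mentioned above, the commented lines assume the third-party dnspython package.

import socket

# Address lookup via the system resolver.
for info in socket.getaddrinfo("example.com", None, proto=socket.IPPROTO_TCP):
    print(info[4][0])

# MX lookup -- assumes dnspython (pip install dnspython) and its current API:
# import dns.resolver
# for record in dns.resolver.resolve("example.com", "MX"):
#     print(record.preference, record.exchange)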

FTP

In computing, the File Transfer Protocol (FTP) (Port 21) is a network protocol used to transfer
data from one computer to another through a network, such as over the Internet.

FTP is a file transfer protocol for exchanging files over any TCP/IP based network to manipulate
files on another computer on that network regardless of which operating systems are involved (if
the computers permit FTP access). There are many existing FTP client and server programs. FTP
servers can be found everywhere: on game servers, voice servers, internet hosts, and other
physical servers.
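
Python's standard ftplib module implements the client side of the protocol. A minimal sketch,
assuming a placeholder host that permits anonymous access:

from ftplib import FTP

with FTP("ftp.example.com") as ftp:   # placeholder host; control connection on port 21
    ftp.login()                       # anonymous login, if the server permits it
    ftp.retrlines("LIST")             # print a directory listing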

SMTP

SMTP is a relatively simple, text-based protocol, in which one or more recipients of a message
are specified (and in most cases verified to exist) along with the message text and possibly other
encoded objects. The message is then transferred to a remote server using a procedure of queries
and responses between the client and server. Either an end-user’s email client, a.k.a. MUA (Mail
User Agent), or a relaying server’s MTA (Mail Transport Agents) can act as an SMTP client.

An email client knows the outgoing mail SMTP server from its configuration. A relaying server
typically determines which SMTP server to connect to by looking up the MX (Mail eXchange)
DNS record for each recipient’s domain name (the part of the email address to the right of the at
(@) sign). Conformant MTAs (not all) fall back to a simple A record in the case of no MX.
Some current mail transfer agents will also use SRV records, a more general form of MX, though
these are not widely adopted. (Relaying servers can also be configured to use a smart host.)
The SMTP client initiates a TCP connection to the server's port 25 (unless overridden by
configuration). It is quite easy to test an SMTP server using the telnet program (see below).

SMTP is a “push” protocol that does not allow one to “pull” messages from a remote server on
demand. To do this a mail client must use POP3 or IMAP. Another SMTP server can trigger a
delivery in SMTP using ETRN.
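
Python's standard smtplib module speaks this query/response procedure for you. A minimal
sketch, assuming a reachable SMTP server at the placeholder address mail.example.com that
will relay for these placeholder mailboxes:

import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "sender@example.com"        # placeholder addresses
msg["To"] = "recipient@example.com"
msg["Subject"] = "Test"
msg.set_content("Hello over SMTP")

with smtplib.SMTP("mail.example.com", 25) as server:   # placeholder server
    server.send_message(msg)              # the SMTP dialogue happens here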

proxy server

In computer networks, a proxy server is a server (a computer system or an application program)
which services the requests of its clients by forwarding requests to other servers. A client
connects to the proxy server, requesting some service, such as a file, connection, web page, or
other resource, available from a different server. The proxy server provides the resource by
connecting to the specified server and requesting the service on behalf of the client. A proxy
server may optionally alter the client’s request or the server’s response, and sometimes it may
serve the request without contacting the specified server. In this case, it would ‘cache’ the first
request to the remote server, so it could save the information for later, and make everything as
fast as possible.

A proxy server that passes all requests and replies unmodified is usually called a gateway or
sometimes tunneling proxy.

A proxy server can be placed in the user’s local computer or at specific key points between the
user and the destination servers or the Internet.
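
From the client's point of view, using a proxy simply means directing requests at it instead of
at the origin server. A small sketch with Python's standard urllib; the proxy address is a
placeholder for whatever proxy your network provides.

import urllib.request

proxy = urllib.request.ProxyHandler({
    "http": "http://proxy.example.com:8080",    # placeholder proxy
    "https": "http://proxy.example.com:8080",
})
opener = urllib.request.build_opener(proxy)

with opener.open("http://example.com/") as response:
    print(response.status)                       # e.g. 200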

Caching proxy server

A proxy server can service requests without contacting the specified server, by retrieving content
saved from a previous request, made by the same client or even other clients. This is called
caching.

Web proxy

A proxy that focuses on WWW traffic is called a “web proxy”. The most common use of a web
proxy is to serve as a web cache.

Content Filtering Web Proxy

A content filtering web proxy server provides administrative control over the content that may be
relayed through the proxy. It is commonly used in commercial and non-commercial
organizations (especially schools) to ensure that Internet usage conforms to acceptable use
policy.

Client-server
Client-server is a computing architecture which separates a client from a server, and is almost
always implemented over a computer network. A client-server application is a distributed system
that consists of both client and server software. A client is a piece of software or a process that
may initiate a communication session, while a server cannot initiate sessions, but instead waits
for requests from clients. The terms client and server may also refer to the host computers
connected to the network that run the client and server software, respectively.

Client/server describes the relationship between two computer programs in which one program,
the client, makes a service request from another program, the server, which fulfills the request.
Although the client/server idea can be used by programs within a single computer, it is a more
important idea in a network. In a network, the client/server model provides a convenient way to
interconnect programs that are distributed efficiently across different locations. Computer
transactions using the client/server model are very common. Most Internet applications, such as
email, web access and database access, are based on the client/server model. For example, a web
browser is a client program at the user computer that may access information at any web server
in the world. To check your bank account from your computer, a web browser client program in
your computer forwards your request to a web server program at the bank. That program may in
turn forward the request to its own database client program that sends a request to a database
server at another bank computer to retrieve your account balance. The balance is returned back to
the bank database client, which in turn serves it back to the web browser client in your personal
computer, which displays the information for you.

The client/server model has become one of the central ideas of network computing. Most
business applications being written today use the client/server model. So do the Internet's main
application protocols, such as HTTP, SMTP, Telnet, and DNS. In marketing, the term has been
used to distinguish distributed computing by smaller dispersed computers from the “monolithic”
centralized computing of mainframe computers. But this distinction has largely disappeared as
mainframes and their applications have also turned to the client/server model and become part of
network computing.

Each instance of the client software can send data requests to one or more connected servers. In
turn, the servers can accept these requests, process them, and return the requested information to
the client. Although this concept can be applied for a variety of reasons to many different kinds
of applications, the architecture remains fundamentally the same.

The most basic type of client-server architecture employs only two types of hosts: clients and
servers. This type of architecture is sometimes referred to as two-tier. It allows devices to share
files and resources.

These days, clients are most often web browsers, although that has not always been the case.
Servers typically include web servers, database servers and mail servers. Online gaming is
usually client-server too. In the specific case of MMORPG, the servers are typically operated by
the company selling the game; for other games one of the players will act as the host by setting
his game in server mode.
The interaction between client and server is often described using sequence diagrams. Sequence
diagrams are standardized in the Unified Modeling Language.

When both the client- and server-software are running on the same computer, this is called a
single seat setup.
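
The request/reply pattern fits in a few lines of Python. This minimal sketch runs both ends on
one machine (a "single seat" style setup); port 5000 is an arbitrary choice.

import socket
import threading

HOST, PORT = "127.0.0.1", 5000            # arbitrary port for the example

srv = socket.create_server((HOST, PORT))  # server side: bind and listen (passive)

def serve_one():
    conn, _ = srv.accept()                # wait for a client request
    with conn:
        request = conn.recv(1024)
        conn.sendall(b"reply to: " + request)   # process and serve the reply

threading.Thread(target=serve_one).start()

with socket.create_connection((HOST, PORT)) as client:  # client initiates
    client.sendall(b"hello server")
    print(client.recv(1024))              # b'reply to: hello server'
srv.close()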

Characteristics of a client

The sender of requests is known as the client

Initiates requests

Waits for and receives replies.

Usually connects to a small number of servers at one time

Typically interacts directly with end-users using a graphical user interface

Characteristics of a Server

The receiver of requests sent by clients is known as the server

Passive (slave)

Waits for requests from clients

Upon receipt of requests, processes them and then serves replies

Usually accepts connections from a large number of clients

Typically does not interact directly with end-users

Comparison to Peer-to-Peer Architecture

Another type of network architecture is known as peer-to-peer, because each host or instance of
the program can simultaneously act as both a client and a server, and because each has
equivalent responsibilities and status. Peer-to-peer architectures are often abbreviated using the
acronym P2P.

Both client-server and P2P architectures are in wide usage today.

Comparison to Client-Queue-Client Architecture

While the classic client-server architecture requires one of the communication endpoints to act
as a server, which is much harder to implement, Client-Queue-Client allows all endpoints to be
simple clients, while the server consists of some external software, which also acts as a passive
queue (one software instance passes its query to another instance to queue, e.g. in a database,
and then the other instance pulls it from the database, makes a response, and passes it back via
the database). This architecture allows greatly simplified software implementation. Peer-to-peer
architecture was originally based on the Client-Queue-Client concept.

Advantages

In most cases, a client-server architecture enables the roles and responsibilities of a computing
system to be distributed among several independent computers that are known to each other only
through a network. This creates an additional advantage to this architecture: greater ease of
maintenance. For example, it is possible to replace, repair, upgrade, or even relocate a server
while its clients remain both unaware and unaffected by that change. This independence from
change is also referred to as encapsulation.

All the data is stored on the servers, which generally have far greater security controls than most
clients. Servers can better control access and resources, to guarantee that only those clients with
the appropriate permissions may access and change data.

Since data storage is centralized, updates to those data are far easier to administer than would be
possible under a P2P paradigm. Under a P2P architecture, data updates may need to be
distributed and applied to each “peer” in the network, which is both time-consuming and error-
prone, as there can be thousands or even millions of peers.

Many mature client-server technologies are already available which were designed to ensure
security, ‘friendliness’ of the user interface, and ease of use.

It functions with multiple clients of different capabilities.

Disadvantages

Traffic congestion on the network has been an issue since the inception of the client-server
paradigm. As the number of simultaneous client requests to a given server increases, the server
can become severely overloaded. Contrast that to a P2P network, where its bandwidth actually
increases as more nodes are added, since the P2P network’s overall bandwidth can be roughly
computed as the sum of the bandwidths of every node in that network.

The client-server paradigm lacks the robustness of a good P2P network. Under client-server,
should a critical server fail, clients’ requests cannot be fulfilled. In P2P networks, resources are
usually distributed among many nodes. Even if one or more nodes depart and abandon a
downloading file, for example, the remaining nodes should still have the data needed to complete
the download.

Examples

Imagine you are visiting an e-commerce web site. In this case, your computer and web browser
would be considered the client, while the computers, databases, and applications that make up
the online store would be considered the server. When your web browser requests specific
information from the online store, the server finds all of the data in the database needed to satisfy
the browser’s request, assembles that data into a web page, and transmits that page back to your
web browser for you to view.

Specific types of clients include web browsers, email clients, and online chat clients.

Specific types of servers include web servers, ftp servers, application servers, database servers,
mail servers, file servers, print servers, and terminal servers. Most web services are also types of
servers.

Active Directory
Active Directory stores information about network resources and makes these resources
accessible to users, computers and applications by uniquely identifying them on the network. It
provides mechanisms for naming, describing, locating, accessing, managing and securing
network resources.

Active Directory also allows for the central management of the Windows Server 2003 network,
and for the delegation of administrative control over Active Directory objects, such as user data,
printers, servers, databases, groups, computers and security principals and security policies that
are stored in the directory. In other words, the Active Directory service provides the structure
and functions for organising, managing, and controlling network resources.

When you install Active Directory on a Windows Server 2003 stand-alone server computer, the
server becomes a domain controller. You can use the Active Directory Installation wizard to
install Active Directory. To launch the wizard, click Run on the Start Menu and type
DCPromo.exe in the Run text box. DCPromo is a tool that promotes a stand-alone server or a
member server to a domain controller. The first computer in a domain to have Active Directory
installed becomes the root domain controller. If the domain controller is being installed in an
existing Windows domain or if it is being configured as a child domain, the Active Directory
installation process will automatically make the appropriate connections and establish initial
default trust relationships.

During the installation of Active Directory, a writable copy of the Active Directory database is
placed on the server's hard disk. The file is named NTDS.dit and is normally located in the
%systemroot%\NTDS folder. Each domain controller maintains its own copy of the directory
database, containing information about the domain in which it is located. If one domain
controller becomes unavailable, users and computers can still access the Active Directory data
stored on another domain controller. Because a domain can have more than one domain
controller, changes made to the directory on one domain controller must be updated on the
others. The process of copying these updates is called replication and is used to synchronise
information across all the domain controllers in a domain.

DHCP Servers
A Dynamic Host Configuration Protocol (DHCP) server automatically issues IP addresses to
clients on TCP/IP networks. Each IP address uniquely identifies a client and allows it to send and
receive packets of data. Each packet contains the IP address of the sender and the receiver, so no
two clients can have the same IP address at the same time.

DHCP assigns IP addresses dynamically. The client contacts the DHCP server requesting an IP
address, and the DHCP server responds by issuing an IP address from a pool of available
addresses, as well as other IP configuration details, such as WINS or DNS server information,
needed by the client.

DHCP servers do not require authentication when providing a lease, so any client that contacts
the DHCP server can obtain a lease and connect to the network. It is therefore important to
restrict physical and wireless access to your network to prevent unauthorised clients from
connecting to your network and obtaining a DHCP lease. Auditing should be enabled on the
DHCP server and the logs reviewed regularly for possible problems.

Rogue DHCP servers are also a potential problem. If there is a rogue DHCP server on the network,
clients may receive incorrect IP address and configuration information. This is unlikely to
happen if the rogue DHCP server is running Windows 2000 or Windows Server 2003, because
these servers must be authorised in Active Directory, and if the server is not authorised, the
DHCP service will not start.

However, pre-Windows 2000 and non-Windows DHCP servers do not require authorisation and
can be used as rogue DHCP servers in a Windows Server 2003 environment. Issuing out bogus
DHCP leases that do not expire can be a very effective Denial of Service (DoS) technique, so it
is important to monitor network traffic for DHCP server traffic that does not come from
authorised DHCP servers.

You need to be a member of the Administrators group or the DHCP Administrators group to
administer DHCP servers remotely using the DHCP console or netsh utility, so restricting
membership in these groups limits the number of people who can authorise a DHCP server.

Firewalls
Firewalls limit direct access between a network and clients. All traffic must pass through the
firewall, which determines if the traffic should be blocked or allowed. The firewall acts as a
buffer between a Web server and its clients, or between an internal network and external
networks like the Internet. Rules can be implemented on the firewall controlling the kinds of
traffic that pass and who can perform specified actions. A Web server should have at least three
security zones:

 The public Internet, which is untrusted
 The perimeter network, which is semi-trusted
 The private network, which is trusted.
This creates two borders: one between the public network and the perimeter network and the
other between the perimeter network and the private network. Each of these borders must have a
separate security policy.

Perimeter security is often implemented by using a two-stage firewall system: the first stage
allows access to public servers, while the second stage prevents access to the internal network.
The area between the public and private networks is called a perimeter network or a
demilitarised zone (DMZ).

The public side of the perimeter network is protected by a firewall that permits public access to
the services you wish to provide, such as Web access. The private side is protected by another
firewall that only allows encrypted and authenticated protocols needed to let public servers
exchange data with private servers. The private network must be protected against attack from
servers in the perimeter network.

If the private-side firewall policy allows external servers access to your private network, there is
a danger that hackers will be able to work their way through the perimeter network to the private
side of the network. Servers in the perimeter network should never be linked to the domain, so
that domain account information cannot be obtained from them if they are compromised.

Mail Servers
Mail servers allow users to send and receive e-mail messages and all e-mail messages must pass
through at least one mail server. When a message arrives at a destination mail server, it is stored
there until the user retrieves it. If a mail server does not recognise an intended recipient, it will try
to transfer the message to a server that does, and the mail servers will work together to ensure the
message reaches its intended recipient.

When Windows Server 2003 is configured as a mail server, Simple Mail Transfer Protocol
(SMTP) and Post Office Protocol (POP3) are enabled. SMTP is used to send outgoing e-mail,
while POP3 is used to receive incoming e-mail. If Windows Server 2003 is used in the mail
server role, it should be configured to require secure authentication from clients. Client software
and the POP3 service can be configured to accept only encrypted passwords to prevent
interception by unauthorised users.

Windows Server 2003 uses Secure Password Authentication (SPA) to ensure that authentication
between the mail server and clients is encrypted. SPA is integrated with Active Directory, which
is used to authenticate users when they try to retrieve their e-mail. If the POP3 service is
configured to accept only SPA authentication, clients must also be configured to use encrypted
authentication, or else clients will try to authenticate using clear text and will be rejected by the
mail server.

Server Roles
Windows Server 2003 can be configured to perform 11 different roles:
Domain controller: used to manage domains and domain objects; provides user authentication
through Active Directory.

File server: provides access to files stored on the server.

Print server: provides network printing functionality.

DHCP server: allocates IP addresses and provides configuration information to clients.

DNS server: resolves domain names to IP addresses.

WINS server: resolves NetBIOS names to IP addresses.

Mail server: provides incoming (POP3) and outgoing (SMTP) e-mail services.

Application server: makes distributed applications and Web applications available to clients.

Terminal server: allows clients to access applications running on the server.

Remote access/VPN server: provides remote access to machines through dial-up connections
and virtual private networks (VPNs).

Streaming media server: provides Windows Media Services so that clients can access
streaming audio and video.
