Agm Lice Broadband Module
Chapter-16
OPEN SYSTEMS INTERCONNECTION (OSI) MODEL
2.1 Introduction
An ISO standard that covers all aspects of network communication is the Open
Systems Interconnection (OSI) model. An open system is a model that allows any two
different systems to communicate regardless of their underlying architecture. The purpose of
the OSI model is to open communication between different systems without requiring
changes to the logic of the underlying hardware and software. It is not a protocol but a
reference model: a conceptual framework for understanding relationships. The purpose of
the OSI reference model is to guide vendors and developers so that
the digital communication products and software programs they create will interoperate, and
to facilitate clear comparisons among communications tools.
It consists of seven separate but related layers, each of which defines a segment of the
process of moving information across a network. Understanding the fundamentals of the OSI
model provides a solid basis for exploration of data communication.
OSI Page 1 of 11
The seven layers belong to three subgroups. Layers 1, 2, and 3 – physical, data link,
and network – are the network support layers; they deal with the physical aspects of moving
data from one device to another.
Layers 5, 6, and 7 – session, presentation, and application – can be thought of as the user
support layers; they allow interoperability among unrelated software systems.
Layer 4, the transport layer, ensures end-to-end reliable data transmission (while layer 2
ensures reliable transmission on a single link). The upper OSI layers are almost always
implemented in software; the lower layers are a combination of hardware and software, except
for the physical layer, which is mostly hardware.
2.4 Peer-to-Peer Processes
The active protocol elements in each layer are called entities, typically implemented
by means of a software process. Entities in the same layer on different computers are
called peer entities.
The passing of the data and network information down through the layers of the sending
machine and back up through the layers of the receiving machine is made possible by an
interface between each pair of adjacent layers.
The layer-n entity passes an interface data unit (IDU) to the layer-(n-1) entity.
The IDU consists of a protocol data unit (PDU) and some interface control
information (ICI). The ICI is information, such as the length of the SDU and the
addressing information, that the layer below needs to perform its function.
The PDU is the data that the layer-n entity wishes to pass across the network to its peer
entity. It consists of the layer-n header and the data that layer n received from
layer (n+1).
The layer-n PDU becomes the layer-(n-1) service data unit (SDU), because it is the
data unit that will be serviced by layer n-1.
When layer n-1 receives the layer-n IDU, it strips off and "considers" the ICI, adds the
header information for its peer entity across the network, adds ICI for the layer below,
and passes the resulting IDU to the layer-(n-2) entity.
Networking protocols make the delivery of data from a source device to a destination
device possible by using data encapsulation. The data is encapsulated with protocol
information at each layer of the OSI reference model when a host transmits data to another
device across a network.
The process starts at layer 7 (the application layer) and then moves from layer to layer in
descending order. At each layer (except layers 7 and 1), a header is added to the
data unit. At layer 2, a trailer is added as well. When the formatted data unit passes through
the physical layer (layer 1), it is changed into an electromagnetic signal and transported along
a physical link.
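The layering and encapsulation described above can be sketched in a few lines of Python. This is an illustrative toy, not a real protocol stack: the header and trailer strings ("H6", "T2", and so on) are invented placeholders for whatever each layer actually adds.

```python
# Illustrative sketch of OSI-style encapsulation: each layer (6 down to 2)
# wraps the data unit in its own header; layer 2 also appends a trailer.
# Header/trailer contents here are placeholders, not real protocol formats.

def encapsulate(app_data: str) -> str:
    pdu = app_data
    for layer in range(6, 1, -1):          # layers 6, 5, 4, 3, 2
        pdu = f"H{layer}|{pdu}"            # prepend this layer's header
    return f"{pdu}|T2"                     # layer 2 adds a trailer as well

def decapsulate(frame: str) -> str:
    body, _, _trailer = frame.rpartition("|")   # strip the layer-2 trailer
    for layer in range(2, 7):                   # peel headers in reverse order
        header, _, body = body.partition("|")
        assert header == f"H{layer}"
    return body

frame = encapsulate("hello")
print(frame)                 # H2|H3|H4|H5|H6|hello|T2
print(decapsulate(frame))    # hello
```

The receiving side strips headers in the opposite order they were added, mirroring the description of the receiving machine above.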
2.8.1 Physical Layer
Specific responsibilities of the physical layer include the following:
Physical characteristics of interfaces and media. The physical layer defines the
characteristics of the interface between the devices and the transmission medium. It
also defines the type of transmission medium.
Representation of bits. The physical layer data consists of a stream of bits (a sequence
of 0s and 1s) without any interpretation. To be transmitted, bits must be encoded into
signals – electrical or optical. The physical layer defines the type of encoding (how 0s
and 1s are changed to signals).
Data rate. The transmission rate – the number of bits sent each second – is also
defined by the physical layer. In other words, the physical layer defines the duration
of a bit: how long it lasts.
Synchronization of bits. The sender and receiver must be synchronized at the bit
level. In other words, the sender's and the receiver's clocks must be synchronized.
Line configuration. The physical layer is concerned with the connection of devices
to the medium. In a point-to-point configuration, two devices are connected together
through a dedicated link. In a multipoint configuration, a link is shared between
several devices.
Physical topology. The physical topology defines how devices are connected to make
a network. Devices can be connected using a mesh topology (every device connected
to every other device), a star, a ring, or a bus topology.
Transmission mode. The physical layer also defines the direction of transmission
between two devices: simplex, half-duplex, or full-duplex. In the simplex mode, only
one device can send; the other can only receive. The simplex mode is a one-way
communication. In the half-duplex mode, two devices can send and receive, but not at
the same time. In a full-duplex (or simply duplex) mode, two devices can send and
receive at the same time.
2.8.2 Data Link Layer
The data link layer transforms the physical layer, a raw transmission facility, into a reliable link
and is responsible for node-to-node delivery. It makes the physical layer appear error free to
the upper layer (network layer).
Specific responsibilities of the data link layer include the following:
Framing. The data link layer divides the stream of bits received from the network layer
into manageable data units called frames.
Physical addressing. If frames are to be distributed to different systems on the network,
the data link layer adds a header to the frame to define the physical address of the
sender (source address) and/or receiver (destination address) of the frame. If the frame
is intended for a system outside the sender’s network, the receiver address is the address
of the device that connects one network to the next.
Flow control. If the rate at which the data are absorbed by the receiver is less than the
rate produced in the sender, the data link layer imposes a flow control mechanism to
prevent overwhelming the receiver.
Error control. The data link layer adds reliability to the physical layer by adding
mechanisms to detect and retransmit damaged or lost frames. It also uses a mechanism to
prevent duplication of frames. Error control is normally achieved through a trailer added
to the end of the frame.
Access control. When two or more devices are connected to the same link, data link layer
protocols are necessary to determine which device has control over the link at any given
time.
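Two of these responsibilities, framing and error control, can be sketched in Python. This is a toy: the one-byte source/destination addresses, the tiny payload size, and the CRC-32 trailer are invented for illustration; real data-link protocols such as Ethernet define their own formats.

```python
import zlib

# Toy data-link framing: split a byte stream into small frames, prepend a
# 2-byte header (1-byte source and destination "physical addresses"), and
# append a CRC-32 trailer for error detection. Sizes are illustrative only.

FRAME_PAYLOAD = 8  # bytes of payload per frame (unrealistically small)

def make_frames(data: bytes, src: int, dst: int) -> list[bytes]:
    frames = []
    for i in range(0, len(data), FRAME_PAYLOAD):
        payload = data[i:i + FRAME_PAYLOAD]
        header = bytes([src, dst])
        crc = zlib.crc32(header + payload).to_bytes(4, "big")  # trailer
        frames.append(header + payload + crc)
    return frames

def check_frame(frame: bytes) -> bool:
    body, crc = frame[:-4], frame[-4:]
    return zlib.crc32(body).to_bytes(4, "big") == crc

frames = make_frames(b"hello, data link", src=1, dst=2)
print(len(frames), all(check_frame(f) for f in frames))  # 2 True
```

A receiver that finds `check_frame` failing would discard the frame and request retransmission, which is how error control "adds reliability" on top of the raw physical layer.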
2.8.3 Network Layer
The network layer is responsible for the source-to-destination delivery of a packet,
possibly across multiple networks (links). Whereas the data link layer oversees the delivery of
the packet between two systems on the same network (link), the network layer ensures that
each packet gets from its point of origin to its final destination.
If two systems are connected to the same link, there is usually no need for a network
layer. However, if the two systems are attached to different networks (links) with connecting
devices between the networks (link), there is often a need for the network layer to accomplish
source-to-destination delivery.
Specific responsibilities of the network layer include the following:
Logical addressing. The physical addressing implemented by the data link layer
handles the addressing problem locally. If a packet passes the network boundary, we
need another addressing system to help distinguish the source and destination systems.
The network layer adds a header to the packet coming from the upper layer that, among
other things, includes the logical addresses of the sender and receiver.
Routing. When independent networks or links are connected together to create an
internetwork (a network of networks) or a large network, the connecting devices (called routers or
gateways) route the packets to their final destination. One of the functions of the
network layer is to provide this mechanism.
2.8.4 Transport Layer
The transport layer is responsible for the delivery of a message from one process to another.
Specific responsibilities of the transport layer include the following:
Service-point addressing. Computers often run several programs at the same time.
For this reason, source-to-destination delivery means delivery not only from one
computer to the next but also from a specific process (running program) on one
computer to a specific process (running program) on the other. The transport layer
header therefore must include a type of address called a service-point address (or
port address). The network layer gets each packet to the correct computer; the
transport layer gets the entire message to the correct process on that computer.
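This division of labor between the IP address (which host) and the port (which process) can be demonstrated with standard-library UDP sockets. The loopback address and message contents are arbitrary choices for the sketch: two receiving sockets on the same host are told apart only by their port numbers.

```python
import socket

# Sketch: the network layer gets a packet to the right host (IP address),
# while the transport layer's port number selects the right process on it.
# Two UDP sockets on the same host differ only in their port.

recv_a = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_a.bind(("127.0.0.1", 0))          # OS picks a free port
recv_a.settimeout(5)
recv_b = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_b.bind(("127.0.0.1", 0))
recv_b.settimeout(5)

port_a = recv_a.getsockname()[1]
port_b = recv_b.getsockname()[1]

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"for process A", ("127.0.0.1", port_a))   # same IP...
sender.sendto(b"for process B", ("127.0.0.1", port_b))   # ...different port

print(recv_a.recvfrom(64)[0])   # b'for process A'
print(recv_b.recvfrom(64)[0])   # b'for process B'
```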
2.8.5 Session Layer
The services provided by the first three layers (physical, data link, and network) are not
sufficient for some processes. The session layer is the network dialog controller. It
establishes, maintains, and synchronizes the interaction between communicating systems
and allows two application processes on different machines to establish, use, and terminate a
connection, called a session.
2.8.6 Presentation Layer
The presentation layer is concerned with the syntax and semantics of the information
exchanged between two systems.
Specific responsibilities of the presentation layer include the following:
Translation. The processes (running programs) in two systems are usually exchanging
information in the form of character strings, numbers, and so on. The information should be
changed to bit streams before being transmitted. Because different computers use different
encoding systems, the presentation layer is responsible for interoperability between these
different encoding methods. The presentation layer at the sender changes the information
from its sender-dependent format into a common format; the presentation layer at the
receiving machine changes the common format into its receiver-dependent format.
Compression. Data compression reduces the number of bits to be transmitted. Data
compression becomes particularly important in the transmission of multimedia such as
text, audio, and video.
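As a small illustration of presentation-layer compression, the zlib (DEFLATE) module in Python's standard library shrinks repetitive data dramatically; the sample message below is invented for the demonstration.

```python
import zlib

# Sketch of data compression: zlib (DEFLATE) reduces the number of bits
# transmitted; highly repetitive data compresses very well, and the
# operation is lossless, so the original is fully recovered.
message = b"network " * 200          # 1600 bytes of repetitive text
packed = zlib.compress(message)
restored = zlib.decompress(packed)

print(len(message), len(packed))     # compressed form is far smaller
print(restored == message)           # True: original fully recovered
```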
2.8.7 Application Layer
The application layer enables the user, whether human or software, to access the
network. It provides user interfaces and support for services such as electronic mail, remote
file access and transfer, shared database management, and other types of distributed
information services.
Ex.: Of the many application services available, the figure shows only three: X.400
(message-handling services); X.500 (directory services); and file transfer, access, and
management (FTAM). The user in this example uses X.400 to send an e-mail message. Note
that no headers or trailers are added at this layer.
Specific services provided by the application layer include network virtual terminal; file
transfer, access, and management; mail services; and directory services.
3.2 Introduction
TCP/IP (transmission control protocol/Internet protocol) is the suite of
communications protocols that is used to connect hosts on the Internet. The TCP/IP suite is
not a single protocol. Rather, it is a four-layer communication architecture that provides
such network features as end-to-end communication, handling of unreliable communication-
line faults, packet sequencing, and internetwork routing. The TCP/IP protocol suite
maps to a four-layer conceptual model known as the DARPA model, which was named after
the U.S. government agency that initially developed TCP/IP. The four layers of the DARPA
model are: Application, Transport, Internet (or Network), and Network Interface (or Data Link).
Each layer in this suite corresponds to one or more layers of the seven-layer OSI model.
TCP/IP Page 1 of 10
Data Offset     The number of 32-bit words in the TCP header, which, like the
                IP header, has a variable-length options field.
Flag bits       Several bits used as status indicators to show, for
                example, the resetting of the connection.
Window          This field is used by the receiver to set the window size.
Urgent pointer  The sender can indicate that urgent data is coming and
                urge the receiver to handle it as quickly as possible.
[Figure: TCP header layout - Sequence Number; Acknowledgement Number; Data Offset, Reserved, Flags; Window; Checksum; Urgent Pointer; Options, Padding; Data.]
[Figure: UDP header layout - Source Port (16 bits), Destination Port (16 bits), Length, Checksum, Data.]
Table 3.2
Version     The version number of IP. There have been several new releases, which
            (given the size of the ARPANET) must co-exist for some time.
IHL The IP header length. Because of the options field, the header is not a
fixed length. This field shows where the data starts.
Type of Service  This field allows for a priority system to be imposed, plus an indication
                 of the desired, but not guaranteed, reliability required.
Length      The total length of the IP packet. Although there is a theoretical maximum
            of 64 Kbytes, most networks operate with much smaller packets, though
            all must accept at least 576 bytes.
ID/Flags/Offset  These fields enable a gateway to split up the datagram into smaller
            fragments. The ID field ensures that the receiver can piece together the
            fragments of the correct datagrams, as fragments from many datagrams may
            arrive in any order. The offset tells how far into the datagram this fragment
            belongs, and the flags can be used to mark the datagram as non-fragmentable.
Time to live  This is a count that limits the lifetime of a datagram on the catenet (the
            concatenated set of networks). Each time it passes through a gateway, the
            count is decremented by one. If it reaches zero, the gateway does not
            forward it. This prevents permanently circulating datagrams.
Protocol    This indicates which higher-level protocol is being carried, e.g., TCP or
            UDP.
Checksum This checksum covers the header only. It is up to the higher layers to
detect transmission errors in the data.
Source/dest  To assist the gateways in routing datagrams by the most efficient path, each
Address      IP address is structured into a Network Number and a local address.
             There are three classes of network providing different numbers of locally
             administered addresses.
Options The final part of the header is a variable number of optional fields, which
are used to enforce security or network management.
Padding This field is used to align the header to the next 32-bit boundary.
POP is also called POP3. It is a protocol used by a mail server in
conjunction with SMTP to receive and hold mail for hosts. A POP3 mail server receives
e-mail on behalf of its hosts and holds it until they retrieve it.
3.11 Conclusion
The aim of this chapter was to give an overview of the TCP/IP suite and its various
protocols. Data communication is a wide and complex field. Covering all the protocols and
concepts is beyond the scope of this chapter, and additional reading may be required to gain
expertise in the field.
******************
Course Contents
What is IP Addressing?
Different types of IP Addresses
Classful and Classless IP Addresses
Shortcomings of IPv4 Addresses
4.1 Objective
The objective of this class is to understand IP addressing, subnetting, VLSM, and CIDR.
4.2 Introduction
An IP address is an address used to uniquely identify a device on a computer
network; it is an identifier for a computer or device on a TCP/IP network.
Networks use the IP address of the destination to route messages.
An IP address is an identifier that is assigned at the Internet layer to an interface or a
set of interfaces. Each IP address can identify the source or destination of IP packets. When
you enable TCP/IP on an interface, you assign it one or more logical IP addresses, either
automatically or manually. The IP address is a logical address because it is assigned at the
Internet layer and has no relation to the physical addresses.
4.3 IP Address
The current version of IP, IP version 4 (IPv4), defines a 32-bit address, which means
that there are only 2^32 (4,294,967,296) IPv4 addresses available. This might seem like a
large number of addresses, but as new markets open and a significant portion of the world's
population becomes candidates for IP addresses, the finite number of IP addresses will
eventually be exhausted. The address shortage problem is aggravated by the fact that portions
of the IP address space have not been efficiently allocated. Also, the traditional model of
classful addressing does not allow the address space to be used to its maximum potential.
IP Addressing Page 1 of 12
Each Class A network address has an 8-bit network-prefix with the highest order bit
set to 0 and a seven-bit network number, followed by a 24-bit host number. Class A networks
are now referred to as "/8s" (pronounced "slash eight" or just "eights") since they have an 8-
bit network-prefix. A maximum of 126 (2^7 - 2) /8 networks can be defined. The calculation
requires that 2 is subtracted because the /8 network 0.0.0.0 is reserved for use as the
default route and the /8 network 127.0.0.0 (also written 127/8 or 127.0.0.0/8) has been
reserved for the "loopback" function. Each /8 supports a maximum of 16,777,214 (2^24 - 2)
hosts per network. The host calculation requires that 2 is subtracted because the all-0s ("this
network") and all-1s ("broadcast") host-numbers may not be assigned to individual hosts.
4.6.2 Class B (/16 Prefixes)
Figure 4-2: Structure of class B addresses (network part, host part)
Each Class B network address has a 16-bit network-prefix, with the two highest-order
bits set to 1-0 and a 14-bit network number, followed by a 16-bit host-number. Class B
networks are now referred to as "/16s" since they have a 16-bit network-prefix. A maximum of
16,384 (2^14) /16 networks can be defined, with up to 65,534 (2^16 - 2) hosts per network.
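The classful arithmetic above can be checked directly in a few lines of Python; the numbers follow straight from the bit widths given in the text.

```python
# Classful arithmetic, checked directly. Subtracting 2 from the class A
# network count excludes 0/8 (default route) and 127/8 (loopback);
# subtracting 2 from a host count excludes the all-0s and all-1s numbers.
class_a_networks = 2**7 - 2       # 126
class_a_hosts = 2**24 - 2         # 16,777,214
class_b_networks = 2**14          # 16,384
class_b_hosts = 2**16 - 2         # 65,534
print(class_a_networks, class_a_hosts, class_b_networks, class_b_hosts)
```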
4.6.3 Class C (/24 Prefixes)
Each Class C network address has a 24-bit network-prefix, with the three highest-order
bits set to 1-1-0 and a 21-bit network number, followed by an 8-bit host-number. Class C
networks are now referred to as "/24s". A maximum of 2,097,152 (2^21) /24 networks can be
defined, with up to 254 (2^8 - 2) hosts per network.
4.6.4 Class D
These addresses are reserved for IPv4 multicast addresses. The four high-order bits in
a class D address are always set to 1110, which makes the address prefix for all class D
addresses 224.0.0.0/4 (network 224.0.0.0, mask 240.0.0.0). For more information, see "IPv4 Multicast
Addresses" in this chapter.
4.6.5 Class E
These addresses are reserved for experimental use. The four high-order bits in a class E
address are set to 1111, which makes the address prefix for all class E addresses 240.0.0.0/4
(network 240.0.0.0, mask 240.0.0.0).
The classful A, B, and C octet boundaries were easy to understand and implement,
but they did not foster the efficient allocation of a finite address space. For an organization
with several hundred hosts, a /24, which supports 254 hosts, is too small, while a /16, which
supports 65,534 hosts, is too large. In the past, the Internet often assigned such sites a
single /16 address instead of a couple of /24 addresses.
Given an IP address, its class can be determined from the three high-order bits (the
three left-most bits in the first octet). Table 4.1 shows the range of addresses that fall into
each class. For informational purposes, Class D and Class E addresses are also shown.
Table 4.1: Range of Addresses
IP CLASS   IP RANGE
CLASS A    1.0.0.0   - 126.255.255.255
CLASS B    128.0.0.0 - 191.255.255.255
CLASS C    192.0.0.0 - 223.255.255.255
CLASS D    224.0.0.0 - 239.255.255.255
CLASS E    240.0.0.0 - 255.255.255.254
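A small sketch matching Table 4.1: the classful class of an IPv4 address can be determined from its first octet alone. The sample addresses are invented for the demonstration.

```python
# Determine the classful class of an IPv4 address from its first octet,
# following the ranges in Table 4.1. 0/8 and 127/8 fall outside the classes.
def ip_class(address: str) -> str:
    first = int(address.split(".")[0])
    if 1 <= first <= 126:
        return "A"
    if 128 <= first <= 191:
        return "B"
    if 192 <= first <= 223:
        return "C"
    if 224 <= first <= 239:
        return "D"
    if 240 <= first <= 255:
        return "E"
    return "reserved"        # 0 (default route) and 127 (loopback)

print(ip_class("10.1.2.3"))      # A
print(ip_class("172.16.0.1"))    # B
print(ip_class("224.0.0.5"))     # D
```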
4.10 Subnetting
Subnetting was introduced to overcome some of the problems that parts of the
Internet were beginning to experience with the classful addressing. Subnetting allows you to
create multiple logical networks that exist within a single Class A, B, or C network. If you do
not subnet, you are only able to use one network from your Class A, B, or C network, which
is unrealistic. Subnetting is the logical subdivision of an IP network: the practice of dividing a
network into two or more smaller networks (subnets) by designating some high-order bits from
the host part and grouping them with the network part.
Subnetting attacked the expanding routing table problem by ensuring that the subnet
structure of a network is never visible outside of the organization's private network. The
route from the Internet to any subnet of a given IP address is the same, no matter which
subnet the destination host is on. This is because all subnets of a given network number use
the same network-prefix but different subnet numbers. The routers within the private
organization need to differentiate between the individual subnets, but as far as the Internet
routers are concerned, all of the subnets in the organization are collected into a single routing
table entry. This allows the local administrator to introduce arbitrary complexity into the
private network without affecting the size of the Internet's routing tables. Subnetting
overcame the registered number issue by assigning each organization one (or at most a few)
network number(s) from the IPv4 address space. The organization was then free to assign a
distinct subnetwork number for each of its internal networks. This allows the organization to
deploy additional subnets without needing to obtain a new network number from the Internet.
Table 4.2: Details of subnetting
Prefix size | Network mask | Available subnets | Usable hosts per subnet | Total usable hosts
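The subnet counts in a table like 4.2 can be reproduced with the standard-library ipaddress module. The /16 network below is an arbitrary example; splitting it on the third octet (a /24 extended prefix) yields 256 subnets of 254 usable hosts each.

```python
import ipaddress

# Sketch: dividing one network into subnets with the stdlib ipaddress
# module. A /16 split into /24s gives 2**8 = 256 subnets, each with
# 2**8 - 2 = 254 usable host addresses.
network = ipaddress.ip_network("172.16.0.0/16")
subnets = list(network.subnets(new_prefix=24))

print(len(subnets))                   # 256
print(subnets[0])                     # 172.16.0.0/24
print(subnets[0].num_addresses - 2)   # 254 usable hosts
```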
The router accepts all traffic from the Internet addressed to the network and forwards
traffic to the interior subnetworks based on the third octet of the classful address. The
deployment of subnetting within the private network provides several benefits: The size of
the global Internet routing table does not grow because the site administrator does not need to
obtain additional address space and the routing advertisements for all of the subnets are
combined into a single routing table entry. The local administrator has the flexibility to
deploy additional subnets without obtaining a new network number from the Internet. Route
flapping (i.e., the rapid changing of routes) within the private network does not affect the
Internet routing table since Internet routers do not know about the reachability of the
individual subnets - they just know about the reachability of the parent network number.
Extended Network Prefix
Internet routers use only the network-prefix of the destination
address to route traffic to a subnetted environment. Routers within the subnetted environment
use the extended network- prefix to route traffic between the individual subnets. The
extended network-prefix is composed of the classful network-prefix and the subnet-number.
The extended-network-prefix has traditionally been identified by the subnet mask.
For example, if you have the /16 address of 130.5.0.0 and you want to use the entire third
octet to represent the subnet-number, you need to specify a subnet mask of 255.255.255.0.
The bits in the subnet mask and the Internet address have a one-to-one correspondence. The
bits of the subnet mask are set to 1 if the system examining the address should treat the
corresponding bit in the IP address as part of the extended-network- prefix. The bits in the
mask are set to 0 if the system should treat the bit as part of the host-number.
The standards describing modern routing protocols often refer to the extended-
network-prefix-length rather than the subnet mask. The prefix length is equal to the number
of contiguous one-bits in the traditional subnet mask. This means that specifying the network
address 130.5.5.25 with a subnet mask of 255.255.255.0 can also be expressed as
130.5.5.25/24. The /<prefix-length> notation is more compact and easier to understand than
writing out the mask in its traditional dotted-decimal format.
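The mask/prefix correspondence described above can be checked with the stdlib ipaddress module, using the 130.5.5.25 example from the text.

```python
import ipaddress

# The mask/prefix correspondence above: 255.255.255.0 is 24 contiguous
# one-bits, so 130.5.5.25 with that mask is written 130.5.5.25/24.
iface = ipaddress.ip_interface("130.5.5.25/255.255.255.0")
print(iface.with_prefixlen)           # 130.5.5.25/24

# Counting the one-bits in the mask gives the prefix length directly.
mask = ipaddress.ip_address("255.255.255.0")
print(bin(int(mask)).count("1"))      # 24
```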
CIDR Promotes the Efficient Allocation of the IPv4 Address Space
CIDR eliminates the traditional concept of Class A, Class B, and Class C network addresses
and replaces them with the generalized concept of a "network-prefix." Routers use the
network-prefix, rather than the first 3 bits of the IP address, to determine the dividing
point between the network number and the host number. As a result, CIDR supports the
deployment of arbitrarily sized networks rather than the standard 8-bit, 16- bit, or 24-bit
network numbers associated with classful addressing. In the CIDR model, each piece of
routing information is advertised with a bit mask (or prefix-length). The prefix-length is a
way of specifying the number of leftmost contiguous bits in the network-portion of each
routing table entry. For example, a network with 20 bits of network-number and 12-bits of
host-number would be advertised with a 20-bit prefix length (a /20).
The clever thing is that the IP address advertised with the /20 prefix could be a former
Class A, Class B, or Class C. Routers that support CIDR do not make assumptions based on
the first 3 bits of the address; they rely on the prefix-length. In a classless environment,
prefixes are viewed as bitwise contiguous blocks of the IP address space. For example, all
prefixes with a /20 prefix-length represent the same amount of address space (2^12 or 4,096
host addresses). Furthermore, a /20 prefix can be assigned to a traditional Class A, Class B, or
Class C network number.
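The /20 arithmetic above can be verified with the ipaddress module. The specific block below (200.25.16.0/20, carved from what would once have been class C space) is an invented example.

```python
import ipaddress

# The CIDR arithmetic above: a /20 leaves 32 - 20 = 12 host bits, i.e.
# 2**12 = 4,096 addresses, regardless of the old class boundaries.
block = ipaddress.ip_network("200.25.16.0/20")   # hypothetical block
print(block.num_addresses)            # 4096
print(2 ** (32 - block.prefixlen))    # 4096, same calculation by hand
```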
CIDR is now the routing system used by virtually all gateway routers on the Internet's
backbone network. The Internet's regulating authorities expect every Internet service
provider (ISP) to use it for routing. CIDR is supported by the Border Gateway Protocol, the
prevailing exterior (interdomain) gateway protocol, and by the OSPF interior (or intradomain)
gateway protocol. Older gateway protocols like Exterior Gateway Protocol and
Routing Information Protocol do not support CIDR.
4.13 Conclusion
The aim of this chapter was to give an overview of IP addressing. The subject is
quite vast and complex, and difficult to absorb in one reading, but the basis of data
networking rests on these concepts.
*****
Chapter-7
Introduction to Internet Protocol version 6
Course Contents
Introduction to IPv6
What is IPv6?
Why is IPv6 Needed Now?
IPv6 Addressing & its representation
Advantages of IPv6
Features of IPv6
Types of IPv6 Addresses
Address Scope
Transition from IPv4 to IPv6
Objectives
After completion of this module the trainee will be able to
understand
Drawbacks of IPv4 addressing.
IPv6 notation and representation in hexadecimal.
Advantages of IPv6 and features of IPv6
IPv6 prefixes and types of IPv6 addresses.
Address scope of IPv6 addresses.
Technologies for transition from IPv4 to IPv6
IPv6 Page 1 of 19
Figure 7.1
An IPv6 address consists of eight groups of 16 bits each, written as four hexadecimal
digits per group and separated by colons (:), totaling 128 bits in length.
For example: 2001:0db8:1234:5678:9abc:def0:1234:5678
7.4.1 Representing IPv6 in Binary Format
2001 : 0db8 : ac10 : fe01 : 0000 : 0000 : 0000 : 0000
0010 0000 0000 0001 0000 1101 1011 1000 1010 1100 0001 0000 1111 1110 0000 0001
0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000
0010000000000001000011011011100010101100000100001111111000000001000000000
0000000000000000000000000000000000000000000000000000000
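The hex-to-binary expansion above can be reproduced in a couple of lines of Python: each of the eight 16-bit groups becomes 16 bits, 128 bits in all.

```python
# Expand an IPv6 address, group by group, into its 128-bit binary form.
groups = "2001:0db8:ac10:fe01:0000:0000:0000:0000".split(":")
bits = "".join(format(int(g, 16), "016b") for g in groups)

print(len(bits))        # 128
print(bits[:16])        # 0010000000000001  (the group 2001)
```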
There are two methods we can use to reduce the size of the notation, making it even
easier to read. These methods are called 'Zero Compression' and 'Zero Suppression'.
Some addresses contain long sequences of zeros:
2001:0db8:ac10:0000:0000:8a2e:0000:0a52
We can use 'zero compression' to reduce them. If there is a run of consecutive
blocks whose characters are all zeros, you can compress the run to :: (a double
colon).
In the address above there are three blocks containing all zeros. However, only the first
and second blocks of zeros can be compressed, because they are consecutive
(next to each other in the address). The third block of zeros cannot be compressed,
but it can be suppressed.
2001:0db8:ac10::8a2e:0000:0a52
2001:db8:ac10::8a2e:0:a52
The Zero Compression and Zero Suppression concepts are used to reduce the size of the
IPv6 notation.
Leading zeros can be omitted, and consecutive all-zero blocks can be
represented by a double colon (::). The double colon can appear only once in an address.
For example:
1. 2001:0db8:0000:130f:0000:0000:087c:140b can be abbreviated as
2001:0db8:0:130f::087c:140b --- Zero Compression
2001:db8:0:130f::87c:140b --- Zero Compression and Zero Suppression
2. fe80:0000:0000:0000:0202:b3ff:fe1e:8329
fe80::0202:b3ff:fe1e:8329
3. 2001:0000:ac10:0000:0000:fe01:0db8:0000
2001:0000:ac10::fe01:0db8:0000
2001:0:ac10::fe01:0db8:0
4. 2001:0db8:0000:0000:c5ef:0000:0000:0001 can be represented as
2001:db8::c5ef:0:0:1
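The abbreviations worked through above can be checked with the stdlib ipaddress module, which applies zero suppression and zero compression automatically (note that it also drops leading zeros everywhere, e.g. 0202 becomes 202).

```python
import ipaddress

# The .compressed form applies both zero suppression (leading zeros
# dropped) and zero compression (longest all-zero run becomes ::).
full = "2001:0db8:0000:130f:0000:0000:087c:140b"
print(ipaddress.ip_address(full).compressed)   # 2001:db8:0:130f::87c:140b

full2 = "fe80:0000:0000:0000:0202:b3ff:fe1e:8329"
print(ipaddress.ip_address(full2).compressed)  # fe80::202:b3ff:fe1e:8329
```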
Figure 7.2
Figure 7.3
End-to-end Connectivity -- Every system now has a unique IP address and can
traverse the Internet without using NAT or other translating
components.
Auto-configuration -- IPv6 supports both stateful and stateless auto-
configuration of its host devices. This way, the absence of a DHCP server
does not halt inter-segment communication.
As with IPv4, IPv6 addresses are assigned to interfaces; however, unlike IPv4, an IPv6
interface is expected to have multiple addresses. The IPv6 addresses assigned to an
interface can be any of the following types:
Figure 7.4
Example of a Unicast address: 2000::a12:34ff:fe56:7890
Multicast address: Identifies a group of nodes or interfaces, typically belonging to
different nodes. Traffic destined for a multicast address is forwarded to all the
interfaces in the group, so multicast addresses facilitate communication between
a single sender and multiple receivers.
Figure 7.5
Multicast addresses begin with the prefix ff00::/8.
Example of a multicast address: ff01:0:0:0:0:0:0:2
With IPv6, broadcast addresses are no longer used. Broadcasts are
too resource-intensive, so IPv6 uses multicast addresses instead.
Anycast address: Identifies a group of nodes or interfaces. Traffic destined for an
anycast address is forwarded to the nearest node in the group. An anycast
address is essentially a unicast address assigned to multiple devices with a host
ID = 0000:0000:0000:0000. (Anycast addresses are not widely used today.)
Figure 7.6
Figure 7.7
7.8.1 Interface ID
IPv6 has three different types of unicast address scheme. The second half of the
address (the last 64 bits) is always used for the interface ID. The MAC address of a system is
composed of 48 bits and represented in hexadecimal. MAC addresses are considered to be
uniquely assigned worldwide, which makes them a convenient basis for the interface ID.
Fig 7.8: EUI-64 Interface ID
7.8.2 Conversion of EUI-64 ID into IPv6 Interface Identifier
To convert an EUI-64 ID into an IPv6 interface identifier, the seventh most significant bit of
the EUI-64 ID (the universal/local bit) is inverted. For example:
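The MAC-to-interface-ID conversion described above can be sketched in Python: insert ff:fe in the middle of the 48-bit MAC, then invert bit 7 of the first byte. The sample MAC is the one behind the fe80 address used in the earlier examples.

```python
# Sketch of the MAC -> EUI-64 -> interface ID conversion: insert ff:fe in
# the middle of the 48-bit MAC, then invert the seventh most significant
# bit (the universal/local bit) of the first byte.
def mac_to_interface_id(mac: str) -> str:
    octets = [int(b, 16) for b in mac.split(":")]
    eui64 = octets[:3] + [0xFF, 0xFE] + octets[3:]   # 64-bit EUI-64
    eui64[0] ^= 0x02                                 # flip the U/L bit
    groups = [f"{eui64[i]:02x}{eui64[i+1]:02x}" for i in range(0, 8, 2)]
    return ":".join(groups)

print(mac_to_interface_id("00:02:b3:1e:83:29"))   # 0202:b3ff:fe1e:8329
```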
[Table: Field Name | Size (bits) | Description]
Figure 7.11
Global Routing Prefix: The most significant 48 bits are designated as the Global Routing
Prefix, which is assigned to a specific autonomous system. The three most significant bits of
the Global Routing Prefix are always set to 001.
Figure 7.12
Figure 7.15
Figure 7.16
An address scope defines the region where an address can be defined as a unique
identifier of an interface.
These scopes or regions are the link, the site network, and the global network,
corresponding to link-local, unique local unicast, and global addresses.
Figure 7.17
Figure 7.18
Figure 7.19
Figure 7.20
A host with an IPv4 address sends a request to an IPv6-enabled server on the Internet that
does not understand IPv4 addresses. In this scenario, the NAT-PT device can help them
communicate. When the IPv4 host sends a request packet to the IPv6 server, the NAT-PT
device/router removes the IPv4 header, adds an IPv6
header, and passes the packet on through the Internet. When a response from the IPv6 server comes back for
the IPv4 host, the router performs the reverse translation.
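NAT-PT itself rewrites complete headers, but the address-mapping idea can be sketched with the well-known /96 translation prefix 64:ff9b:: from RFC 6052 (used by the later NAT64 standard; shown here purely as an illustration, not as the exact NAT-PT mechanism):

```python
import ipaddress

def embed_ipv4(v4: str, prefix: str = "64:ff9b::") -> str:
    """Embed an IPv4 address in the low 32 bits of an IPv6 /96 translation prefix."""
    v4_int = int(ipaddress.IPv4Address(v4))
    v6_int = int(ipaddress.IPv6Address(prefix)) + v4_int
    return str(ipaddress.IPv6Address(v6_int))

print(embed_ipv4("192.0.2.1"))   # 64:ff9b::c000:201
```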
Network Elements
Course Contents
Different topologies of computer connectivity
Various Network types
Various network connecting devices
LAN Architecture
Objectives
After studying this module on local networking and the various network connecting
devices, participants will be able to identify and understand the networking devices.
The layered protocol concept can be employed to describe the architecture of a LAN,
wherein each layer represents the basic functions of a LAN.
2.2.1 Protocol Architecture
The protocols defined for LAN transmission address issues relating to the
transmission of blocks of data over the network. In the context of the OSI model, higher-layer
protocols (layer 3 or 4 and above) are independent of the network architecture and so are not
specific to the LAN. Therefore LAN protocols are concerned primarily with the lower layers of
the OSI model.
Figure: IEEE 802 reference model compared with the OSI model. The scope of the IEEE 802
standards covers the OSI data link layer – split into logical link control (LLC) and medium
access control (MAC) – and the physical layer, down to the physical medium. Upper-layer
protocols (transport, session, presentation, application) access the LLC through an LLC
service access point (LSAP).
The lowest layer of the IEEE 802 reference model corresponds to the physical layer
of the OSI model, and includes the following functions: encoding/decoding of signals;
preamble generation/removal (for synchronization); and bit transmission/reception.
LAN Topologies
The common topologies for LANs are bus, tree, ring, and star. The bus is a special
case of the tree, with only one trunk and no branches.
Figure: Frame transmission – (a) C transmits a frame addressed to A; (c) A copies the frame
as it goes by.
Network Elements Page 8 of 25
For Restricted Circulation
AGM LICE Broadband Module
d) Star Topology
In the star topology, each station is directly connected to a common central
node, referred to as the star coupler, via two point-to-point links, one for transmission in
each direction.
There are two alternatives for the operation of the central node :
One method is for the central node to operate in a broadcast fashion.
Another method is for the central node to act as a frame switching device. An
incoming frame is buffered in the node and then retransmitted on an outgoing link to the
destination station.
More expensive than linear bus topologies because of the cost of the
concentrators.
The protocols used with star configurations are usually Ethernet or LocalTalk.
2.3 Medium Access Control
Some means of controlling access to the transmission medium is needed to provide
for an orderly and efficient use of the network's transmission capacity. This is the function of
the medium access control (MAC) protocol.
There are two aspects of MAC:
Where and
How to implement MAC in a LAN.
Where refers to whether control is exercised in a centralized or distributed fashion.
In a centralized scheme, a controller is designated that has the authority to grant
access to the network. A station wishing to transmit must wait until it receives permissions
from the controller.
In a decentralized network, the stations collectively perform a medium access
control function to dynamically determine the order in which stations transmit.
How is determined by the topology and involves a trade-off among competing factors,
including cost, performance, and complexity.
Access control techniques could follow the same approach used in circuit switching,
viz. frequency-division multiplexing (FDM) and synchronous time-division multiplexing
(TDM). However, it is desirable to allocate capacity in an asynchronous (dynamic) fashion. The
asynchronous approach can be further subdivided into three categories: round robin,
reservation and contention.
Table 2: LLC PDU format
DSAP (1 octet) | SSAP (1 octet) | LLC control (1 or 2 octets) | Information (variable)
New cards are software configurable, using a software program to configure the
resources used by the card. Other cards are PnP (Plug and Play), which automatically
configure their resources when installed in the computer, simplifying the installation. With
an operating system like Windows 95, auto-detection of new hardware makes network
connections simple and quick.
2.6.2 Cabling
Cables are used to interconnect computers and network components together.
There are 3 main cable types used today :
Twisted pair
Coaxial
Fibre optic
The choice of cable depends upon a number of factors like:
cost
distance
number of computers involved
speed
bandwidth i.e. how fast data is to be transferred
Figure 10: HUB (ports connected through a common backplane)
2.7 Switch
A switch is a networking component used to connect workgroup hubs to form a
larger network or to connect computers that have high bandwidth needs.
Switch working
When a signal enters a port of the switch, the switch looks at the destination
address of the frame and internally establishes a logical connection with the port connected
to the destination node.
Each port on the switch corresponds to an individual collision domain, so network
congestion is avoided. Thus, if a 10-Mbps Ethernet switch has 10 ports, each port effectively
gets the entire bandwidth of 10 Mbps: to the frame, the switch's port appears to provide a
dedicated connection to the destination node. Ethernet switches are capable of establishing
multiple internal logical connections simultaneously, while routers generally process
packets on a first-come, first-served basis.
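The learning-and-forwarding behaviour described above can be sketched with a toy forwarding table (the MAC addresses and port numbers are made up for illustration):

```python
# Toy model of a switch's MAC learning and forwarding logic.
mac_table = {}

def handle_frame(src_mac, dst_mac, in_port, all_ports):
    mac_table[src_mac] = in_port                 # learn which port the source is on
    if dst_mac in mac_table:                     # known destination: one output port
        return [mac_table[dst_mac]]
    return [p for p in all_ports if p != in_port]   # unknown: flood all other ports

print(handle_frame("aa:aa", "bb:bb", 1, [1, 2, 3]))   # unknown dst -> flood: [2, 3]
print(handle_frame("bb:bb", "aa:aa", 2, [1, 2, 3]))   # learned dst -> [1]
```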
BRIDGE
2.9 Routers
In an environment consisting of several network segments with differing protocols
and architectures, a bridge may not be adequate for ensuring fast communication among all
of the segments. A network this complex needs a device which not only knows the address of
each segment, but can also determine the best path for sending data and filter broadcast traffic
to the local segment. Such a device is called a router.
Routers work at the Network layer of the OSI model. This means they can switch and
route packets across multiple networks. They do this by exchanging protocol-specific
information between separate networks. Routers read complex network addressing
information in the packet and, because they function at a higher layer in the OSI model than
bridges, they have access to additional information.
Routers can provide the following functions of a bridge :
Filtering and isolating traffic
Connecting network segments
Routers have access to more information in the packet than bridges, and use this
information to improve packet delivery. Routers are used in complex network situations
because they provide better traffic management than bridges and do not pass broadcast
traffic. Routers can share status and routing information with one another and use this
information to bypass slow or malfunctioning connections.
Introduction to Router
Types of Algorithms used
Various protocols used for Router configuration
Differences between various protocols
Objectives
After studying this content on router concepts and router configuration, the trainees will be
able to understand what is meant by routing and how routers are configured for
networks.
5.1 Routing
The primary function of a packet switching network is to receive packets from a
source and deliver them to the destination. To achieve this, a path or route through the
network has to be determined. More than one route may be possible. This requires a
routing function/algorithm to be implemented.
The routing function must achieve the following requirements:
Correctness
Simplicity
Robustness
Stability
Fairness
Optimality
Efficiency
Correctness and simplicity are self-explanatory.
Robustness has to do with the routing of packets through alternate routes in the
network in case of route failures or overloads.
Stability is an important aspect of the routing algorithm. It implies that the routing
algorithm must converge to equilibrium as quickly as possible; however, some algorithms never
converge, no matter how long they run.
Routing protocol
A routing protocol provides mechanisms for sharing routing information. Routing
protocol messages move between the routers. A routing protocol allows the routers to
communicate with other routers to update and maintain routing tables. Routing protocol
messages do not carry end-user traffic from network to network. A routing protocol uses
the routed protocol to pass information between routers.
Figure: Default routing – all traffic for remote networks (e.g. 10.1) is sent over the WAN to
R1 ("send all traffic to R1").
Figure: Routing update – R1 advertises to R2 and R3: "I can reach 100.1".
Distance vector routing: periodic and frequent updates, resulting in slow convergence;
copies of routing tables are passed to neighbouring routers.
Link state routing: updates are triggered by events, resulting in faster convergence; link
state packets are passed to other routers.
5.9 Interior Routing And Exterior Routing
Interior routing occurs within an autonomous system. The most common interior routing
protocols are RIP and OSPF. The basic routable element is the IP network or subnetwork, or
the CIDR prefix for newer protocols.
Exterior routing occurs between autonomous systems, and is of concern to service
providers and other large or complex networks. For example, BGP-4 (Border Gateway Protocol
version 4) is an exterior routing protocol.
Figure: Interior gateway protocols (IGPs) run within each autonomous system, while BGP
interconnects the autonomous systems.
Routing Concept & Routing Protocols Page 7 of 8
Course Contents:
Introduction to Broadband
Broadband Services
Components of Broadband Network
Objectives
The main objective of this chapter is to:
i) understand the need for broadband
ii) understand what broadband is
iii) become familiar with the various broadband technologies
iv) become familiar with the Broadband Network
With the evolution of computer networking and the packet switching concept, a new era
of integrated communication has emerged in the telecom world. The rapid growth of the data
communication market and the trend of integrating telecom and computer networking technology
have further amplified the importance of telecommunications in the field of information
communication.
The demand for high-speed bandwidth is growing at a fast pace. The rapid growth of
distributed business applications, e-commerce, and bandwidth-intensive applications (such as
multimedia, videoconferencing, and video on demand) generate the demand for bandwidth
and access networks. Service providers and customers alike are interested in economical,
fast communication with greater throughput.
The concept of “broadband” services and access technologies refers to
high-speed Internet access. Broadband solutions represent the convergence of multiple
independent networks, including voice, video and data, into a single, unified broadband
network.
4.2 DEFINITION OF BROADBAND
4.3.6 Telework
Workers of an organization can work remotely, connecting directly to their head
office's network over a high-speed connection (for example via satellite) that permits them
to work efficiently and comfortably.
4.3.7 Telemedicine
Doctors situated in different clinics can stay in contact and consult
directly with other regional medical centers, using videoconferencing and the exchange of high-
quality images, test results and any other type of information. Rural zones can also obtain
the opinion of specialists situated in remote hospitals quickly and efficiently.
4.3.8 Electronic commerce
Electronic commerce is a system that permits users to pay for goods and services over the
Internet.
These services are provided by BSNL by installing different network elements in a
phased manner under different projects of the NIB (National Internet Backbone). They are:
I) Project 1 – MPLS core network
II) Project 2 – Access network
2.1 - Narrowband access
2.2 - Broadband access
III) Project 3 – Messaging, Storage, EMS etc.
Project 2.2, i.e. the broadband access network elements and services, is discussed below.
Broadband Multiplay
Broadband Multi-Play focuses on the augmentation of the Broadband Access Network to
meet the targets fixed by DOT, with a planned capacity of 6 million connections supporting multi-play
services like Video on Demand, IPTV, VoIP, VPN services etc., with guaranteed control of
critical parameters like latency, throughput and jitter to ensure high-grade delivery of real-time,
near-real-time, non-real-time and best-effort services.
4.6.1 Core
4.6.3 Access
DSLAM to user
The figures below show clearly the deployment of network elements and their
arrangement in different types of cities across the country.
Figure: DSLAM deployment – ADSL terminals connect to DSLAMs over FE links, which are
aggregated on GigE towards the network, across different categories of cities (X-ge C, X-ge D,
X-ge E).
• The Aggregation Network for Multiplay will be in ring topology based on RPR,
instead of the existing tree structure of Project 2.2 (RPR is used for the second layer of
aggregation).
• The traffic aggregation to the Core Backbone happens across 100 cities, instead of the 23 cities
of Project 2.2.
– DSLAM---UTStarcom
– RPR-------UTStarcom
– OCLAN--- ZTE
– BNG------- Redback
– Servers---- SUN
• Miscellaneous Components
– Converters
– DSL Tester
– Desktop/Laptop
– UPS
• Applications
– AAA/SSSS -- Elitecore
– DNS/DHCP -- ISC
• Database - Oracle
Figure: Multiplay network architecture – edge and regional servers feed a 10G RPR
aggregation layer; RPR rings connect over GE to PE routers and Tier-1/Tier-2 LAN switches;
BNGs connect the aggregation layer to the core router in the MPLS layer. In OC cities,
Tier-1/Tier-2 LAN switches connect over SDH rings (STM-16) and dark fibre, with Ethernet on
GE to the nearest A/B cities with BNG (city categories X-ge A, B, C, D).
i.) TV over IP (TVoIP, also called IPTV) delivers television programmes to households via a
broadband connection using Internet protocols.
ii.) It requires a subscription and IPTV set-top box (STB).
iii.) IPTV is typically bundled with other services like Video on Demand (VOD), Voice
Over IP (VOIP) or digital Phone, and Web access.
iv.) IPTV viewers will have full control over functionality such as rewind, fast-forward,
pause, and so on.
v.) IPTV (Internet Protocol Television) is a system where a digital television service is
delivered by using Internet Protocol over a network.
vi.) For residential users, IPTV is provided with Video On Demand and may be bundled
with Internet services such as Web access and VoIP.
vii.) The video stream is broken up into IP packets and dumped into the core network,
which is a massive IP network that handles all sorts of other traffic (data, voice, etc.).
4.11.2 VOIP
i.) The technology used to transmit voice conversations over a data network using the
Internet Protocol.
ii.) A category of hardware and software that enables people to use the Internet as the
transmission medium for telephone calls.
iii.) VoIP works by sending voice information in digital form in packets.
iv.) VoIP is also referred to as Internet telephony, IP telephony, or Voice over the Internet
(VOI).
4.11.3 NMS
F: Fault
C: Configuration
A: Accounting
P: Performance
S: Security
IT Module for SDE to AGM(LICE)
Each network device on the Internet has a unique Internet Protocol (IP) address. A proxy
server is a middleman on the Internet; it may be located inside the organization or be available
on the Internet with its own IP address that our computer knows.
A proxy server hides your IP address, so the web server doesn't know exactly where the
request comes from.
It can encrypt data, so that the data is unreadable in transit. And lastly, a proxy server can block
access to certain web pages, based on IP address or content, acting as a web filter that blocks
unwanted content from being accessed.
Bandwidth savings:
Organizations can also get better overall network performance with a good proxy server.
Proxy servers can cache (save a copy of a website locally) popular websites – so when you
ask for www.abc.com, the proxy server will check to see if it has the most recent copy of the
site, and then send you the saved copy. Refer to Figure 2 – Proxy caching.
For example when hundreds of people use www.abc.com at the same time from the same
proxy server, the proxy server sends only one request to abc.com. This saves bandwidth
usage.
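The caching behaviour can be sketched in a few lines (the URL and page content are illustrative; a real proxy would also check cache freshness):

```python
# Toy caching proxy: the first request for a URL goes to the origin server,
# later requests are answered from the local copy.
cache = {}

def proxy_fetch(url, origin_fetch):
    if url not in cache:
        cache[url] = origin_fetch(url)   # cache miss: one request to the origin
    return cache[url]                    # cache hit: served locally
```

However many clients ask for www.abc.com through this proxy, `origin_fetch` is invoked only once, which is exactly the bandwidth saving described above.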
Improved speeds: Since the saved copy is served from within the network, this improves
network performance in terms of lower latency and faster loading of content.
Privacy benefits: Individuals as well as organizations use proxy servers to browse the
Internet more privately. Some proxy servers will change the IP address and prevent other
identifying information sent along with the web request from being exposed. The destination
server does not know who actually made the original request; this keeps personal information
and browsing habits more private.
Reverse Proxy
Unlike a forward proxy, which lies in front of clients, a reverse proxy is positioned in front of
web servers and forwards requests from a browser to the web servers. It works by analyzing
web requests from the user at the network edge of the web server.
It then forwards the requests to and receives replies from the origin server. Reverse proxies
are a strong option for popular websites that need to balance the load of many incoming
requests. They help an organization reduce bandwidth load because they act like another web
server managing incoming requests.
Anonymous proxy
An anonymous proxy will identify itself as a proxy, but it will not pass the client's IP address to
the website – this helps prevent identity theft and keeps your browsing habits private. They
can also prevent a website from serving you targeted marketing content based on your
location.
Distorting proxy
A distorting proxy server passes a false IP address for clients while identifying itself as a
proxy. This serves similar purposes to the anonymous proxy, but by passing a false IP
address, the client can appear to be from a different location to get around content restrictions.
High Anonymity proxy
High Anonymity proxy servers periodically change the IP address they present to the web
server, making it very difficult to keep track of which traffic belongs to whom. High anonymity
proxies, like the TOR network, are the most private and secure way to browse the Internet.
SSL Proxy
A secure sockets layer (SSL) proxy provides encryption between the client and the server. As
the data is encrypted in both directions, the proxy hides its existence from both the client and
the server. These proxies are best suited for organizations that need enhanced protection
against threats. On the downside, content encrypted on an SSL proxy cannot be cached, so
when visiting websites multiple times, you may experience slower performance.
1.6 Proxy Server vs. VPN
Proxy servers and virtual private networks (VPNs) may seem interchangeable because both
route requests and responses through an external server. Both allow you to access websites that
would otherwise be blocked in the country you're physically located in. However, VPNs provide
better protection against hackers because they encrypt all traffic.
A VPN is better suited for business use because users usually need secure data transmission
in both directions. Company information and personnel data can be very valuable in the
wrong hands, and a VPN provides the encryption you need to keep it protected. You can also
use both technologies simultaneously, particularly if you want to limit the websites that users
within your network visit while also encrypting their communications. For personal use
where a breach would only affect you, a single user, a proxy server may be an adequate
choice.
1.8 Conclusions
In summary, proxy servers and VPNs both route requests and responses through an external
server and can bypass location-based restrictions, but VPNs provide better protection against
hackers because they encrypt all traffic.
http://www.bsnl.co.in/pages/cellone.htm
"https://" quite commonly used. This simply means that the connection between you and the
web server is secured (meaning the information being sent back and forth is encrypted). You
should see "https://" when you are checking out, especially when they are entering credit card
information.
The next part, "www.bsnl.co.in", is called the Domain Name. The "www" used to be
more significant than it is today. Today, the "www" is, for the most part, assumed, and you
can get to the same page regardless of whether or not you type "www" in your browser.
The part "/pages/cellone.htm" tells the web server to look in the directory called "pages" and
send the file called "cellone.htm" to your browser. It is just like the directories on your PC.
The "in" of the Domain Name "www.bsnl.co.in" is called the Top Level Domain
(TLD). It is the rightmost portion of the domain name. For example, the TLD of
www.yahoo.com is com.
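Python's standard urllib.parse module splits a URL into exactly these parts (using the BSNL URL from the text):

```python
from urllib.parse import urlparse

u = urlparse("http://www.bsnl.co.in/pages/cellone.htm")
print(u.scheme)                       # http
print(u.netloc)                       # www.bsnl.co.in   (the domain name)
print(u.path)                         # /pages/cellone.htm
print(u.netloc.rsplit(".", 1)[-1])    # in               (the TLD)
```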
Let's understand the process of how DNS works.
DNS stands for Domain Name System. It is a service that keeps a large number of
machines' IP addresses mapped to their domain names. Now the question arises: why is this
needed? Let's understand this with the help of an illustration.
Example: Let's say rose1, rose2, rose3, rose4, and rose5 are the 5 machines in a network.
For communication between the machines, each machine's /etc/hosts file in Unix (or
hosts file in Windows) should have entries for all five machine names. Within this
small network there would be no problem if you add another machine, say rose6. But even
for this, the network administrator has to go to each machine, add rose6
to its /etc/hosts file, and then come back to the newcomer rose6 machine and add all the other
entries (rose1...rose5), including its own name, in its /etc/hosts (or hosts) file.
But what if the network has, say, 60 machines and a 61st machine has to be added?
The administrator will have to go to each machine again and write the new machine's name
in its /etc/hosts (or hosts) file, and again come back and write all 60 machine names in
the 61st machine's /etc/hosts file, which is a tedious and time-consuming job.
Thus, it is better to keep a centralized server where all the name-to-IP mappings are kept; if a
new machine enters the network, the change has to be made only at the server and not on
each client's machine.
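The centralized idea can be sketched as a toy name table (the rose1...rose6 names are from the example above; the IP addresses are made up):

```python
# Toy "central DNS" table: clients query the server instead of keeping hosts files.
dns_table = {f"rose{i}": f"192.168.0.{i}" for i in range(1, 6)}

def resolve(name):
    """Return the IP for a name, or None if it is unknown."""
    return dns_table.get(name)

# Adding rose6 requires exactly one change, made at the server.
dns_table["rose6"] = "192.168.0.6"

print(resolve("rose3"))   # 192.168.0.3
print(resolve("rose6"))   # 192.168.0.6
```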
Sub-domains are often referred to as child domains. For example, the fully qualified domain
name (FQDN) for a computer within a human resources group could be designated as
jacob.hr.microsoft.com. Here, jacob is the host name, hr is the child domain, and microsoft.com
is the parent domain.
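Splitting the FQDN from the example into its parts is a one-liner:

```python
# Decompose the example FQDN into host, child domain, and parent domain.
fqdn = "jacob.hr.microsoft.com"
host, child, *rest = fqdn.split(".")
parent = ".".join(rest)

print(host, child, parent)   # jacob hr microsoft.com
```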
2.12 Conclusions
The DNS server plays a very important role in the network, resolving the IP request for a particular URL.
The DNS server basically stores the bindings between a URL (domain name) and the IP address of the
server on which the web site is running.
– Consistently the single most commonly listed program for any security initiatives, in both
public and private sector
• Communicate security policies, procedures, and processes
• Communicate and clarify roles and responsibilities
• Communicate lessons learned and share experiences for improvements
• Compliance requirement
3.15 VULNERABILITY
A vulnerability is a weakness in an information security system that could be
exploited by a threat; that is, a weakness in network system components or in network security
processes and procedures. Common types of vulnerabilities are errors in the design or configuration of
network system components, communication links, operating systems, applications (including web-based
ones), databases, protocols, services, etc.
The widespread use of many COTS (commercial off-the-shelf ) products
means that once a vulnerability is discovered, it can be exploited by attackers who target
many of the thousands or even millions of systems that have the vulnerable product installed.
A lack of security expertise by most Internet users means that vendor security
patches to remove the vulnerabilities will not be applied promptly.
3.18 VIRUSES
A virus is a small piece of software (code) that piggybacks on real programs, the O.S., or e-mails.
Each time the host program runs, the virus gets executed.
Type of Viruses
Executable Viruses
Boot sector viruses
E-mail viruses
Executable Viruses
Traditional Viruses
– pieces of code attached to a legitimate program
– run when the legitimate program gets executed
– load themselves into memory and look around to see if they can find any other
programs on disk
E-mail Viruses
– Moves around in e-mail messages
– Replicates itself by automatically mailing itself to dozens of people in the
victim‗s e-mail address book
– Example: Melissa virus etc.
– Some e-mail viruses don't even require a double-click; they launch when you
view the infected message in the preview pane of your e-mail software
Macro Viruses
– Infect programming environments rather than OS or files.
– Almost any application that has its own macro programming environment
– MS Office (Word, Excel, Access…)
– Visual Basic
– Application loads a file containing macro and executes the macro upon
loading or runs it based on some application based trigger.
Antivirus programs offer protection against viruses, worms, and Trojans.
3.26 SPYWARE
A program that covertly gathers information about your online activities
without your knowledge is called spyware. Spyware usually enters the computer while
downloading or installing a new program and allows intruders to monitor and access your
computer.
Spyware differs from viruses and worms in that it does not usually self-replicate.
However, spyware – by design – exploits infected computers for commercial gain.
Typical tactics furthering this goal include:
delivery of unsolicited pop-up advertisements;
theft of personal information (including financial information such as credit
card numbers);
monitoring of Web-browsing activity for marketing purposes; or
routing of HTTP requests to advertising sites.
3.27 KEYLOGGER
Keylogger surveillance software has the capability to record keystrokes and capture
screenshots, saving them to a log file (usually encrypted) for future use. It captures every
key pressed on the computer, to be viewed by the unauthorized user. Keylogger
software can record instant messages, e-mail and any information you type at any time on
your keyboard. The log file created by the keylogger can then be saved to a specific location
or mailed to the concerned person. The software will also record any e-mail address you use
and website URLs visited by you.
Don’ts
Do not install pirated software such as
o Operating System Software (Windows, Unix, etc..).
o Application Software (Office, Database..etc).
o Security Software (Antivirus, Antispyware..etc).
Note: Remember, some pirated software can itself be a rogue program.
Do not plug the computer directly to the wall outlet as power surges may
destroy computer. Instead use a genuine surge protector to plug a computer.
Don't eat food or drink around the PC.
Don't place any magnets near the PC.
Never spray or squirt any liquid onto any computer component. If a spray is
needed, spray the liquid onto a cloth and then use that cloth to rub down the component.
Don't open e-mail attachments which have double extensions
Figure: Firewall
Many people have asked the question, "Is a router with an access list a firewall?" The
answer is yes; a packet filter firewall can essentially be a router with packet filtering
capabilities. (Almost all routers can do this.) Packet filters are an attractive option where your
budget is limited and where security requirements are deemed rather low.
But there are drawbacks. Basic packet filtering firewalls are susceptible to IP
spoofing, where an intruder tries to gain unauthorized access to computers by sending
messages to a computer with an IP address indicating that the message is coming from a
trusted host. Information security experts believe that packet filtering firewalls offer the least
security because they allow a direct connection between endpoints through the firewall. This
leaves the potential for a vulnerability to be exploited. Another shortcoming is that this form
of firewall rarely provides sufficient logging or reporting capabilities.
3.36 Packet filtering firewall advantages
A single device can filter traffic for the entire network
Extremely fast and efficient in scanning traffic
Inexpensive
Minimal effect on other resources, network performance and end-user experience
Packet filtering firewall disadvantages
Because traffic filtering is based entirely on IP address or port information, packet
filtering lacks broader context that informs other types of firewalls
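Because packet filtering decides purely on IP address and port information, its rule evaluation can be sketched as a first-match-wins list (all addresses, ports, and rules below are made-up examples):

```python
# Toy packet filter: the first matching rule wins; rules see only IP and port.
RULES = [
    ("deny",  "203.0.113.9", None),  # block one untrusted host entirely
    ("allow", None,          80),    # allow web traffic from anyone else
    ("deny",  None,          None),  # default rule: drop everything else
]

def filter_packet(src_ip, dst_port):
    for action, ip, port in RULES:
        if (ip is None or ip == src_ip) and (port is None or port == dst_port):
            return action

print(filter_packet("198.51.100.7", 80))   # allow
print(filter_packet("203.0.113.9", 80))    # deny
print(filter_packet("198.51.100.7", 22))   # deny
```

Note how nothing in the rules can depend on connection state or payload content, which is exactly the "lack of broader context" limitation described above.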
Chart comparing the advantages and disadvantages of the five different types of
firewalls
Compare the advantages and disadvantages of the five different types of firewalls to
find the ones that best suit your business needs.
3.38 Application-level gateway
This kind of device -- technically a proxy and sometimes referred to as a proxy
firewall -- functions as the only entry point to and exit point from the network. Application-
level gateways filter packets not only according to the service for which they are intended --
as specified by the destination port -- but also by other characteristics, such as the HTTP
request string.
While gateways that filter at the application layer provide considerable data security,
they can dramatically affect network performance and can be challenging to manage.
Application-level gateway advantages
An NGFW from Palo Alto Networks, which was among the first vendors to offer
advanced features, such as identifying the applications producing the traffic passing through
and integrating with other major network components, like Active Directory.
3.40 Next-generation firewall
A typical NGFW combines packet inspection with stateful inspection and also
includes some variety of deep packet inspection (DPI), as well as other network security
systems, such as an IDS/IPS, malware filtering and antivirus.
While packet inspection in traditional firewalls looks exclusively at the protocol
header of the packet, DPI looks at the actual data the packet is carrying. A DPI firewall tracks
the progress of a web browsing session and can notice whether a packet payload, when
assembled with other packets in an HTTP server reply, constitutes a legitimate HTML-
formatted response.
A software-based firewall, or host firewall, runs on a server or other device. Host firewall
software needs to be installed on each device requiring protection. As such, software-based
firewalls consume some of the host device's CPU and RAM resources.
Choosing the ideal firewall begins with understanding the architecture and
functions of the private network being protected but also calls for understanding the
different types of firewalls and firewall policies that are most effective for the organization.
Current usage of the term big data tends to refer to the use of predictive analytics, user
behavior analytics, or certain other advanced data analytics methods that
extract value from big data, and seldom to a particular size of data set.
Big data is essentially the wrangling of the three Vs to gain insights and make
predictions, so it's useful to take a closer look at each attribute.
4.2.1 Volume
Big data is enormous. While traditional data is measured in familiar sizes like megabytes,
gigabytes and terabytes, big data is stored in petabytes and zettabytes.
To grasp the enormity of difference in scale, consider this comparison from the Berkeley
School of Information: one gigabyte is the equivalent of a seven minute video in HD,
while a single zettabyte is equal to 250 billion DVDs. This is just the tip of the iceberg.
According to a report by EMC, the digital universe is doubling in size every two
years and by 2020 was expected to reach 44 zettabytes (about 44 trillion gigabytes).
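As a quick sanity check on these units (using decimal SI prefixes):

```python
# Decimal SI units: a zettabyte is a trillion gigabytes.
GB = 10**9     # gigabyte
ZB = 10**21    # zettabyte

print(44 * ZB // GB)   # 44 ZB expressed in gigabytes: 44 trillion
```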
Big data technologies provide the architecture for handling data at this scale. Without
appropriate solutions for storing and processing it, it would be impossible to mine for insights.
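The scale jump can be checked with simple arithmetic. Assuming a 4.7 GB single-layer DVD, the count per zettabyte comes out near 2 x 10^11, the same order of magnitude as the 250 billion figure quoted above:

```python
# Decimal (SI) storage units.
GB = 10**9
PB = 10**15
ZB = 10**21

# One zettabyte expressed in petabytes and in 4.7 GB single-layer DVDs.
petabytes_per_zb = ZB // PB
dvds_per_zb = ZB / (4.7 * GB)

print(f"1 ZB = {petabytes_per_zb:,} PB")
print(f"1 ZB is roughly {dvds_per_zb:.2e} DVDs")
```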
4.2.2 Velocity
From the speed at which it's created, to the amount of time needed to analyze
it, everything about big data is fast. Some have described it as trying to drink from a fire
hose.
Companies and organizations must have the capability to harness this data and
generate insights from it in real time; otherwise it is not very useful.
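Processing data as it arrives, rather than after it has been stored, can be illustrated with a sliding-window average over a stream. This is a toy sketch; real velocity workloads use frameworks such as Storm or Spark Streaming:

```python
from collections import deque

def rolling_mean(stream, window=3):
    """Yield the mean of the last `window` values as each value arrives."""
    buf = deque(maxlen=window)
    for value in stream:
        buf.append(value)
        yield sum(buf) / len(buf)

# Each reading is analysed the moment it arrives, not batched for later.
readings = [10, 12, 11, 30, 29]
print([round(m, 2) for m in rolling_mean(readings)])
```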
4.2.3 Variety
Roughly 95% of all big data is unstructured, meaning it does not fit easily
into a straightforward, traditional model. Everything from emails and videos to
scientific and meteorological data can constitute a big data stream, each with
its own unique attributes.
Big data analytics describes the process of uncovering trends, patterns, and correlations
in large amounts of raw data to help make data-informed decisions. These
processes use familiar statistical analysis techniques—like clustering and
regression—and apply those to more extensive datasets with the help of newer tools.
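Clustering, one of the techniques just mentioned, can be demonstrated with a minimal one-dimensional k-means written with the standard library only. This is a teaching sketch under simplifying assumptions (naive initialisation, fixed iteration count); real workloads would use a library and distributed tooling:

```python
def kmeans_1d(points, k=2, iters=20):
    """Minimal 1-D k-means clustering."""
    centers = sorted(points)[:k]  # naive initialisation: k smallest points
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Assign each point to its nearest centre.
            nearest = min(range(k), key=lambda i: abs(p - centers[i]))
            clusters[nearest].append(p)
        # Recompute each centre as the mean of its cluster.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

data = [1.0, 1.2, 0.8, 9.0, 9.5, 8.7]
centers, clusters = kmeans_1d(data)
print(sorted(round(c, 2) for c in centers))  # two well-separated centres
```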
Big data has been a buzzword since the early 2000s, when software and
hardware capabilities made it possible for organizations to handle large amounts of
unstructured data. Since then, new technologies—from Amazon to smartphones—
have contributed even more to the substantial amounts of data available to
organizations.
With the explosion of data, early innovation projects like Hadoop, Spark, and
NoSQL databases were created for the storage and processing of big data. This field
continues to evolve as data engineers look for ways to integrate the vast
amounts of complex information created by sensors, networks, transactions, smart
devices, web usage, and more. Even now, big data analytics methods are being used
with emerging technologies, like machine learning, to discover and scale more
complex insights.
Raw or unstructured data that is too diverse or complex for a warehouse may be assigned
metadata and stored in a data lake.
Data big or small requires scrubbing to improve data quality and get stronger results; all
data must be formatted correctly, and any duplicative or irrelevant data must be
eliminated or accounted for. Dirty data can obscure and mislead, creating flawed insights.
Data mining sorts through large datasets to identify patterns and relationships by
identifying anomalies and creating data clusters.
Predictive analytics uses an organization's historical data to make predictions
about the future, identifying upcoming risks and opportunities.
Deep learning imitates human learning patterns by using artificial intelligence
and machine learning to layer algorithms and find patterns in the most complex
and abstract data.
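Predictive analytics in its simplest form fits a trend to historical data and extrapolates it. A least-squares line over monthly sales is enough to show the idea; the sales figures here are hypothetical:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    b = num / den
    a = my - b * mx
    return a, b

months = [1, 2, 3, 4, 5]            # historical periods
sales = [100, 110, 125, 135, 150]   # hypothetical monthly sales
a, b = fit_line(months, sales)
print(f"forecast for month 6: {a + b * 6:.1f}")  # 161.5
```

Real predictive analytics layers far richer models on top, but the pattern is the same: learn from history, project forward.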
The importance of big data does not revolve around how much data a company has but
how a company utilises the collected data. Every company uses data in its own way; the
more efficiently a company uses its data, the more potential it has to grow. The company
can take data from any source and analyse it to find answers which will enable:
i. Cost Savings: Big Data tools like Hadoop and cloud-based analytics can bring cost
advantages to a business when large amounts of data are to be stored, and
these tools also help in identifying more efficient ways of doing business.
ii. Time Reductions: The high speed of tools like Hadoop and in-memory analytics
can easily identify new sources of data, which helps businesses analyze data
immediately and make quick decisions based on the learnings.
iii. Understand the market conditions: By analyzing big data you can get a better
understanding of current market conditions. For example, by analyzing customers'
purchasing behavior, a company can find out the products that are sold the most
and produce products according to this trend. By this, it can get ahead of its
competitors.
iv. Control online reputation: Big data tools can do sentiment analysis. Therefore,
you can get feedback about who is saying what about your company. If you want to
monitor and improve the online presence of your business, then, big data tools can
help in all this.
v. Using Big Data Analytics to Boost Customer Acquisition and Retention
The customer is the most important asset any business depends on. There is no
single business that can claim success without first having to establish a solid
customer base. However, even with a customer base, a business cannot afford to
disregard the high competition it faces. If a business is slow to learn what customers
are looking for, then it is very easy to begin offering poor quality products. In the
end, loss of clientele will result, and this creates an adverse overall effect on
business success. The use of big data allows businesses to observe various
customer-related patterns and trends. Observing customer behaviour is important for
building loyalty.
vi. Using Big Data Analytics to Solve Advertisers Problem and Offer
Marketing Insights
The insurance industry holds importance not only for individuals but also
business companies. The reason insurance holds a significant place is because it
supports people during times of adversities and uncertainties. The data collected from
these sources are of varying formats and change at tremendous speeds.
Collecting information
As big data refers to gathering data from disparate sources, this feature creates a
crucial use case for the insurance industry to pounce on. For example, when a customer
intends to buy car insurance, companies can obtain information from which they can
calculate the safety level for driving in the buyer's vicinity as well as his past
driving records. On this basis they can effectively calculate the cost of the car insurance as
well.
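A premium calculation of this kind can be sketched as a simple risk-weighted formula. The base rate, risk scale and claim loading below are hypothetical numbers chosen only to illustrate the idea:

```python
def car_premium(base_rate, area_risk, past_claims):
    """Toy premium estimate from the two factors described above.

    area_risk is a score in [0, 1] for the buyer's vicinity;
    each past claim adds a hypothetical 15% loading.
    """
    return base_rate * (1 + area_risk) * (1 + 0.15 * past_claims)

print(car_premium(500.0, 0.2, 0))  # safe area, clean record
print(car_premium(500.0, 0.6, 2))  # riskier area, two past claims
```

Real insurers combine many more variables with actuarial models, but the mechanics of turning collected data into a price follow this shape.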
Fraud detection
Insurance fraud is a common occurrence. The big data use case for reducing fraud is highly
effective.
Welfare schemes
Cyber security
The amount of data in the banking sector is skyrocketing every second. Study and analysis
of big data can help detect -
More and more tools offer the possibility of real-time processing of Big Data.
4.8.1 Storm
Storm, which is now owned by Twitter, is a real-time distributed computation system.
4.8.2 Cloudera
Cloudera offers the Cloudera Enterprise RTQ tools, which provide real-time,
interactive analytical queries of the data stored in HBase or HDFS.
4.8.3 Gridgrain
4.8.4 SpaceCurve
With the increase in computer and mobile users, data storage has become a priority in all
fields. Large and small-scale businesses today thrive on their data, and they spend a huge
amount of money to maintain it. This requires strong IT support and a storage hub. Not all
businesses can afford the high cost of in-house IT infrastructure and backup support
services. For them, cloud computing is a cheaper solution. Its efficiency in storing data,
computation and lower maintenance costs has succeeded in attracting even bigger
businesses as well.
Cloud computing decreases the hardware and software demand on the user's side. The
only thing the user must be able to run is the cloud computing system's interface
software, which can be as simple as a Web browser, and the cloud network takes care of
the rest. We have all experienced cloud computing at some instant of time; some of the
popular cloud services we have used or are still using are mail services like Gmail,
Hotmail or Yahoo.
While accessing e-mail service our data is stored on cloud server and not on our
computer. The technology and infrastructure behind the cloud is invisible. It is less
important whether cloud services are based on HTTP, XML, Ruby, PHP or other specific
technologies as far as it is user friendly and functional. An individual user can connect to
cloud system from his/her own devices like desktop, laptop or mobile.
Cloud computing serves small businesses with limited resources effectively; it gives
small businesses access to technologies that previously were out of their reach. Cloud
computing helps small businesses convert their maintenance costs into profit. Let's see
how.
In an in-house IT server, you have to pay a lot of attention and ensure that there are no
flaws into the system so that it runs smoothly. And in case of any technical glitch you are
completely responsible; it will seek a lot of attention, time and money for repair.
Whereas, in cloud computing, the service provider takes the complete responsibility of
the complication and the technical faults.
The potential for cost saving is the major reason of cloud services adoption by many
organizations. Cloud computing gives the freedom to use services as per the requirement
and pay only for what you use. Due to cloud computing it has become possible to run IT
operations as an outsourced unit without much in-house resources.
1. Private Cloud: Here, computing resources are deployed for one particular
organization. This method is more often used for intra-business interactions,
where the computing resources are governed, owned and operated by the same
organization.
2. Community Cloud: Here, computing resources are provided for a community
of organizations.
3. Public Cloud: This type of cloud is usually used for B2C (Business to Consumer)
interactions. Here the computing resources are owned, governed and operated
by a government, academic or business organization.
4. Hybrid Cloud: This type of cloud can be used for both types of interactions -
B2B (Business to Business) or B2C (Business to Consumer). This deployment
method is called hybrid cloud as the computing resources are bound together by
different clouds.
Traditionally, software applications had to be purchased upfront and then installed on
your computer. SaaS users, on the other hand, subscribe to the software instead of
purchasing it, usually on a monthly basis via the internet.
Subscribers can be one or two people or many thousands of employees in a corporation.
SaaS is compatible with all internet-enabled devices. Many important tasks like
accounting, sales, invoicing and planning can all be performed using SaaS.
To understand this in simple terms, compare it with painting a picture, where you are
provided with paint colors, different paint brushes and paper by your school teacher and
you just have to draw a beautiful picture using those tools.
PaaS services are constantly updated & new features added. Software developers, web
developers and business can benefit from PaaS. It provides platform to support
application development. It includes software support and management services, storage,
networking, deploying, testing, collaborating, hosting and maintaining applications.
IaaS is a complete package for computing. For small-scale businesses looking to cut
costs on IT infrastructure, IaaS is one of the solutions. Annually, a lot of money is
spent on maintenance and on buying new components like hard drives, network
connections, external storage devices etc., which a business owner could have saved for
other expenses by using IaaS.
Cloud computing distributes the file system over multiple hard disks and machines. Data is
never stored in one place only and in case one unit fails the other will take over
automatically. The user disk space is allocated on the distributed file system, while
another important component is algorithm for resource allocation. Cloud computing is a
strong distributed environment and it heavily depends upon strong algorithm.
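The failover behaviour described above, where data is replicated across machines so that a failed unit is taken over automatically, can be sketched as a toy in-memory model. The node names and replica count are illustrative assumptions, not a real distributed file system:

```python
class ReplicatedStore:
    """Keep every block on several nodes; read from the first live one."""

    def __init__(self, nodes, replicas=2):
        self.nodes = {n: {} for n in nodes}  # node name -> stored blocks
        self.live = set(nodes)
        self.replicas = replicas

    def put(self, key, data):
        # Write the block to the first `replicas` nodes.
        for node in list(self.nodes)[: self.replicas]:
            self.nodes[node][key] = data

    def get(self, key):
        # Any live node holding a replica can serve the read.
        for node, blocks in self.nodes.items():
            if node in self.live and key in blocks:
                return blocks[key]
        raise KeyError(key)

store = ReplicatedStore(["node-a", "node-b", "node-c"])
store.put("report.doc", b"quarterly figures")
store.live.discard("node-a")        # simulate a node failure
print(store.get("report.doc"))      # the replica on node-b takes over
```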
Cloud computing architecture comprises many cloud components, which are loosely
coupled. We can broadly divide the cloud architecture into two parts:
Each of the ends is connected through a network, usually Internet. The following
diagram shows the graphical view of cloud computing architecture:
For software developers and testers virtualization comes very handy, as it allows
developer to write code that runs in many different environments and more importantly to
test that code.
Virtualization is mainly used for three main purposes - Network Virtualization, Server
Virtualization and Storage Virtualization
a. Network Virtualization: It is a method of combining the available resources in a
network by splitting up the available bandwidth into channels, each of which is
independent and can be assigned to a particular server or device in real time.
Virtualization is the key to unlocking the cloud system; what makes virtualization so
important for the cloud is that it decouples the software from the hardware. For example,
PCs can use virtual memory to borrow extra memory from the hard disk. Usually the hard
disk has a lot more space than memory. Although virtual disks are slower than real
memory, if managed properly the substitution works perfectly. Likewise, there is
software which can imitate an entire computer, which means one computer can perform
the functions of 20 computers.
o Virtualization
o Service-Oriented Architecture (SOA)
o Grid Computing
o Utility Computing
4.14.1 Virtualization
The concept of virtualization in cloud computing increases the use of virtual machines. A
virtual machine is a software program that works like a physical computer and can
perform tasks such as running applications or programs on the user's demand.
Types of Virtualization
i. Hardware virtualization
ii. Server virtualization
iii. Storage virtualization
iv. Operating system virtualization
v. Data Virtualization
Service Provider and Service consumer are the two major roles within SOA.
Mainly, grid computing is used in the ATMs, back-end infrastructures, and marketing
research.
Large organizations such as Google and Amazon established their own utility services
for computing storage and application.
4.4.2 Portability
This is another challenge of cloud computing: applications should be easily migrated from
one cloud provider to another, and there must be no vendor lock-in. However, this is not yet
possible because each cloud provider uses different standard languages for their platforms.
4.4.3 Interoperability
It means an application on one platform should be able to incorporate services from other
platforms. This is made possible via web services, but developing such web services is very
complex.
Introduction to AI Levels
Gaming − AI plays a crucial role in strategic games such as chess, poker, tic-tac-toe, etc.,
where the machine can think of a large number of possible positions based on heuristic
knowledge.
Natural Language Processing − It is possible to interact with the computer that
understands natural language spoken by humans.
Expert Systems − There are some applications which integrate machine,
software, and special information to impart reasoning and advising. They provide
explanation and advice to the users.
Vision Systems − These systems understand, interpret, and comprehend visual input on
the computer. For example,
o A spying aeroplane takes photographs, which are used to figure out spatial
information or map of the areas.
o Doctors use clinical expert system to diagnose the patient.
o Police use computer software that can recognize the face of criminal with the
stored portrait made by forensic artist.
Speech Recognition − Some intelligent systems are capable of hearing and
comprehending language in terms of sentences and their meanings while a human
talks to them. They can handle different accents, slang words, noise in the background,
changes in a human's voice due to a cold, etc.
Handwriting Recognition − The handwriting recognition software reads the text written
on paper by a pen or on screen by a stylus. It can recognize the shapes of the letters and
convert it into editable text.
Intelligent Robots − Robots are able to perform the tasks given by a human. They
have sensors to detect physical data from the real world such as light, heat, temperature,
movement, sound, bump, and pressure. They have efficient processors, multiple
sensors and large memory, so as to exhibit intelligence.
4.17.1 AI in Astronomy
o Artificial Intelligence can be very useful in solving complex problems about the
universe. AI technology can be helpful for understanding the universe, such as how it
works, its origin, etc.
4.17.2 AI in Healthcare
o In the last five to ten years, AI has become more advantageous for the healthcare
industry and is going to have a significant impact on it.
o Healthcare industries are applying AI to make better and faster diagnoses than
humans. AI can help doctors with diagnoses and can give a warning when a patient's
condition is worsening, so that medical help can reach the patient before hospitalization.
4.17.3 AI in Gaming
o AI can be used for gaming purposes. AI machines can play strategic games
like chess, where the machine needs to think of a large number of possible positions.
4.17.4 AI in Finance
o AI and finance industries are the best matches for each other. The finance industry is
implementing automation, chatbot, adaptive intelligence, algorithm trading, and machine learning
into financial processes.
4.17.9 AI in Robotics:
o Artificial Intelligence has a remarkable role in robotics. Usually, general robots
are programmed so that they can perform some repetitive task, but with the help
of AI, we can create intelligent robots which can perform tasks from their own
experience without being pre-programmed.
o Humanoid robots are the best examples of AI in robotics; recently the intelligent
humanoid robots named Erica and Sophia have been developed, which can talk
and behave like humans.
4.17.10 AI in Entertainment
o We are already using some AI-based applications in our daily life through
entertainment services such as Netflix or Amazon. With the help of ML/AI
algorithms, these services show recommendations for programs or shows.
4.17.11 AI in Agriculture
o Agriculture is an area which requires various resources, such as labor, money and
time, for the best results. Nowadays agriculture is becoming digital, and AI is
emerging in this field. Agriculture is applying AI for agricultural robotics, soil and
crop monitoring, and predictive analysis. AI in agriculture can be very helpful for farmers.
4.17.12 AI in E-commerce
o AI is providing a competitive edge to the e-commerce industry, and it is becoming
more in demand in the e-commerce business. AI is helping shoppers discover
associated products in a recommended size, color, or even brand.
4.17.13 AI in education:
o AI can automate grading so that the tutor can have more time to teach. AI chatbot
can communicate with students as a teaching assistant.
o In the future, AI can work as a personal virtual tutor for students, which will be
easily accessible at any time and any place.
Note: Among all of the above, Machine learning plays a crucial role in AI.
Machine learning and deep learning are the ways of achieving AI in real life.
NLP plays an important role in AI: without NLP, an AI agent cannot work on human
instructions, but with the help of NLP, we can instruct an AI system in our own language.
Today AI is all around us, and thanks to NLP we can easily ask Siri, Google Assistant or
Cortana to help us in our language.
Natural language processing application enables a user to communicate with the system in
their own words directly.
o Speech
o Text
There is some speech recognition software which has a limited vocabulary of words and
phrases. Such software requires unambiguous spoken language to understand and perform
a specific task. Today there are various programs and devices which contain speech
recognition technology, such as Cortana, Google Assistant and Apple Siri.
We need to train our speech recognition system to understand our language. In earlier
days, these systems were designed only to convert speech to text, but now there are
various devices which can directly convert speech into commands.
1. Speaker Dependent
2. Speaker Independent
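The step from recognised speech to a command can be sketched as a small vocabulary lookup over the transcribed text. The phrases and command names below are made up for illustration; real systems use far richer language models:

```python
# Map transcribed phrases to device commands using a limited
# vocabulary, like the small-vocabulary systems described above.

COMMANDS = {
    "turn on the light": "LIGHT_ON",
    "turn off the light": "LIGHT_OFF",
    "what time is it": "TELL_TIME",
}

def to_command(transcript: str) -> str:
    """Normalise the transcript and look it up in the vocabulary."""
    text = transcript.lower().strip(" ?.!")
    return COMMANDS.get(text, "UNKNOWN")

print(to_command("Turn on the light"))   # LIGHT_ON
print(to_command("Play some music"))     # UNKNOWN
```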
Artificial Intelligence has facilitated the processing of a large amount of data and its use
in the industry. The number of tools and frameworks available to data scientists and
developers has increased with the growth of AI and ML.
4.26 Conclusion
Big Data, Cloud Computing and Artificial Intelligence are the trending technologies in
today's world and are going to play a pivotal role in the growth and development of the
human race.
5.3 Deployment
Where do Web applications run? The server environment can be proprietary or open
source. Web application development has been driven by a move towards open source and
standardized components. This trend has spread also to the server environment where Web
servers run. Considering that there are many small organizations, small companies and
non-profit organizations that run their own Web servers, there are both business and
technical reasons for the move to open source. The standard, bare-bones Web server
environment is commonly referred to as LAMP. Each of the four letters in LAMP stands
for one component of the environment. The components are:
o Linux, the operating system;
o Apache, the Web server;
o MySQL, the database management system;
o PHP (or Perl or Python), the scripting language.
DELICIOUS
Del.icio.us is a Web application that helps users manage and share their bookmarks. As
the amount of information on the Web has grown, it has become more and more difficult to
keep track of the information you find and want to remember for future reference. The
bookmark feature in browsers was intended for this purpose but it has limited functionality.
Del.icio.us lets users store bookmarks and tag those bookmarks with user-defined terms.
The bookmarks and tags are therefore available to the user from anywhere on the Internet,
and the tags make it easier to search for bookmarks. Further, by sharing bookmarks and
tags, users can help each
other find related Web pages. The system can also suggest tags that other users have applied to
the same document, thus giving the user ideas on how to classify a document.
WIKI SYSTEMS
Wiki systems are a form of content management system that enable a repository of
information that may be updated easily by its users. Wiki systems such as wikipedia.org are
similar to blogs in principle as they are based on user participation to add content. The
fundamental element of wikis is pages, as in typical Web sites, as opposed to blogs, in which
the basic elements are posts (which can be displayed together within the same pages). Wikis
allow users not only to read but also to update the content of the pages. The underlying
assumption is that over time the wiki will represent the consensus knowledge (or at least the
opinions) of all the users. Like blogs, wikis exhibit high link density. In addition, wikis have
high linking within
the same wiki as they provide a simple syntax for the user to link to pages, both to existing
pages and to those yet to be created. Many wikis also provide authentication and versioning to
restrict editing by users and to be able to recover the history.
5.6 HTML
HTML stands for Hyper Text Markup Language. It is the standard markup language
for creating Web pages. It describes the structure of a Web page.
HTML consists of a series of elements which tell the browser how to display the
content.
<!DOCTYPE html>
<html>
<head>
<title>Page Title</title>
</head>
<body>
<h1>This is the Heading</h1>
<p>Here is a Paragraph.</p>
</body>
</html>
In the above example:
The <!DOCTYPE html> declaration defines that this document is
an HTML5 document
The <html> element is the root element of an HTML page
The <head> element contains meta information about the HTML page
The <title> element specifies a title for the HTML page (which is
shown in the browser's title bar or in the page's tab)
The <body> element defines the document's body, and is a container
for all the visible contents, such as headings, paragraphs, images, hyperlinks,
tables, lists, etc.
Web browsers (Chrome, Edge, Firefox, Safari) read HTML documents and display them
correctly. A browser does not display the HTML tags, but uses them to determine how to
display the document.
HyperLinks - Links are found in nearly all web pages. Links allow users to click their way
from page to page. HTML links are hyperlinks. We can click on a link and jump to another
document.
When we move the mouse over a link, the mouse arrow will turn into a little hand. A link
does not have to be text. A link can be an image or any other HTML element.
The HTML <a> tag defines a hyperlink. It has the following syntax:
<a href="url">link text</a>
The href attribute of the <a> element indicates the link's destination. The link text is the part
that will be visible to the reader. Clicking on the link text, will send the reader to the specified
URL address.
5.7 CSS
CSS stands for Cascading Style Sheets. It is the language we use to style a Web page. CSS
describes how HTML elements are to be displayed on screen, paper, or in other media. CSS
saves a lot of work. It can control the layout of multiple web pages all at once. External
stylesheets are stored in CSS files.
HTML was NEVER intended to contain tags for formatting a web page. HTML was
created to describe the content of a web page, like:
<h1>This is a heading</h1>
<p>This is a paragraph.</p>
When tags like <font>, and color attributes were added to the HTML 3.2 specification, it started
a nightmare for web developers. Development of large websites, where fonts and color
information were added to every single page, became a long and expensive process.
CSS Syntax:
The selector points to the HTML element you want to style. The declaration block
contains one or more declarations separated by semicolons. Each declaration includes a CSS
property name and a value, separated by a colon. Multiple CSS declarations are separated with
semicolons, and declaration blocks are surrounded by curly braces.
Example
p{
color: red;
text-align: center;
}
p is a selector in CSS (it points to the HTML element you want to style: <p>).
color is a property, and red is the property value.
text-align is a property, and center is the property value.
In this example all <p> elements will be center-aligned, with a red text color.
5.8 PHP
PHP is mainly focused on server-side scripting, so you can do anything any other CGI program
can do, such as collect form data, generate dynamic page content, or send and receive cookies.
But PHP can do much more.
There are three main areas where PHP scripts are used.
o Server-side scripting. This is the most traditional and main target field for PHP. You
need three things to make this work: the PHP parser (CGI or server module), a web server and a
web browser. You need to run the web server, with a connected PHP installation. You can access
the PHP program output with a web browser, viewing the PHP page through the server. All these
can run on your home machine if you are just experimenting with PHP programming. See the
installation instructions section for more information.
o Command line scripting. You can make a PHP script run without any server or
browser. You only need the PHP parser to use it this way. This type of usage is ideal for scripts
regularly executed using cron (on *nix or Linux) or Task Scheduler (on Windows). These scripts
can also be used for simple text processing tasks. See the section about Command line usage of
PHP for more information.
o Writing desktop applications. PHP is probably not the very best language to create a
desktop application with a graphical user interface, but if you know PHP very well and would
like to use some advanced PHP features in your client-side applications, you can also use
PHP-GTK to write such programs. You also have the ability to write cross-platform
applications this way.
PHP can be used on all major operating systems, including Linux, many Unix variants (including
HP-UX, Solaris and Open BSD), Microsoft Windows, macOS, RISC OS, and probably others.
PHP also has support for most of the web servers today. This includes Apache, IIS, and many
others. And this includes any web server that can utilize the FastCGI PHP binary, like lighttpd
and nginx. PHP works as either a module, or as a CGI processor.
So with PHP, you have the freedom of choosing an operating system and a web server.
Furthermore, you also have the choice of using procedural programming or object oriented
programming (OOP), or a mixture of them both.
With PHP you are not limited to outputting HTML. PHP's abilities include outputting images,
PDF files and even Flash movies (using libswf and Ming) generated on the fly. You can also
easily output any text, such as XHTML and any other XML file. PHP can autogenerate these
files and save them in the file system, instead of printing them out, forming a server-side
cache for your dynamic content.
One of the strongest and most significant features in PHP is its support for a wide range of
databases. Writing a database-enabled web page is incredibly simple using one of the
database-specific extensions (e.g., for MySQL), using an abstraction layer like PDO, or
connecting to any database supporting the Open Database Connectivity (ODBC) standard
via the ODBC extension. Other databases may utilize cURL or sockets, as CouchDB does.
PHP also has support for talking to other services using protocols such as LDAP, IMAP, SNMP,
NNTP, POP3, HTTP, COM (on Windows) and countless others. You can also open raw network
sockets and interact using any other protocol. PHP has support for the WDDX complex data
exchange between virtually all Web programming languages.
Talking about interconnection, PHP has support for instantiation of Java objects and using them
transparently as PHP objects.
PHP has useful text processing features, which includes the Perl compatible regular expressions
(PCRE), and many extensions and tools to parse and access XML documents.
5.9 JAVA
Java is a programming language and computing platform. Java is used to develop mobile apps,
web apps, desktop apps, games and much more. It was originally designed for embedded
network applications running on multiple platforms. It is a portable, object-oriented, interpreted
language.
Although it is primarily used for Internet-based applications, Java is a simple, efficient, general-
purpose language. Java is extremely portable. The same Java application will run identically on
any computer, regardless of hardware features or operating system, as long as it has a Java
interpreter. Besides portability, another of Java's key advantages is its set of security features
which protect a PC running a Java program not only from problems caused by erroneous code
but also from malicious programs (such as viruses). A Java applet downloaded from the Internet
is safe to run, because Java's security features prevent these types of applets from accessing a
PC's hard drive or network connections. An applet is typically a small Java program that is
embedded within an HTML page.
Java can be considered both a compiled and an interpreted language because its source code is
first compiled into a binary byte-code. This byte-code runs on the Java Virtual Machine (JVM),
which is usually a software-based interpreter. The use of compiled byte-code allows the
interpreter (the virtual machine) to be small and efficient (and nearly as fast as the CPU running
native, compiled code).
In addition, this byte-code gives Java its portability: it will run on any JVM that is correctly
implemented, regardless of computer hardware or software configuration. Most Web browsers
(such as Microsoft Internet Explorer or Netscape Communicator) contain a JVM to run Java
applets.
Java is a dynamic language where you can safely modify a program while it is running. This is
especially important for network applications that cannot afford any downtime. Another key
feature of Java is that it is an open standard with publicly available source code.
Java Virtual Machine (JVM) is an engine that provides a runtime environment to drive the
Java Code or applications. It converts Java bytecode into machine language. JVM is a part of
the Java Run Environment (JRE). In other programming languages, the compiler produces
machine code for a particular system. However, the Java compiler produces code for a Virtual
Machine known as Java Virtual Machine. Here are some important Java applications:
Developing Android apps
Creating enterprise software
A wide range of mobile Java applications
Scientific computing applications
Big-data analytics
Programming hardware devices
Server-side technologies such as Apache, JBoss, and GlassFish
5.10 Python
Python is an interpreted, object-oriented, high-level programming language with dynamic
semantics. Its high-level built in data structures, combined with dynamic typing and dynamic
binding, make it very attractive for Rapid Application Development, as well as for use as a
scripting or glue language to connect existing components together. Python's simple, easy to
learn syntax emphasizes readability and therefore reduces the cost of program maintenance.
Python supports modules and packages, which encourages program modularity and code reuse.
Often, programmers fall in love with Python because of the increased productivity it provides.
Since there is no compilation step, the edit-test-debug cycle is incredibly fast. Debugging
Python programs is easy: a bug or bad input will never cause a segmentation fault. Instead,
when the interpreter discovers an error, it raises an exception. When the program doesn't catch
the exception, the interpreter prints a stack trace. A source level debugger allows inspection of
local and global variables, evaluation of arbitrary expressions, setting breakpoints, stepping
through the code a line at a time, and so on. The debugger is written in Python itself, testifying
to Python's introspective power. On the other hand, often the quickest way to debug a program
is to add a few print statements to the source: the fast edit-test-debug cycle makes this simple
approach very effective.
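The error-handling behaviour described above can be sketched in a few lines. The function and input below are our own illustration, not from the text: bad input raises an exception that the program can catch, rather than crashing the interpreter; if it were left uncaught, the interpreter would print a stack trace instead.

```python
# Minimal sketch: bad input raises an exception rather than causing a crash.
def parse_age(text):
    """Convert user input to an int; raises ValueError on bad input."""
    return int(text)

try:
    age = parse_age("forty-two")   # not a number -> ValueError
except ValueError as err:
    # The program catches the exception and continues running.
    print("Could not parse age:", err)
```

Running this prints a friendly message; removing the `try`/`except` would instead produce the stack trace mentioned above.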
Python can be used on a server to create web applications. It can be used alongside software to
create workflows. It can connect to database systems. It can also read and modify files. It can be
used to handle big data and perform complex mathematics. It is also used for rapid prototyping,
or for production-ready software development.
Python works on different platforms (Windows, Mac, Linux, Raspberry Pi, etc). It has a simple
syntax similar to the English language. It allows developers to write programs with fewer lines
than some other programming languages.
Python runs on an interpreter system, so that code can be executed as soon as it is written. This
means that prototyping can be very quick.
Python can be used in a procedural way, an object-oriented way, or a functional way. Python
is one of the most widely used languages on the web.
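To illustrate the three styles just mentioned, here is a small sketch (the names are ours, chosen for the example) that computes the same result — the sum of the squares of 1 to 5 — procedurally, with a class, and functionally:

```python
# Procedural style: explicit loop and accumulator.
def sum_squares_proc(n):
    total = 0
    for i in range(1, n + 1):
        total += i * i
    return total

# Object-oriented style: state and behaviour bundled in a class.
class SquareSummer:
    def __init__(self, n):
        self.n = n
    def total(self):
        return sum(i * i for i in range(1, self.n + 1))

# Functional style: map/sum with no mutable state.
def sum_squares_func(n):
    return sum(map(lambda i: i * i, range(1, n + 1)))

print(sum_squares_proc(5), SquareSummer(5).total(), sum_squares_func(5))
# prints: 55 55 55
```

All three produce the same answer; which style to use is a design choice Python leaves to the programmer.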
Easy-to-learn − Python has few keywords, simple structure, and a clearly
defined syntax. This allows the student to pick up the language quickly.
Easy-to-read − Python code is more clearly defined and visible to the eyes.
Easy-to-maintain − Python's source code is fairly easy-to-maintain.
A broad standard library − The bulk of Python's library is very portable and
cross-platform compatible on UNIX, Windows, and Macintosh.
Interactive Mode − Python has support for an interactive mode which allows
interactive testing and debugging of snippets of code.
Portable − Python can run on a wide variety of hardware platforms and has the
same interface on all platforms.
Extendable − You can add low-level modules to the Python interpreter. These
modules enable programmers to add to or customize their tools to be more
efficient.
Libraries
Python's large standard library, commonly cited as one of its greatest strengths, provides tools
suited to many tasks. For Internet-facing applications, many standard formats and protocols
such as MIME and HTTP are supported. It includes modules for creating graphical user
interfaces, connecting to relational databases, generating pseudorandom numbers, arithmetic
with arbitrary-precision decimals, manipulating regular expressions, and unit testing.
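A few of the standard-library areas named above can be exercised directly, with no third-party installation. This sketch (the input strings and precision are our own example values) touches arbitrary-precision decimals, pseudorandom numbers, and regular expressions:

```python
import decimal
import random
import re

# Arbitrary-precision decimal arithmetic (avoids binary floating-point error).
decimal.getcontext().prec = 30
third = decimal.Decimal(1) / decimal.Decimal(3)   # 0.3333... to 30 digits

# Pseudorandom numbers (seeded so the run is repeatable).
random.seed(42)
roll = random.randint(1, 6)

# Regular expressions: pull the user and domain out of an address.
match = re.search(r"(\w+)@(\w+)\.com", "contact: admin@example.com")

print(third, roll, match.group(1), match.group(2))
```

Each of these modules ships with every standard Python installation, which is what "batteries included" means in practice.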
Some parts of the standard library are covered by specifications (for example, the Web Server
Gateway Interface (WSGI) implementation wsgiref follows PEP 333), but most modules are
not. They are specified by their code, internal documentation, and test suites. However, because
most of the standard library is cross-platform Python code, only a few modules need altering or
rewriting for variant implementations.
The Python Package Index (PyPI), the official repository for third-party Python software,
contains around 300,000 packages with a wide range of functionality.
Besides the standard interactive interpreter, shells such as IDLE and IPython add further
abilities such as improved auto-completion, session-state retention, and syntax highlighting.
As well as standard desktop integrated development environments, there are Web browser-based
IDEs; SageMath (intended for developing science and math-related Python programs);
PythonAnywhere, a browser-based IDE and hosting environment; and Canopy IDE, a
commercial Python IDE emphasizing scientific computing.
Installation on Windows
After downloading and running the Windows installer, we can click on Customize installation to
choose the desired location and features. It is also important that the "Install launcher for
all users" option is checked.
Now, try to run Python from the command prompt. Type the command python --version (for
Python 3) to confirm the installation.
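As a cross-check after installation, the version can also be inspected from inside the interpreter itself using the standard `sys` and `platform` modules (a minimal sketch):

```python
import sys
import platform

# Report the running interpreter's version, e.g. "3.11.4".
print(platform.python_version())

# Fail loudly if an old Python 2 interpreter was picked up instead.
assert sys.version_info >= (3,), "Python 3 is required"
```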
5.13 Conclusion
This chapter has examined the area of Web application development from a software engineering
point of view. The Web is an attractive playground for software engineers, where you can
quickly release an application to millions of users and receive instant feedback. Web
application development requires agility, the use of standard components, interoperability,
and close attention to user needs. Indeed, one of the important features of popular Web
applications is support for user participation, which adds value to the application and lets
users collaborate with one another.