1.1 ABSTRACT
The need to send and receive huge files over the Internet is increasing day by day. The past few years have seen dramatic growth in Internet applications, including applications that involve the continuous transfer of files from a file server to clients. The risk is small when the file to be sent is small and few clients are connected. But consider a situation in which the file is large and a large number of clients are connected to the server. Suppose the server is broadcasting a huge file from its memory to a set of clients and around 80% of the file has been sent. If a new client now joins the group and requests the same file, the server has to send the complete file again from the beginning. Although this sounds simple, the server must do a great deal of extra work to resend the same file, and traffic on the network increases as well. In order to support a large population of clients, techniques that efficiently utilize server and network resources are essential. In designing such techniques, another important factor that must be taken into consideration is the service latency, i.e., the time a client has to wait until the object he/she has requested is sent.
In this section we propose a proxy-assisted file delivery architecture that employs a central-server-based periodic broadcast scheme to efficiently utilize central server and network resources, while at the same time exploiting proxy servers to significantly reduce the service latency experienced by clients. The data file broadcast from the server is split into packets, which are stored in the proxy while being delivered to the clients. When a new client arrives, its arrival time is noted, and the packets that are yet to be sent to the other clients are also sent to the new client. The packets that were already sent to the other clients are sent to the new client after the transmission of the remaining packets is over.
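The catch-up behaviour described above can be sketched as follows. This is an illustrative sketch, not code from the project; the function name and packet counts are assumptions for illustration.

```python
# Illustrative sketch: a late-joining client first receives the remaining
# live broadcast packets, then the proxy replays the packets it missed.

def broadcast_with_late_join(num_packets, join_at):
    """Return the order in which a client that joins while packet
    `join_at` is being broadcast receives the full file."""
    received = []
    # Phase 1: the client receives the remaining broadcast packets live.
    for seq in range(join_at, num_packets):
        received.append(seq)
    # Phase 2: the proxy replays the missed packets from its own store;
    # the central server never resends them.
    for seq in range(0, join_at):
        received.append(seq)
    return received

# A 10-packet file; a client joins when 80% has already been sent.
print(broadcast_with_late_join(10, 8))  # [8, 9, 0, 1, 2, 3, 4, 5, 6, 7]
```

The key point of the scheme is that the central server transmits each packet only once per broadcast round; the proxy, not the server, serves the wrap-around packets to the latecomer.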
AIM AND SCOPE OF PRESENT INVESTIGATION
1.2 AIM OF THE PROJECT
1.3 OBJECTIVE
In this project, we propose an Object Transmission Proxy (OTP) algorithm that provides a flexible way for a group of clients to obtain data from the source, while placing minimal workload on the server. We use multicasting, which is also a building block of many existing and emerging Internet applications, such as WebTV and distance learning, and has recently received a great deal of attention.
1.6 PROPOSED SYSTEM
Our proposed system is a proxy-assisted file delivery architecture that employs a central-server-based periodic broadcast scheme to efficiently utilize central server and network resources, while at the same time exploiting proxy servers to significantly reduce the service latency experienced by clients. It targets applications that involve the continuous transfer of files from a file server to clients. When the data files are broadcast, the server splits the files into packets, and these packets are stored in the proxy while being delivered to the clients. When a new client arrives, its arrival time is noted, and the packets that are yet to be sent to the other clients are also sent to the new client. The packets that were already sent to the other clients are sent to the new client after the remaining transmissions are over. The proxy, acting as a server, serves frequently accessed files, so network traffic is reduced and time is saved.
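The packet-splitting step mentioned above can be sketched as follows; the function names and the 4-byte packet size are assumptions for illustration, not the project's actual code.

```python
# Illustrative sketch: splitting a file's bytes into fixed-size packets,
# as the server does before handing them to the proxy, and reassembling
# them on the client side regardless of arrival order.

PACKET_SIZE = 4  # bytes per packet; real systems use ~1 KB or more

def split_into_packets(data, packet_size=PACKET_SIZE):
    """Split `data` into (sequence_number, chunk) pairs."""
    return [(i // packet_size, data[i:i + packet_size])
            for i in range(0, len(data), packet_size)]

def reassemble(packets):
    """Rebuild the file from packets, sorting by sequence number."""
    return b"".join(chunk for _, chunk in sorted(packets))

packets = split_into_packets(b"HELLO PROXY WORLD")
print(len(packets))                   # 5 packets of up to 4 bytes each
print(reassemble(reversed(packets)))  # b'HELLO PROXY WORLD'
```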
CHAPTER 2
MODULES
2.1 SERVER
In this module, the server is the administrator of all the processes. The server checks whether a user is authorized or not. When a client logs in to the Internet services, it supplies a user name and password; the server checks the user name and password against the service database, and if they are correct, the server grants authenticity to that client. This module therefore explains how a client is accepted into the Internet services.
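The credential check this module performs might look like the following sketch. The table contents are hypothetical, and a real deployment would store password hashes rather than plain text.

```python
# Illustrative sketch (assumed table contents): the server-side check of
# a client's user name and password against the service database.

SERVICE_DB = {"alice": "secret1", "bob": "secret2"}  # hypothetical entries

def authenticate(username, password):
    """Return True if the credentials match the service database."""
    return SERVICE_DB.get(username) == password

print(authenticate("alice", "secret1"))  # True: client is accepted
print(authenticate("alice", "wrong"))    # False: client is rejected
```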
2.2 CLIENT
In this module, the user first supplies a user name and password for security. The user then sends a request to retrieve a file and receives the file from the server. If the server is also communicating with another user, the server acts as a supervisor between those users, and the users share the file through the server. Every user accesses the server through a port and is identified by the port number. The requested file is stored on the client side in a specified domain.
2.3 MULTICASTING
[Figure: a server multicasting a file to clients C1–C6.]
FIG.2.3.1 MULTICASTING
Multicasting is the process of sending files to clients in groups. We use this multicasting process to form groups. For example, if we want to send a message to a large number of people, we use the multicasting process. Before doing so, we create a group for that purpose: we form a group under some name, and when we send a message we simply select that group name to proceed with the multicasting approach. Group formation is performed in this module.
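Group formation and group-addressed sending can be sketched as below. The class and method names are assumptions for illustration, and the group name "Chennai" is borrowed from the running procedure in Chapter 9.

```python
# Illustrative sketch (assumed names): forming named groups and sending
# a message to every member of a selected group, which is the essence of
# the multicasting module described above.

class GroupRegistry:
    def __init__(self):
        self.groups = {}  # group name -> set of member ids

    def form_group(self, name):
        self.groups.setdefault(name, set())

    def join(self, name, client_id):
        self.groups[name].add(client_id)

    def multicast(self, name, message):
        """Deliver `message` to every member of group `name`."""
        return {member: message for member in sorted(self.groups[name])}

registry = GroupRegistry()
registry.form_group("Chennai")
registry.join("Chennai", "client-1")
registry.join("Chennai", "client-2")
print(registry.multicast("Chennai", "packet-0"))
# {'client-1': 'packet-0', 'client-2': 'packet-0'}
```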
2.4 PROXY
[Figure: the proxy sits between the server and the clients.]
FIG.2.4.1 PROXY
2.5 TIME SCHEDULING
[Figure: the server splits the file into packets and delivers them through the proxy to the clients.]
In this module, packets are scheduled by time. If any client loses packets, those packets are identified by their time values and are then unicast to that client.
[Figure: the proxy unicasts missed packets to Client-1, Client-2, and Client-3.]
Unicast is one-to-one communication between the proxy and a single client. It is used here to send the missed packets to the affected clients, so every client eventually receives its missed packets. This recovery is performed by the proxy server; by using the proxy for retransmissions, we increase the main server's efficiency.
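The recovery step can be sketched as follows; the names and the contents of the packet store are assumptions for illustration.

```python
# Illustrative sketch (assumed names): the proxy tracks which sequence
# numbers a client has acknowledged, computes the missing ones, and
# unicasts only those packets back to that client.

def missing_packets(total, acked):
    """Sequence numbers a client has not acknowledged."""
    acked_set = set(acked)
    return [seq for seq in range(total) if seq not in acked_set]

def unicast_recovery(store, acked):
    """Return the {seq: packet} retransmissions for one client, served
    from the proxy's packet store rather than the main server."""
    return {seq: store[seq] for seq in missing_packets(len(store), acked)}

store = {0: b"aa", 1: b"bb", 2: b"cc", 3: b"dd"}  # proxy's packet store
print(unicast_recovery(store, acked=[0, 2]))      # {1: b'bb', 3: b'dd'}
```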
CHAPTER 3
BLOCK DIAGRAM
[Block diagram: the server performs packet splitting; packets pass through the proxy to Client 1, Client 2, and Client 3.]
3.2 FLOW CHART
[Flow chart: the overall process from Start to Stop.]
FIG.3.2.1 FLOW CHART
CHAPTER 4
LITERATURE SURVEY
Multi-rate video multicast over the best-effort Internet remains a work-in-progress area. We classify existing approaches according to the method of generating the multi-rate video: stream replication, cumulative layering, and non-cumulative layering. We then discuss representative techniques used in these approaches from both video coding and network transport perspectives. This specifically includes the efficient transmission of multi-rate video streams to a large group of heterogeneous receivers using the Internet multicast infrastructure. We also investigate the trade-offs of the approaches based on important design issues and performance criteria, including bandwidth economy, adaptation granularity, and coding complexity.
A number of issues remain to be addressed for such layered video broadcasting systems. At the session level, it
is not clear how to allocate bandwidth resources among competing video sessions.
For a session with a given bandwidth, questions such as how to set up the video
layering structure (i.e., number of layers) and how much bandwidth should be
allocated to each layer remain to be answered. The solutions to these questions
are further complicated by practical issues such as uneven popularity among video
sessions and video layering overhead. This paper presents a systematic study to
address these issues for a layered video broadcasting system in a wireless
environment. Our approach is to employ a generic utility function for each receiver
under each video session. We cast the joint problem of layering and bandwidth
allocation (among sessions and layers) into an optimization problem of total
system utility among all the receivers. By using a simple 2-step decomposition of
inter-session and intra-session optimization, we derive efficient algorithms to solve
the optimal layering and bandwidth allocation problem. Practical issues for
deploying the optimal algorithm in wireless networks are also discussed.
Simulation results show that the optimal layering and bandwidth allocation
improves the total system utility.
However, multi-layered encoding alone is not sufficient to achieve high
bandwidth utilization and high video quality, because network bandwidth
constraints often change over time. To improve the bandwidth utilization of the
network and optimize the quality of video obtained by each of the receivers, the
source must dynamically adjust the number of video layers it generates as well as
the rate at which each layer is transmitted. In order to do this, the source must
obtain congestion feedback from the network. We define a Source-Adaptive Multi-
layered Multicast (SAMM) algorithm as any multicast traffic control algorithm that
uses congestion feedback from the network to adapt the transmission rates of
multiple layers of data. In this paper, we introduce two novel and promising SAMM
algorithms: one in which congestion in the network is monitored at and indicated
by the network's intermediate nodes, and another in which the responsibility for
congestion control resides exclusively at the source and receivers. We refer to the
former as a network-based SAMM algorithm and the latter as an end-to-end
SAMM algorithm. Both SAMM algorithms are closed-loop; that is, video traffic
flows from the source to the receivers and a stream of congestion feedback flows
from the receivers back to the source.
CHAPTER 5
An indexing file is generated during the encoding process or by parsing the bit stream, and classification can thus be performed based on this file.
the video. Additional layers, called enhancement layers, contain data that progressively refine the reconstructed video quality. On the other hand, non-cumulative layering supposes all layers have the same priority, and any subset of the layers can be used for video reconstruction. Therefore, it yields higher flexibility than cumulative layering. The above multi-rate adaptation approaches all rely on end-to-end services, where adaptation is performed on end nodes (the sender or receivers). The argument for active services, however, is that many applications can best be supported or enhanced using information or intelligent services only available inside a network. For example, we can deploy several agents in a large-scale network; the agents partition the network into several confined regions, and each agent can thus handle the requirements from its local region much more easily. There are many trade-offs between end-to-end services and active services. In particular, the deployment of agents is mainly subject to network operators and service providers. In this article we focus only on multi-rate video multicast using end-to-end adaptation.
Multicast protocols target applications involving a large number of receivers with heterogeneous data reception capabilities. To accommodate heterogeneity, the sender may transmit at multiple rates, requiring mechanisms to determine the rates and to allocate receivers to them. In this paper, we develop protocols to control the rates of a multicast session with the goal of maximizing inter-receiver fairness, an intra-session measure that captures the collective satisfaction of the session's receivers. Our target environment is the Internet, where fair sharing of bandwidth must be achieved via end-system mechanisms and fairness to TCP is important. We develop and evaluate protocols that maximize this measure by maintaining a base-rate group and a variable-rate group. We show that our scheme offers improvements over single-rate sessions, while maintaining TCP-friendliness.
concealment features in MPEG-4, our algorithm yields finer granularity for rate
control than the basic temporal scalability.
CHAPTER 6
6.1 DATA FLOW DIAGRAM
[Data flow diagram: a user logs in to the server with a user ID and password; the server's group management adds or removes clients from groups; the user selects a group, browses a video file, and sends it to the proxy; the proxy delivers the data to the clients of the selected group in the form of packets, and the clients acknowledge the files received.]
6.2 MULTICAST
FIG.6.2.1 MULTICAST
FIG.6.2.2 IP MULTICAST
[Figure: multicast transmission sends a single multicast packet addressed to all intended recipients.]
6.3 MULTICAST GROUP CONCEPTS
Multicast addresses specify an arbitrary group of IP hosts that have joined the
group and want to receive traffic sent to this group.
In this example, multicast traffic from the source hosts A and D travels to the
root (Router D) and then down the shared tree to the two receivers, hosts B and C.
Because all sources in the multicast group use a common shared tree, a wildcard
notation written as (*, G), pronounced "star comma G," represents the tree. In this
case, * means all sources, and the G represents the multicast group. Therefore,
the shared tree shown in Figure would be written as (*, 224.2.2.2). Both SPT and
shared trees are loop-free. Messages are replicated only where the tree branches.
Members of multicast groups can join or leave at any time, so the distribution
trees must be dynamically updated. When all the active receivers on a particular
branch stop requesting the traffic for a particular multicast group, the routers
prune that branch from the distribution tree and stop forwarding the traffic down
that branch. If one receiver on that branch becomes active and requests the
multicast traffic, the router dynamically modifies the distribution tree and starts
forwarding traffic again.
Shortest path trees have the advantage of creating the optimal path between
the source and the receivers. This guarantees the minimum amount of network
latency for forwarding multicast traffic. This optimization does come with a price,
though: The routers must maintain path information for each source. In a network
that has thousands of sources and thousands of groups, this can quickly become a
resource issue on the routers. Memory consumption from the size of the multicast
routing table is a factor that network designers must take into consideration.
Shared trees have the advantage of requiring the minimum amount of state in
each router. This lowers the overall memory requirements for a network that allows
only shared trees. The disadvantage of shared trees is that, under certain
circumstances, the paths between the source and receivers might not be the
optimal paths—which might introduce some latency in packet delivery. Network
designers must carefully consider the placement of the RP (rendezvous point) when
implementing an environment with only shared trees.
6.5 MULTICAST FORWARDING
In unicast routing, traffic is routed through the network along a single path from
the source to the destination host. A unicast router does not really care about the
source address—it only cares about the destination address and how to forward
the traffic towards that destination. The router scans through its routing table and
then forwards a single copy of the unicast packet out the correct interface in the
direction of the destination.
In multicast routing, the source is sending traffic to an arbitrary group of hosts
represented by a multicast group address. The multicast router must determine
which direction is upstream (toward the source) and which direction (or directions)
is downstream. If there are multiple downstream paths, the router replicates the
packet and forwards the traffic down the appropriate downstream paths—which is
not necessarily all paths. This concept of forwarding multicast traffic away from the
source, rather than to the receiver, is called reverse path forwarding.
When a multicast packet arrives at a router, the router performs an RPF check
on the packet. If the RPF check is successful, the packet is forwarded. Otherwise,
it is dropped. For traffic flowing down a source tree, the RPF check procedure
works as follows:
If the packet has arrived on the interface leading back to the source, the RPF
check is successful and the packet is forwarded.
Figure shows an example of an unsuccessful RPF check.
Figure shows an example of a successful RPF check.
This time the multicast packet has arrived on S1. The router checks the unicast
routing table and finds that S1 is the correct interface. The RPF check passes and
the packet is forwarded.
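The RPF check walked through above can be sketched as a table lookup. The routing-table entries and interface names below are hypothetical, chosen to match the S0/S1 example in the text.

```python
# Illustrative sketch (assumed table contents): the RPF check a multicast
# router applies to each arriving packet, using its unicast routing table.

UNICAST_ROUTES = {            # source network -> interface leading back to it
    "151.10.0.0/16": "S1",
    "198.14.32.0/24": "S0",
}

def rpf_check(source_network, arrival_interface):
    """Forward the packet only if it arrived on the interface that leads
    back toward the source; otherwise it is dropped."""
    return UNICAST_ROUTES.get(source_network) == arrival_interface

print(rpf_check("151.10.0.0/16", "S1"))  # True: RPF check passes, forward
print(rpf_check("151.10.0.0/16", "S0"))  # False: RPF check fails, drop
```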
In the PIM (Protocol Independent Multicast) Sparse mode model, multicast
sources and receivers must register with their local Rendezvous Point (RP).
Actually, the closest router to the sources or receivers registers with the RP but the
point is that the RP knows about all the sources and receivers for any particular
group. RPs in other domains have no way of knowing about sources located in
other domains. MSDP is an elegant way to solve this problem. MSDP is a
mechanism that connects PIM-SM domains and allows RPs to share information
about active sources. When RPs in remote domains know about active sources,
they can pass that information on to their local receivers, and multicast data can be
forwarded between the domains. A nice feature of MSDP is that it allows each
domain to maintain an independent RP which does not rely on other domains, but
it does enable RPs to forward traffic between domains.
The RP in each domain establishes an MSDP peering session using a TCP connection
with the RPs in other domains or with border routers leading to the other domains. When
the RP learns about a new multicast source within its own domain (through the normal
PIM register mechanism), the RP encapsulates the first data packet in a Source Active (SA)
message and sends the SA to all MSDP peers. The SA is forwarded by each receiving peer
using a modified RPF check, until it reaches every MSDP router in the interconnected
networks—theoretically the entire multicast internet. If the receiving MSDP peer is an RP,
and the RP has a (*, G) entry for the group in the SA (there is an interested receiver), the
RP will create (S, G) state for the source and join to the shortest path tree for the state of
the source. The encapsulated data is decapsulated and forwarded down that RP's shared
tree. When the packet is received by a receiver's last hop router, the last-hop may also join
the shortest path tree to the source. The source's RP periodically sends SAs, which include
all sources within that RP's own domain. Figure shows how data would flow between a
source in domain A to a receiver in domain E.
6.7 MSDP
[Figure: MSDP example showing data flowing from a source in domain A to a receiver in domain E.]
MSDP was developed for peering between Internet Service Providers (ISPs).
ISPs did not want to rely on an RP maintained by a competing ISP to service their
customers. MSDP allows each ISP to have their local RP and still forward and
receive multicast traffic to the Internet.
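The SA-flooding behaviour described above can be sketched as follows. This is a heavily simplified model, not an MSDP implementation: the `seen` set stands in for MSDP's modified RPF check, and all names are assumptions for illustration.

```python
# Illustrative sketch: MSDP Source Active (SA) flooding between the RPs
# of PIM-SM domains. An RP that learns of a new source floods an SA to
# its peers; a peer with an interested receiver (a (*, G) entry) for
# that group creates (S, G) state.

class RP:
    def __init__(self, name):
        self.name = name
        self.peers = []
        self.star_g = set()      # groups with local interested receivers
        self.s_g_state = set()   # (source, group) state created from SAs

    def receive_sa(self, source, group, seen):
        if self.name in seen:
            return               # stand-in for the modified RPF check
        seen.add(self.name)
        if group in self.star_g:
            self.s_g_state.add((source, group))
        for peer in self.peers:  # flood the SA onward
            peer.receive_sa(source, group, seen)

a, e = RP("A"), RP("E")
a.peers, e.peers = [e], [a]
e.star_g.add("224.2.2.2")                      # domain E has a receiver
a.receive_sa("10.1.1.1", "224.2.2.2", set())   # source appears in domain A
print(e.s_g_state)  # {('10.1.1.1', '224.2.2.2')}
```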
6.8 IP MULTICAST
The most common transport layer protocol to use multicast addressing is User
Datagram Protocol (UDP). By its nature, UDP is not reliable—messages may be
lost or delivered out of order. Reliable multicast protocols such as Pragmatic
General Multicast (PGM) have been developed to add loss detection and
retransmission on top of IP multicast.
CHAPTER 7
Still, the large state requirements in routers make applications using a large
number of trees unworkable using IP multicast. Take presence information as an
example where each person needs to keep at least one tree of its subscribers if
not several. No mechanism has yet been demonstrated that would allow the IP
multicast model to scale to millions of senders and millions of multicast groups
and, thus, it is not yet possible to make fully-general multicast applications
practical. For these reasons, and also reasons of economics, IP multicast is not in
general use in the commercial Internet backbone. The increasing availability of
WiFi Access Points that support multicast IP is facilitating the emergence of
WiCast WiFi Multicast which allows the binding of data to geographical locations.
Other multicast technologies not based on IP multicast are more widely used.
Notably the Internet Relay Chat (IRC), which is more pragmatic and scales better
for large numbers of small groups. IRC implements a single spanning tree across
its overlay network for all conference groups. This leads to suboptimal routing for
some of these groups, however. Additionally, IRC keeps a large amount of
distributed state, which limits growth of an IRC network, leading to fragmentation into
several non-interconnected networks. The lesser known PSYC technology uses
custom multicast strategies per conference. Also some peer-to-peer technologies
employ the multicast concept when distributing content to multiple recipients.
CHAPTER 8
PROXY SERVER
A proxy server sits between a client application, such as a Web browser, and a
real server. It intercepts all requests to the real server to see if it can fulfill the
requests itself; if not, it forwards the request to the real server. Proxy servers have
two main purposes.
Proxy servers can dramatically improve performance for groups of users,
because they save the results of all requests for a certain amount of time.
Consider the case where both user X and user Y access the World Wide Web
through a proxy server. First user X requests a certain Web page, which we'll call
Page 1. Sometime later, user Y requests the same page. Instead of forwarding the
request to the Web server where Page 1 resides, which can be a time-consuming
operation, the proxy server simply returns the Page 1 that it already fetched for
user X. Since the proxy server is often on the same network as the user, this is a
much faster operation. Real proxy servers support hundreds or thousands of
users. The major online services such as America Online, MSN and Yahoo, for
example, employ an array of proxy servers.
Proxy servers can also be used to filter requests. For example, a company
might use a proxy server to prevent its employees from accessing a specific set of
Web sites.
When it receives a request for a Web resource (specified by a URL), a caching
proxy looks for the resulting URL in its local cache. If it is found, it will return the
document immediately. Otherwise it fetches it from the remote server, returns it to
the requester and saves a copy in the cache. The cache usually uses an expiry
algorithm to remove documents from the cache, according to their age, size, and
access history. Two simple cache algorithms are Least Recently Used (LRU) and
Least Frequently Used (LFU). LRU removes the least-recently used documents,
and LFU removes the least-frequently used documents.
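The LRU policy mentioned above can be sketched in a few lines; the URLs and the capacity are arbitrary examples, not from the report.

```python
# Illustrative sketch: a tiny Least Recently Used (LRU) cache of the kind
# a caching proxy uses to expire documents. OrderedDict keeps entries in
# access order; the least recently used entry is evicted first.

from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()   # url -> document

    def get(self, url):
        if url not in self.entries:
            return None                # cache miss: fetch from origin
        self.entries.move_to_end(url)  # mark as most recently used
        return self.entries[url]

    def put(self, url, document):
        self.entries[url] = document
        self.entries.move_to_end(url)
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)  # evict least recently used

cache = LRUCache(capacity=2)
cache.put("/page1", "<html>1</html>")
cache.put("/page2", "<html>2</html>")
cache.get("/page1")                   # touch page1, so page2 is now LRU
cache.put("/page3", "<html>3</html>") # evicts /page2
print(cache.get("/page2"))            # None: it was evicted
print(cache.get("/page1"))            # <html>1</html>: still cached
```

An LFU variant would instead keep a hit counter per entry and evict the entry with the smallest count.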
Web proxies can also filter the content of Web pages served. Some
censorware applications - which attempt to block offensive Web content - are
implemented as Web proxies. Other web proxies reformat web pages for a specific
purpose or audience; for example, Skweezer reformats web pages for cell phones
and PDAs. Network operators can also deploy proxies to intercept computer
viruses and other hostile content served from remote Web pages.
A special case of web proxies are "CGI proxies." These are web sites which
allow a user to access a site through them. They generally use PHP or CGI to
implement the proxying functionality. CGI proxies are frequently used to gain
access to web sites blocked by corporate or school proxies. Since they also hide
the user's own IP address from the web sites they access through the proxy, they
are sometimes also used to gain a degree of anonymity.
8.3.2 TRANSPARENT PROXY SERVERS
This type of proxy server identifies itself as a proxy server and also makes the original IP address available through the HTTP headers. These are generally used for their ability to cache websites and do not effectively provide any anonymity to those who use them. However, the use of a transparent proxy will get you around simple IP bans. They are transparent in the sense that your IP address is exposed, not in the sense that you are unaware of using it (your system is not specifically configured to use it).
8.3.3 ANONYMOUS PROXY SERVERS
This type of proxy server identifies itself as a proxy server, but does not
make the original IP address available. This type of proxy server is detectable,
but provides reasonable anonymity for most users.
8.3.4 DISTORTING PROXY SERVERS
This type of proxy server identifies itself as a proxy server, but makes an incorrect original IP address available through the HTTP headers.
8.3.5 HIGH ANONYMITY PROXY SERVERS
This type of proxy server does not identify itself as a proxy server and does not make the original IP address available.
CHAPTER 9
RUNNING PROCEDURE
ADMINISTRATIVE TOOLS
STEP 1:
Start the server window.
STEP 2:
Start the login window. A new user can click the “New user click here” button for registration.
Fill in the columns provided for registration, then click the “Registration” button to register. A dialog box will ask the user to wait for some time.
If the same user tries to enter the login page before being authenticated by the server, a dialog box showing “Invalid user” is displayed.
STEP 5:
Now the server side will check the users using the user profile.
STEP 6:
Now the server will refer to the user profile and allow the user authentication by clicking the “Accept” button. A dialog box will be displayed.
STEP 8:
Now the user can log in to the page by selecting the group (Chennai).
STEP 9:
STEP 10:
Now start the proxy. The proxy will decide to send the file to a particular
group (Chennai). After selecting the group, click the “Start” button. The port
number will be displayed.
STEP 11:
STEP 12:
STEP 13:
STEP 14:
After clicking the send button, the file will be transferred. The file
name and file size will be displayed in the file content.
STEP 15:
On the proxy side, the file will be split into packets. The proxy will send the
packets to the clients.
STEP 16:
On the login side, click the client status tab. The file received in
packets is seen for the group selected by the proxy. The Chennai group will get
the file.
STEP 17:
A person who belongs to another group won’t get the file.
STEP 18:
The proxy is used to send the data to another group also, so click
“Refresh”.
STEP 19:
Select the group to send the data to and click the “Start” button.
STEP 20:
STEP 21:
STEP 22:
STEP 23:
STEP 24:
The received file is seen in the “client files” tab on the login page.
CHAPTER 10
10.1.2 APPLICATIONS
10.1.3 CONCLUSION
Most of the work on proxy servers deals with decreasing the server workload. In
this project we proposed a proxy-assisted file delivery architecture that employs a
central-server-based periodic broadcast scheme to efficiently utilize central server
and network resources, while at the same time exploiting proxy servers to
significantly reduce the service latency experienced by clients.
CHAPTER 11
BIBLIOGRAPHY
[1] B. Li and J. Liu, “Multi-rate video multicast over the Internet: an overview,”
IEEE Network, vol. 17, no. 1, pp. 24–29, Jan. 2003.
[5] J. Liu, B. Li, Y.-T. Hou, and I. Chlamtac, “On optimal layering and bandwidth
allocation for multi-session video broadcasting,” IEEE Transactions on Wireless
Communications, vol. 3, no. 3, pp. 656–667, Mar. 2004.