CHAPTER 1 ABSTRACT

1.1 ABSTRACT

The need to send and receive large files over the Internet is growing day by day. Over the past few years there has been dramatic growth in Internet applications, many of which involve the continuous transfer of files from a file server to clients. The risk is small when the file is small and only a few clients are connected. Consider, however, a situation in which the file is large and a large number of clients are connected to the server. Suppose the server is broadcasting a huge file from its memory to a set of clients and about 80% of the file has already been sent. If a new client joins the group and requests the same file, the server must send the complete file again from the beginning. Although this sounds simple, the server must do a great deal of extra work to resend the file, and network traffic increases as well. To support a large population of clients, techniques that efficiently utilize server and network resources are essential. In designing such techniques, another important factor that must be taken into consideration is the service latency, i.e., the time a client has to wait until the requested object is sent. In this section we propose a proxy-assisted file delivery architecture that employs a central-server-based periodic broadcast scheme to efficiently utilize central server and network resources, while at the same time exploiting proxy servers to significantly reduce the service latency experienced by clients. The data file broadcast from the server is split into packets, which are stored at the proxy while being delivered to the clients. When a new client arrives, its arrival time is noted and the packets yet to be sent to the other clients are also sent to the new client; the packets that have already been sent to the other clients are sent to the new client after the remaining transmissions are over.


AIM AND SCOPE OF PRESENT INVESTIGATION
1.2 AIM OF THE PROJECT

In this project we introduce the concept of an object transmission proxy (OTP). A proxy is a computer system or router that breaks the connection between sender and receiver. Functioning as a relay between client and server, proxy servers help prevent an attacker from invading a private network and are one of several tools used to build a firewall. The word proxy means "to act on behalf of another," and a proxy server acts on behalf of the user. All requests from clients to the Internet go to the proxy server first. The proxy evaluates the request and, if it is allowed, re-establishes it on the outbound side to the Internet. Likewise, responses from the Internet go to the proxy server to be evaluated, and the proxy then relays them to the client. Both client and server think they are communicating with one another but are, in fact, dealing only with the proxy. An OTP filters incoming data using object-based bandwidth adaptation to meet dynamic network conditions. Multiple OTPs can form an overlay network that interconnects diverse multicast islands with semi-uniform demands within each single island. We concur with the wisdom that an application best knows the utility of its data. Hence, the bandwidth-adaptation algorithm for the OTPs adaptively allocates bandwidth among video objects according to their respective utilities and then performs application-level filtering based on an effective stream classification and packetization scheme. It is particularly suitable for object-based video multicasting, where the objects are of different importance. We also overcome a limitation of the currently existing proxy, which performs only client-side operations: when a new client logs on to the network in the middle of a file transfer, our proxy sends the complete file to the new client, whereas the existing proxy does not, so the new user receives only a minimum number of packets and packet loss occurs.


1.3 OBJECTIVE

In this project we propose the Object Transmission Proxy (OTP) algorithm, which provides a flexible way for a group to obtain data from the source while placing a minimum workload on the server. We use multicasting, which is a building block of many existing and emerging Internet applications, such as WebTV and distance learning, and has recently received a great deal of attention.

1.4 TITLE JUSTIFICATION

The project entitled “A PROXY-ASSISTED ADAPTATION FRAMEWORK FOR OBJECT VIDEO MULTICASTING” provides an improved way of transferring files in a network. It allows files to be sent over the network so that every user in the network receives the file at the same time. Through proxy-assisted adaptation we reduce the workload of the server: the proxy acts as a second server and transmits the data. The data is split into packets and transmitted, and one can check whether the data has been completely sent and whether the packets have been completely received by the client. If a new client enters the group, the complete file is sent to that client after the file has been sent to the existing clients.

1.5 EXISTING SYSTEM

The existing system cannot perform both unicast and multicast operations; it performs only client-side operations. If any client logs out of or logs into a group while a file is being sent, that event has to be handled on the server side, so the server carries a heavy workload and its efficiency is reduced. A newcomer to the group cannot immediately receive the part of the transfer it missed, which adds further load to the server. The existing system also cannot perform multicasting. The main problem in the current network system is that sending packets to a late-joining client is difficult, because it cannot be assumed that all members are in the group from the start. Frequently accessed files are also difficult to retrieve from a group with the current proxy system.

1.6 PROPOSED SYSTEM

In our proposed system, a proxy-assisted file delivery architecture employs a central-server-based periodic broadcast scheme to efficiently utilize central server and network resources, while at the same time exploiting proxy servers to significantly reduce the service latency experienced by clients. Many applications involve the continuous transfer of files from a file server to clients. When the data file is broadcast, the server splits it into packets, and those packets are stored at the proxy while being delivered to the clients. When a new client arrives, its arrival time is noted and the packets yet to be sent to the other clients are also sent to the new client; the packets that have already been sent to the other clients are sent to the new client after the remaining transmissions are over. The proxy, acting as a server for frequently accessed files, handles the multicasting, so network traffic is reduced and time is saved.

1.7 FUTURE ENHANCEMENT

The data file broadcast from the server is split into packets, which are stored at the proxy while being delivered to the clients. When a new client arrives, its arrival time is noted and the packets yet to be sent to the other clients are also sent to the new client; the packets already sent to the other clients are sent to the new client after the remaining transmissions are over. This provides a good working environment for both the central server and the clients.


CHAPTER 2 MODULES
2.1 SERVER AUTHENTICATION

In this module the server is the administrator of all processes. The server alone checks whether a user is authorized. When a client logs in to the Internet service it supplies a user name and password; the server checks them against the service database and, if they are correct, grants that client access. This module therefore describes how a client is accepted into the Internet service.

2.2 CLIENT AUTHENTICATION

In this module the user first supplies a user name and password for security. The user then sends a request for a file and receives it from the server. If the server is also communicating with another user, it acts as a supervisor between those users, who share the file through the server. Every user accesses the server through a port identified by its port number. The requested file is stored in our specified location on the client side.

2.3 MULTICASTING

(Figure: the server multicasts to three groups, Group-1, Group-2 and Group-3, containing the clients C1 to C6.)

FIG.2.3.1 MULTICASTING


Multicasting is the process of sending files to a set of receivers organized as a group. We use this multicasting process after forming groups. For example, if we want to send a message to many people at once, we use multicasting: we first create a group under some name, and when we send a message we simply select that group name to carry out the multicast. Group formation is performed in this module.
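As a minimal sketch of how such a group send could look, assuming Java and an arbitrary multicast address and port (neither is specified in this report), the sender writes each packet once to a group address that all group members listen on:

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

public class GroupSender {
    public static void main(String[] args) throws Exception {
        // Hypothetical group address and port; the report does not fix these values.
        InetAddress group = InetAddress.getByName("230.0.0.1");
        int port = 4446;
        byte[] payload = "packet 1 of the video file".getBytes("UTF-8");

        try (DatagramSocket socket = new DatagramSocket()) {
            // Every host that has joined 230.0.0.1:4446 receives this single send.
            socket.send(new DatagramPacket(payload, payload.length, group, port));
        }
    }
}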

2.4 PROXY

(Figure: Server → Proxy → Client.)

FIG.2.4.1 PROXY

The proxy acts as a server and can be called a virtual server. The proxy monitors every packet and every process for each client, and it keeps a store of the packets it has already handled. When a client misses a packet, that packet is retrieved from the proxy, which then carries out a unicast to the affected client.
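A minimal sketch of the proxy-side packet store, assuming Java; the class name and structure are hypothetical, since the report does not describe its implementation:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical helper: keeps every relayed packet so missed ones can be re-sent.
public class PacketStore {
    private final Map<Integer, byte[]> packets = new ConcurrentHashMap<>();

    // Called while the proxy relays the broadcast to the group.
    public void remember(int sequenceNumber, byte[] data) {
        packets.put(sequenceNumber, data.clone());
    }

    // Called when a client reports a missed sequence number; returns null if unknown.
    public byte[] lookup(int sequenceNumber) {
        return packets.get(sequenceNumber);
    }
}

The proxy would call remember() for every packet it relays and lookup() when a client later asks for a packet it missed.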


2.5 TIME SCHEDULING

(Figure: the server splits the file into packets, sets a time for each packet, and passes the packets to the proxy.)

FIG.2.5.1 TIME SCHEDULING

In this module packets are scheduled by time. Each packet is given a time value, and when a client loses a packet, that time value is used while the proxy unicasts the packet to that client again.
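The time-based scheduling could be sketched as follows, assuming Java; the 50 ms interval is an illustrative value only, and the actual network send is left as a comment:

import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class TimedSender {
    // Sends one packet every 50 milliseconds and records the send time of each one.
    static void sendOnSchedule(List<byte[]> packets) {
        ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();
        AtomicInteger next = new AtomicInteger(0);
        timer.scheduleAtFixedRate(() -> {
            int i = next.getAndIncrement();
            if (i >= packets.size()) {
                timer.shutdown();
                return;
            }
            long sentAt = System.currentTimeMillis();   // time value recorded for this packet
            System.out.println("packet " + i + " scheduled at " + sentAt);
            // the actual network send of packets.get(i) would go here
        }, 0, 50, TimeUnit.MILLISECONDS);
    }
}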

2.6 UNICASTING OF MISSED PACKETS

(Figure: the proxy unicasts missed packets to Client-1, Client-2 and Client-3.)

FIG.2.6.1 UNICASTING OF MISSED PACKETS


Unicast is the process of communicating from one node directly to another. In this module it is used to send the missed packets to the affected clients, so every client eventually receives the packets it missed. This process is performed by the proxy server, which increases the efficiency of the main server.
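A minimal sketch of this proxy-side unicast repair, assuming Java over UDP and reusing the hypothetical PacketStore sketched in Section 2.4; the NAK format (a packet whose first four bytes carry the missed sequence number) is an assumption, since the report does not define it:

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.nio.ByteBuffer;

public class NakListener {
    // Waits for a NAK carrying a sequence number and unicasts the stored packet back.
    static void serveRetransmissions(DatagramSocket socket, PacketStore store) throws Exception {
        byte[] buf = new byte[4];
        while (true) {
            DatagramPacket nak = new DatagramPacket(buf, buf.length);
            socket.receive(nak);                                  // blocks until the next NAK arrives
            int missing = ByteBuffer.wrap(nak.getData(), 0, 4).getInt();
            byte[] data = store.lookup(missing);
            if (data != null) {
                // Unicast straight back to the client that sent the NAK.
                socket.send(new DatagramPacket(data, data.length,
                        nak.getAddress(), nak.getPort()));
            }
        }
    }
}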


CHAPTER 3 BLOCK DIAGRAM

3.1 BLOCK DIAGRAM OF VIDEO MULTICASTING

(Figure: the server performs packet splitting and passes the packets to the proxy, which delivers them to Client 1, Client 2 and Client 3.)

FIG.3.1.1 BLOCK DIAGRAM OF VIDEO MULTICASTING


3.2 FLOW CHART
The flow chart below summarizes the sequence of operations:

1. Start: start the server.
2. The server splits the file into a number of packets.
3. The packets are sent to the proxy server.
4. The proxy stores the packets temporarily.
5. The proxy checks the existing users dynamically.
6. The packets are sent to the existing users, and each client receives them serially.
7. A client sends a negative acknowledgement for any unreceived packets.
8. The proxy receives the negative acknowledgement.
9. The proxy sends the missed packets from its temporary storage.
10. The client receives the missed packets.
11. Stop.

FIG.3.2.1 FLOW CHART
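Step 2 of the flow above, splitting the file into packets, could look like the sketch below, assuming Java; the file name and the 1024-byte packet size are illustrative values only:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class FileSplitter {
    // Reads the whole file and cuts it into fixed-size packets.
    static List<byte[]> split(String path, int packetSize) throws IOException {
        byte[] all = Files.readAllBytes(Paths.get(path));
        List<byte[]> packets = new ArrayList<>();
        for (int offset = 0; offset < all.length; offset += packetSize) {
            int end = Math.min(all.length, offset + packetSize);
            packets.add(Arrays.copyOfRange(all, offset, end));
        }
        return packets;
    }

    public static void main(String[] args) throws IOException {
        // "video.mpg" and 1024 bytes are illustrative values only.
        List<byte[]> packets = split("video.mpg", 1024);
        System.out.println("File split into " + packets.size() + " packets");
    }
}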


CHAPTER 4 LITERATURE SURVEY
Given the rapid development and deployment of multimedia applications and the multi-receiver nature of video programs, real-time video distribution has emerged as one of the most important IP multicast applications. It is also an essential component of many current and emerging Internet applications, such as videoconferencing and distance learning, and thus has received a great deal of attention. The Internet’s intrinsic heterogeneity, however, makes video multicast a challenging problem. In traditional end-to-end adaptation schemes, the sender adjusts its transmission rate according to some feedback from its receiver. In a multicast environment, this solution tends to be suboptimal because there is no single target rate for a group of heterogeneous receivers. In other words, some receivers would be unfairly treated, and, at some branches of the multicast tree, the single-rate video stream would compete for bandwidth unfairly with other adaptive traffic, such as TCP flows. It is thus necessary to use multi-rate video multicast, in which receivers in a multicast session can receive video data at different rates according to their respective bandwidths or processing capabilities. Multi-rate video multicast over the best-effort Internet remains a work-in-progress area. We classify existing approaches according to the methods of generating the multi-rate video: stream replication, cumulative layering, and non-cumulative layering. We then discuss representative techniques used in these approaches from both video coding and network transport perspectives. This specifically includes the efficient transmission of multi-rate video streams to a large group of heterogeneous receivers using the Internet multicast infrastructure. We also investigate the trade-offs of the approaches based on important design issues and performance criteria, including bandwidth economy, adaptation granularity, and coding complexity. For video broadcasting applications in a wireless environment, layered transmission is an effective approach to support heterogeneous receivers with varying bandwidth requirements. There are several important issues that need to
be addressed for such layered video broadcasting systems. At the session level, it is not clear how to allocate bandwidth resources among competing video sessions. For a session with a given bandwidth, questions such as how to set up the video layering structure (i.e., the number of layers) and how much bandwidth should be allocated to each layer remain to be answered. The solutions to these questions are further complicated by practical issues such as uneven popularity among video sessions and video layering overhead. This paper presents a systematic study to address these issues for a layered video broadcasting system in a wireless environment. Our approach is to employ a generic utility function for each receiver under each video session. We cast the joint problem of layering and bandwidth allocation (among sessions and layers) into an optimization problem of total system utility among all the receivers. By using a simple two-step decomposition of inter-session and intra-session optimization, we derive efficient algorithms to solve the optimal layering and bandwidth allocation problem. Practical issues for deploying the optimal algorithm in wireless networks are also discussed. Simulation results show that the optimal layering and bandwidth allocation improves the total system utility. Multicast protocols target applications involving a large number of receivers with heterogeneous data reception capabilities. Depending on the type of application, this heterogeneity can be accommodated in one of two ways. In a single-rate session, the source adjusts its sending rate based on feedback it receives from the network and/or other receivers. In a typical single-rate protocol the rate is picked to match the lowest-capacity receiver (or path to a receiver). In a multi-rate session the sender can transmit at different rates to different receivers through layering or destination-set splitting. In either case there needs to be criteria for the setting of the session rate(s) and the allocation of receivers to the rates (in the case of multi-rate sessions), as well as protocols for implementing the appropriate rate settings and allocations. In this paper we aim to develop a protocol to control the rate of a multicast session with the goal of maximizing inter-receiver fairness, an intra-session measure that captures the collective satisfaction of the session receivers based on the rate at which they are receiving and the data loss they are experiencing. We also strive to achieve inter-session fairness among similarly controlled multicast sessions and between multicast sessions and TCP sessions.


However, multi-layered encoding alone is not sufficient to achieve high bandwidth utilization and high video quality, because network bandwidth constraints often change over time. To improve the bandwidth utilization of the network and optimize the quality of video obtained by each of the receivers, the source must dynamically adjust the number of video layers it generates as well as the rate at which each layer is transmitted. In order to do this, the source must obtain congestion feedback from the network. We define a Source-Adaptive Multi-layered Multicast (SAMM) algorithm as any multicast traffic control algorithm that uses congestion feedback from the network to adapt the transmission rates of multiple layers of data. In this paper, we introduce two novel and promising SAMM algorithms: one in which congestion in the network is monitored at and indicated by the network's intermediate nodes, and another in which the responsibility for congestion control resides exclusively at the source and receivers. We refer to the former as a network-based SAMM algorithm and the latter as an end-to-end SAMM algorithm. Both SAMM algorithms are closed-loop; that is, video traffic flows from the source to the receivers and a stream of congestion feedback flows from the receivers back to the source.


CHAPTER 5 OVERVIEW OF THE PROJECT
 In MPEG-4, a video stream is structured into video packets (VPs) with individual synchronization markers. Each VP is a self-contained decoding unit and, given some threshold, its size can be well controlled in encoding. Therefore, a VP (or several VPs) is often suggested to serve as the payload of an RTP (Real-time Transport Protocol) packet. A data-partitioning mode can be enabled to further separate the shape, motion, or texture data in a VP by a DC Marker or Motion Marker. As such, a stream can be effectively resynchronized in the presence of bit errors.
 For Internet-based transmission, most errors are caused by packet loss, in which a VP is completely lost. Consequently, the error isolation capability of the DC or Motion Marker becomes useless. Since the data contained in a VP, such as shape, motion, and texture, have different levels of importance to decoding, it is beneficial to classify them in a finer granularity beyond the coarse-grained classification of VOP types (I, P, and B VOPs). The different types of data can then be handled more promptly during transmission.
 To this end, we classify the data in a bit stream into several categories according to VP boundaries as well as DC or Motion Markers, and assign priorities to the categories to differentiate their importance, as shown in the table below.

TABLE NO. 5.1 STREAM CLASSIFICATION AND PRIORITIZATION
PRIORITY    CATEGORY
1           Control data and VO headers
2           IVOP shape and texture DC
3           IVOP texture AC
4           PVOP shape and motion
5           PVOP texture
6           BVOP

We refer to the content belonging to the same category in a VP as a data item. For each video object stream, we use a file to record the indexing information of all the data items, including their categories, starting positions, and sizes. This

indexing file is generated during the encoding process or by parsing the bit stream, and classification thus can be performed based on this file.
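A minimal sketch of one entry of such an indexing file, assuming Java; the field layout is hypothetical, since the report does not give the actual file format:

// Hypothetical record for one entry of the indexing file described above.
public class DataItemIndex {
    final int category;       // 1..6, following Table 5.1
    final long startPosition; // byte offset of the data item in the bit stream
    final int size;           // size of the data item in bytes

    DataItemIndex(int category, long startPosition, int size) {
        this.category = category;
        this.startPosition = startPosition;
        this.size = size;
    }

    // A lower category number means a higher priority, following Table 5.1.
    boolean hasHigherPriorityThan(DataItemIndex other) {
        return this.category < other.category;
    }
}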

 After the classification and prioritization, the data items are encapsulated into application level packets (ALPs).We limit the size of an ALP to be less than the maximum transmission unit (MTU) of the underlying network. It can thus serve as a transmission unit, e.g., RTP payload. However, for low-rate streams, the size of a VOP itself is relatively small, and the size of a VP thus could be small even if its size threshold is set at a high value during encoding. This is particularly true for PVOPs and BVOPs. Our experiments show that, for the standard video sequence Akiyo (CIF 92 Kb/s, 300 VOPs), the average VP size is only 2527 bits for a threshold of 4000 bits. As a result, if each ALP contains only one data item, the packet header overhead (IP+UDP+RTP) is greater than 11%. Hence, for PVOP and BVOP data, we multiplex the data items of the same priority from several VPs into one ALP. To reduce losses of important data, we also limit the number of data items in an ALP, specifically, four for PVOP shape and motion items. Our experiments show that, in this case, the average ALP size is 5891 b, and the efficiency is thus improved to 94.8%.  From the viewpoint of a video source, multi rate video streams can be produced via two methods. The first is information replication; that is, the sender generates replicated streams for the same video content but at different rates. Each stream thus can serve a subset of receivers that have similar bandwidths. The second is information decomposition. A commonly used decomposition scheme is layering, in which a raw video sequence is compressed into some nonoverlapping streams, or layers. The video quality is low if only one layer is decoded, but can be refined by decoding more layers. A receiver thus can selectively subscribe to a subset of layers according to its capacity or capability. There are two kinds of layering schemes: cumulative and non-cumulative. In cumulative layering, there is a layer with the highest importance, called a base layer, which contains the data representing the most important features of
the video. Additional layers, called enhancement layers, contain data that progressively refine the reconstructed video quality. On the other hand, non-cumulative layering supposes all layers have the same priority, and any subset of the layers can be used for video reconstruction. Therefore, it yields higher flexibility than cumulative layering. The above multi-rate adaptation approaches all rely on end-to-end services, where adaptation is performed on end nodes (the sender or receivers). The argument for active services, however, is that many applications can best be supported or enhanced using information or intelligent services only available inside a network. For example, we can deploy several agents in a large-scale network; the agents partition the network into several confined regions, and each agent can thus handle the requirements from its local region in a much easier manner. There are many trade-offs between end-to-end services and active services. In particular, the deployment of agents is mainly subject to network operators and service providers. In this article we focus only on multi-rate video multicast using end-to-end adaptation.
 Multicast protocols target applications involving a large number of receivers with heterogeneous data reception capabilities. To accommodate heterogeneity, the sender may transmit at multiple rates, requiring mechanisms to determine the rates and to allocate receivers to them. In this paper, we develop a protocol to control the rate of a multicast session with the goal of maximizing inter-receiver fairness, an intra-session measure that captures the collective satisfaction of the session receivers. Our target environment is the Internet, where fair sharing of bandwidth must be achieved via end-system mechanisms and fairness to TCP is important. We develop and evaluate protocols to maximize this measure by maintaining a base-rate group and a variable-rate group. We show that our scheme offers an improvement over single-rate sessions while maintaining TCP-friendliness. Our proposed filtering algorithm is essentially an extension of temporal scalability-based filtering. Taking advantage of the data-partitioning and error-concealment features in MPEG-4, our algorithm yields finer granularity for rate control than basic temporal scalability.
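The packetization figures quoted earlier in this chapter can be checked with a short calculation. Assuming 40 bytes (320 bits) of IP, UDP, and RTP headers per packet (20 + 8 + 12 bytes, a typical but assumed configuration), the overhead for an average 2527-bit VP is 320 / (2527 + 320) ≈ 11.2%, while multiplexing data items into an average 5891-bit ALP gives an efficiency of 5891 / (5891 + 320) ≈ 94.8%, matching the values stated above.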

CHAPTER 6 ARCHITECTURAL DESIGN OF THE PROJECT
6.1 ARCHITECTURAL DESIGN OF THE PROJECT

The diagram below shows the architectural view of the project, where both unicasting and multicasting take place. Unicasting and multicasting are the basic concepts used in this project to send a video file over the Internet.


(Figure: the server browses the video file and sends it to the proxy; the proxy selects the group to which the data should be sent; group management adds or removes groups; clients log in with a user ID and password, select a group, and receive the data as packets; data flow and acknowledgements run between the components.)

FIG.6.1.1 ARCHITECTURAL DESIGN OF THE PROJECT

6.2 MULTICAST


FIG.6.2.1 MULTICAST

 Multicast is sometimes also (incorrectly) used to refer to a multiplexed broadcast. Multicast is a network addressing method for the delivery of information to a group of destinations simultaneously, using the most efficient strategy to deliver the messages over each link of the network only once and creating copies only where the links to the multiple destinations split.
 The word "multicast" is typically used to refer to IP multicast, which is often employed for streaming media and Internet television applications. In IP multicast the implementation of the multicast concept occurs at the IP routing level, where routers create optimal distribution paths, in the form of a spanning tree built in real time, for datagrams sent to a multicast destination address. "Multicast" is also used to describe data-link-layer one-to-many distribution such as Ethernet multicast addressing, ATM point-to-multipoint VCs, or InfiniBand multicast.


FIG.6.2.2 IP MULTICAST

 Internet Protocol (IP) multicast is a bandwidth-conserving technology that reduces traffic by simultaneously delivering a single stream of information to thousands of corporate recipients and homes. Applications that take advantage of multicast include videoconferencing, corporate communications, distance learning, and distribution of software, stock quotes, and news.
 IP multicast delivers source traffic to multiple receivers without adding any additional burden on the source or the receivers, while using the least network bandwidth of any competing technology. Multicast packets are replicated in the network by Cisco routers enabled with Protocol Independent Multicast (PIM) and other supporting multicast protocols, resulting in the most efficient delivery of data to multiple receivers possible. All alternatives require the source to send more than one copy of the data; some even require the source to send an individual copy to each receiver. If there are thousands of receivers, even low-bandwidth applications benefit from using Cisco IP multicast. High-bandwidth applications, such as MPEG video, may require a large portion of the available network bandwidth for a single stream. In these applications, the only way to send to more than one receiver simultaneously is by using IP multicast. The figure below demonstrates how data from one source is delivered to several interested recipients using IP multicast.


The figure shows how a multicast transmission sends a single multicast packet addressed to all intended recipients.

FIG.6.2.3 SINGLE MULTICAST


6.3 MULTICAST GROUP CONCEPTS

Multicast is based on the concept of a group. An arbitrary group of receivers expresses an interest in receiving a particular data stream. This group does not have any physical or geographical boundaries—the hosts can be located anywhere on the Internet. Hosts that are interested in receiving data flowing to a particular group must join the group using IGMP. Hosts must be members of the group to receive the data stream.

6.4 GROUP MULTICAST ADDRESSES

Multicast addresses specify an arbitrary group of IP hosts that have joined the group and want to receive traffic sent to this group.
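A receiver joins such a group address in order to obtain the traffic sent to it; a minimal sketch in Java follows, where the address 230.0.0.1 and port 4446 are the same assumed values used in the sender sketch of Section 2.3:

import java.net.DatagramPacket;
import java.net.InetAddress;
import java.net.MulticastSocket;

public class GroupReceiver {
    public static void main(String[] args) throws Exception {
        // Hypothetical group address and port, matching the sender sketch.
        InetAddress group = InetAddress.getByName("230.0.0.1");
        try (MulticastSocket socket = new MulticastSocket(4446)) {
            socket.joinGroup(group);          // membership is reported to routers via IGMP
            byte[] buf = new byte[1500];
            DatagramPacket packet = new DatagramPacket(buf, buf.length);
            socket.receive(packet);           // blocks until a packet for the group arrives
            System.out.println("received " + packet.getLength() + " bytes");
            socket.leaveGroup(group);         // leaving lets the router prune if no members remain
        }
    }
}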

FIG.6.4.1 GROUP MULTICAST


In this example, multicast traffic from the source hosts A and D travels to the root (Router D) and then down the shared tree to the two receivers, hosts B and C. Because all sources in the multicast group use a common shared tree, a wildcard notation written as (*, G), pronounced "star comma G," represents the tree. In this case, * means all sources, and G represents the multicast group. Therefore, the shared tree shown in the figure would be written as (*, 224.2.2.2). Both shortest path trees (SPTs) and shared trees are loop-free. Messages are replicated only where the tree branches. Members of multicast groups can join or leave at any time, so the distribution trees must be dynamically updated. When all the active receivers on a particular branch stop requesting the traffic for a particular multicast group, the routers prune that branch from the distribution tree and stop forwarding the traffic down that branch. If one receiver on that branch becomes active and requests the multicast traffic, the router dynamically modifies the distribution tree and starts forwarding traffic again. Shortest path trees have the advantage of creating the optimal path between the source and the receivers. This guarantees the minimum amount of network latency for forwarding multicast traffic. This optimization does come with a price, though: the routers must maintain path information for each source. In a network that has thousands of sources and thousands of groups, this can quickly become a resource issue on the routers. Memory consumption from the size of the multicast routing table is a factor that network designers must take into consideration. Shared trees have the advantage of requiring the minimum amount of state in each router. This lowers the overall memory requirements for a network that allows only shared trees. The disadvantage of shared trees is that, under certain circumstances, the paths between the source and receivers might not be the optimal paths, which might introduce some latency in packet delivery. Network designers must carefully consider the placement of the rendezvous point (RP) when implementing an environment with only shared trees.


6.5 MULTICAST FORWARDING

In unicast routing, traffic is routed through the network along a single path from the source to the destination host. A unicast router does not really care about the source address—it only cares about the destination address and how to forward the traffic towards that destination. The router scans through its routing table and then forwards a single copy of the unicast packet out the correct interface in the direction of the destination. In multicast routing, the source is sending traffic to an arbitrary group of hosts represented by a multicast group address. The multicast router must determine which direction is upstream (toward the source) and which direction (or directions) is downstream. If there are multiple downstream paths, the router replicates the packet and forwards the traffic down the appropriate downstream paths—which is not necessarily all paths. This concept of forwarding multicast traffic away from the source, rather than to the receiver, is called reverse path forwarding (RPF).

6.6 RPF CHECK

When a multicast packet arrives at a router, the router performs an RPF check on the packet. If the RPF check is successful, the packet is forwarded; otherwise, it is dropped. For traffic flowing down a source tree, the RPF check procedure works as follows:

1. The router looks up the source address in the unicast routing table to determine whether the packet arrived on the interface that is on the reverse path back to the source.
2. If the packet arrived on the interface leading back to the source, the RPF check is successful and the packet is forwarded.
3. If the RPF check in step 2 fails, the packet is dropped.
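The check above can be expressed as a small sketch, assuming Java and a simplified, hypothetical unicast routing table that maps a source address to the interface used to reach it; the addresses and interface names are taken from the example figures that follow:

import java.util.Map;

public class RpfCheck {
    // unicastRoutes maps a source IP address to the outgoing interface toward that source.
    static boolean accept(String sourceAddress, String arrivalInterface,
                          Map<String, String> unicastRoutes) {
        // Step 1: look up the interface on the reverse path back to the source.
        String reversePathInterface = unicastRoutes.get(sourceAddress);
        // Steps 2 and 3: forward only if the packet arrived on that interface; otherwise drop.
        return reversePathInterface != null && reversePathInterface.equals(arrivalInterface);
    }

    public static void main(String[] args) {
        Map<String, String> routes = Map.of("151.10.3.21", "S1"); // route toward the source is via S1
        System.out.println(accept("151.10.3.21", "S0", routes));  // false: RPF check fails, packet dropped
        System.out.println(accept("151.10.3.21", "S1", routes));  // true: RPF check passes, packet forwarded
    }
}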


The figure below shows an example of an unsuccessful RPF check.

FIG.6.6.1 RPF CHECK FAILS

A multicast packet from source 151.10.3.21 is received on interface S0. A check of the unicast route table shows that the interface that this router would use to forward unicast data to 151.10.3.21 is S1. Because the packet has arrived on S0, the packet will be discarded.


The figure below shows an example of a successful RPF check.

FIG.6.6.2 RPF CHECK SUCCEEDS

This time the multicast packet has arrived on S1. The router checks the unicast routing table and finds that S1 is the correct interface. The RPF check passes and the packet is forwarded. In the PIM (Protocol Independent Multicast) sparse mode model, multicast sources and receivers must register with their local rendezvous point (RP). Actually, the router closest to the sources or receivers registers with the RP, but the point is that the RP knows about all the sources and receivers for any particular group. An RP in one domain has no way of knowing about sources located in other domains. MSDP is an elegant way to solve this problem. MSDP is a mechanism that connects PIM-SM domains and allows RPs to share information about active sources. When RPs in remote domains know about active sources, they can pass that information on to their local receivers, and multicast data can be forwarded between the domains. A nice feature of MSDP is that it allows each domain to maintain an independent RP that does not rely on other domains, while still enabling RPs to forward traffic between domains.


The RP in each domain establishes an MSDP peering session using a TCP connection with the RPs in other domains or with border routers leading to the other domains. When the RP learns about a new multicast source within its own domain (through the normal PIM register mechanism), the RP encapsulates the first data packet in a Source Active (SA) message and sends the SA to all MSDP peers. The SA is forwarded by each receiving peer using a modified RPF check, until it reaches every MSDP router in the interconnected networks—theoretically the entire multicast Internet. If the receiving MSDP peer is an RP, and the RP has a (*, G) entry for the group in the SA (i.e., there is an interested receiver), the RP creates (S, G) state for the source and joins the shortest path tree toward the source. The encapsulated data is decapsulated and forwarded down that RP's shared tree. When the packet is received by a receiver's last-hop router, the last hop may also join the shortest path tree to the source. The source's RP periodically sends SAs, which include all sources within that RP's own domain. The figure below shows how data would flow from a source in domain A to a receiver in domain E.

6.7 MSDP

FIG.6.7.1 MSDP EXAMPLE


MSDP was developed for peering between Internet Service Providers (ISPs). ISPs did not want to rely on an RP maintained by a competing ISP to serve their customers. MSDP allows each ISP to have its own local RP and still forward and receive multicast traffic to and from the Internet.

6.8 IP MULTICAST

IP multicast is a technique for one-to-many communication over an IP infrastructure in a network. It scales to a large receiver population by not requiring prior knowledge of who the receivers are or how many there are. Multicast uses the network infrastructure efficiently by requiring the source to send a packet only once, even if it needs to be delivered to a large number of receivers. The nodes in the network take care of replicating the packet to reach multiple receivers only when necessary. The most common transport layer protocol to use multicast addressing is the User Datagram Protocol (UDP). By its nature, UDP is not reliable—messages may be lost or delivered out of order. Reliable multicast protocols such as Pragmatic General Multicast (PGM) have been developed to add loss detection and retransmission on top of IP multicast. IP multicast is widely deployed in enterprises, commercial stock exchanges, and multimedia content delivery networks. A common enterprise use of IP multicast is for IPTV applications such as distance learning and televised company meetings.


CHAPTER 7 OTHER MULTICAST TECHNOLOGIES
Most efforts at scaling multicast up to large networks have concentrated on the simpler case of single-source multicast, which seems to be more computationally tractable. Still, the large state requirements in routers make applications that use a large number of trees unworkable with IP multicast. Take presence information as an example, where each person needs to keep at least one tree of subscribers, if not several. No mechanism has yet been demonstrated that would allow the IP multicast model to scale to millions of senders and millions of multicast groups, and thus it is not yet possible to make fully general multicast applications practical. For these reasons, and also for reasons of economics, IP multicast is not in general use in the commercial Internet backbone. The increasing availability of WiFi access points that support multicast IP is facilitating the emergence of WiCast (WiFi multicast), which allows the binding of data to geographical locations. Explicit Multi-Unicast (XCAST) is an alternative multicast strategy to IP multicast that provides the reception addresses of all destinations within each packet. As such, since the IP packet size is limited in general, XCAST cannot be used for multicast groups with a large number of destinations. The XCAST model generally assumes that the stations participating in the communication are known ahead of time, so that distribution trees can be generated and resources allocated by network elements in advance of actual data traffic. Other multicast technologies not based on IP multicast are more widely used, notably Internet Relay Chat (IRC), which is more pragmatic and scales better for large numbers of small groups. IRC implements a single spanning tree across its overlay network for all conference groups; this, however, leads to suboptimal routing for some of these groups. Additionally, IRC keeps a large amount of distributed state, which limits the growth of an IRC network and leads to fragmentation into several non-interconnected networks. The lesser-known PSYC technology uses custom multicast strategies per conference. Some peer-to-peer technologies also employ the multicast concept when distributing content to multiple recipients.


CHAPTER 8 PROXY SERVER
A proxy server is a server that sits between a client application, such as a Web browser, and a real server. It intercepts all requests to the real server to see if it can fulfill the requests itself; if not, it forwards the request to the real server. Proxy servers have two main purposes.

8.1 IMPROVE PERFORMANCE

Proxy servers can dramatically improve performance for groups of users, because a proxy saves the results of all requests for a certain amount of time. Consider the case where both user X and user Y access the World Wide Web through a proxy server. First, user X requests a certain Web page, which we'll call Page 1. Some time later, user Y requests the same page. Instead of forwarding the request to the Web server where Page 1 resides, which can be a time-consuming operation, the proxy server simply returns the copy of Page 1 that it already fetched for user X. Since the proxy server is often on the same network as the user, this is a much faster operation. Real proxy servers support hundreds or thousands of users. The major online services such as America Online, MSN, and Yahoo, for example, employ arrays of proxy servers.

8.2 FILTER REQUESTS

Proxy servers can also be used to filter requests. For example, a company might use a proxy server to prevent its employees from accessing a specific set of Web sites.

8.3 TYPES OF PROXY SERVERS

8.3.1 WEB PROXY SERVERS

A common proxy application is a caching Web proxy. This provides a nearby cache of Web pages and files available on remote Web servers, allowing local network clients to access them more quickly and reliably.

When it receives a request for a Web resource (specified by a URL), a caching proxy looks for the resulting URL in its local cache. If it is found, it returns the document immediately. Otherwise it fetches it from the remote server, returns it to the requester, and saves a copy in the cache. The cache usually uses an expiry algorithm to remove documents from the cache according to their age, size, and access history. Two simple cache algorithms are Least Recently Used (LRU) and Least Frequently Used (LFU): LRU removes the least recently used documents, and LFU removes the least frequently used documents. Web proxies can also filter the content of the Web pages they serve. Some censorware applications, which attempt to block offensive Web content, are implemented as Web proxies. Other Web proxies reformat Web pages for a specific purpose or audience; for example, Skweezer reformats Web pages for cell phones and PDAs. Network operators can also deploy proxies to intercept computer viruses and other hostile content served from remote Web pages. A special case of Web proxies is the "CGI proxy": a Web site that allows a user to access another site through it. CGI proxies generally use PHP or CGI to implement the proxying functionality and are frequently used to gain access to Web sites blocked by corporate or school proxies. Since they also hide the user's own IP address from the Web sites they access through the proxy, they are sometimes also used to gain a degree of anonymity.
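A minimal sketch of the LRU policy mentioned above, assuming Java; the capacity value is arbitrary and the class is hypothetical, not part of any particular proxy implementation:

import java.util.LinkedHashMap;
import java.util.Map;

// Least Recently Used document cache: when a new document pushes the cache past its
// capacity, the entry that has not been accessed for the longest time is evicted.
public class LruDocumentCache extends LinkedHashMap<String, byte[]> {
    private final int capacity;

    public LruDocumentCache(int capacity) {
        super(16, 0.75f, true);   // access-order mode moves an entry to the back on every get()
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<String, byte[]> eldest) {
        return size() > capacity; // evict the least recently used URL
    }
}

On a request, cache.get(url) either returns the stored page or, on a miss, the proxy fetches the page from the remote server and stores it with cache.put(url, body).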


8.3.2 TRANSPARENT PROXY SERVERS

This type of proxy server identifies itself as a proxy server and also makes the original IP address available through the HTTP headers. Transparent proxies are generally used for their ability to cache websites and do not effectively provide any anonymity to those who use them; however, using a transparent proxy will get you around simple IP bans. They are transparent in the sense that your IP address is exposed, not in the sense that you do not know you are using one (your system is not specifically configured to use it).

8.3.3 ANONYMOUS PROXY SERVERS

This type of proxy server identifies itself as a proxy server but does not make the original IP address available. This type of proxy server is detectable, but it provides reasonable anonymity for most users.

8.3.4 DISTORTING PROXY SERVERS

This type of proxy server identifies itself as a proxy server but makes an incorrect original IP address available through the HTTP headers.

8.3.5 HIGH ANONYMITY PROXY SERVERS

This type of proxy server does not identify itself as a proxy server and does not make the original IP address available.

FIG.8.1.1 PROXY SERVER


CHAPTER 9 RUNNING PROCEDURE

9.1 DATABASE CONNECTIVITY

To connect to the database, follow the procedure given below. Click START → SETTINGS → CONTROL PANEL → ADMINISTRATIVE TOOLS.

 Next, double-click Data Sources (ODBC).
 A dialog box appears; click the ADD button.
 In the window that is displayed, double-click the Microsoft Access Driver.
 Another dialog box with some data fields is shown.
 This dialog box contains a text box named Data Source Name.
 In that text box enter the data source name as router and press the OK button.
 Select the path where the information is stored in an MS-Access table.
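Once the DSN named router exists, the application can open it through JDBC. The sketch below assumes Java with the legacy JDBC-ODBC bridge driver (available up to Java 7) and a hypothetical users table, since the report does not list the database schema:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class UserCheck {
    // Returns true if the given user name and password exist in the service database.
    static boolean isAuthorized(String user, String password) throws Exception {
        Class.forName("sun.jdbc.odbc.JdbcOdbcDriver");          // legacy JDBC-ODBC bridge driver
        try (Connection con = DriverManager.getConnection("jdbc:odbc:router");
             PreparedStatement ps = con.prepareStatement(
                     "SELECT username FROM users WHERE username = ? AND password = ?")) { // table name assumed
            ps.setString(1, user);
            ps.setString(2, password);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next();
            }
        }
    }
}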


STEP 1: Start the server window.

FIG.9.1.1 START THE SERVER WINDOW


STEP 2: Start the login window; a new user can click the "New user click here" button for registration.

FIG.9.1.2 START NEW USER LOGIN WINDOW

STEP 3: Fill in the fields provided for registration, then click the "Registration" button. A dialog box asking the user to wait for some time is shown.

FIG.9.1.3 REGISTRATION WINDOW

STEP 4: If the same user tries to enter through the login page before being authenticated by the server, an "Invalid user" dialog box is shown.

FIG.9.1.4 LOGIN WINDOW BEFORE AUTHENTICATION


STEP 5: Now, on the server side, the users are checked against their user profiles.

FIG.9.1.5 USER PROFILE WINDOW


STEP 6: Clicking the authentication button shows the list of new users.

FIG.9.1.6 USER LIST WINDOW

STEP 7: Now the server refers to the user profile and authenticates the user by clicking the "Accept" button. A dialog box is displayed.

FIG.9.1.7 USER AUTHENTICATION WINDOW

STEP 8: Now the user can log in to the page by selecting the group (Chennai).

FIG.9.1.8 USER LOGIN WINDOW (a)


STEP 9: Another user enters a different group (Bangalore) by the same process.

FIG.9.1.9 USER LOGIN WINDOW( b)


STEP 10: Now start the proxy. The proxy decides which group (Chennai) the file should be sent to. After selecting the group, click the "Start" button; the port number is displayed.

FIG.9.1.10 PROXY WINDOW


STEP 11: On the server side, click the "Browser" button to select the file to be sent to the group.

FIG.9.1.11 FILE BROWSER WINDOW

STEP 12: Select the file to be sent.

FIG.9.1.12 FILE SELECTION WINDOW


STEP 13: The selected file name is now displayed in the file name field.

FIG.9.1.13 SELECTED FILE WINDOW


STEP 14: After clicking the send button the file is transferred. The file name and file size are displayed in the file content area.

FIG.9.1.14 FILE TRANSFERRING WINDOW


STEP 15: On the proxy side, the file is split into packets, and the proxy sends the packets to the clients.

FIG.9.1.15 PACKET SPLITTING WINDOW


STEP 16: On the client side, click the client status tab. The file received as packets can be seen for the group selected by the proxy; the Chennai group gets the file.

FIG.9.1.16 PACKET RECEIVING WINDOW


STEP 17: The person who belongs to another group won’t get the file.

FIG.9.1.17 ANOTHER GROUP USER WINDOW


STEP 18: The proxy can also be used to send data to another group, so click "Refresh".

FIG.9.1.18 SEND DATA FOR ANOTHER GROUP


STEP 19: Select the group to send the data and click the “Start” button.

FIG.9.1.19 GROUP SELECTION WINDOW

STEP 20: On the server side, browse for the file to be sent and send it.

FIG.9.1.20 SENDING FILE WINDOW


STEP 21: The proxy shows the file in packet form.

FIG.9.1.21 PROXY WINDOW


STEP 22: The group member will receive the file.

FIG.9.1.22 RECEIVING FILE WINDOW (a)


STEP 23: The other group member will not receive the file.

FIG.9.1.23 RECEIVING FILE WINDOW (b)


STEP 24: The received file can be seen in the "client files" tab on the login page.

FIG.9.1.24 CLIENT FILE WINDOW


CHAPTER 10 FUTURE ENHANCEMENT AND APPLICATIONS
10.1 FUTURE ENHANCEMENT

The data file broadcast from the server is split into packets, which are stored at the proxy while being delivered to the clients. When a new client arrives, its arrival time is noted and the packets yet to be sent to the other clients are also sent to the new client; the packets already sent to the other clients are sent to the new client after the remaining transmissions are over. This provides a good working environment for both the central server and the clients.

10.2 APPLICATIONS

 Video file delivery and file transfer over the Internet.
 Video communications over wireless channels.
 IPTV applications such as distance learning and televised company meetings.

10.3 CONCLUSION

Most previous work on proxy servers deals with reducing the server workload. We have proposed a proxy-assisted file delivery architecture that employs a central-server-based periodic broadcast scheme to efficiently utilize central server and network resources, while at the same time exploiting proxy servers to significantly reduce the service latency experienced by clients.


CHAPTER 11 BIBLIOGRAPHY

[1] B. Li and J. Liu, “Multi-rate video multicast over the Internet: an overview,” IEEE Network, vol. 17, no. 1, pp. 24–29, Jan. 2003.

[2] Overview of the MPEG-4 Standard, Mar. 2002.

[3] T. Jiang, E. Zegura, and M. Ammar, “Inter-receiver fair multicast communication over the Internet,” in Proc. NOSSDAV ’99, Jun. 1999, pp. 103–114.

[4] B. Vickers, C. Albuquerque, and T. Suda, “Source adaptive multi-layered multicast algorithms for real-time video distribution,” IEEE/ACM Trans. Networking, vol. 8, no. 12, pp. 720–733, Dec. 2000.

[5] J. Liu, B. Li, Y.-T. Hou, and I. Chlamtac, “On optimal layering and bandwidth allocation for multi-session video broadcasting,” IEEE Transactions on Wireless Communications, vol. 3, no. 3, pp. 656–667, Mar. 2004.

[6] Y. Wang, J. Ostermann, and Y.-Q. Zhang, Video Processing and Communications. Upper Saddle River, NJ: Prentice-Hall, 2001.

