
DESIGN AND IMPLEMENTATION OF PEER TO PEER FILE

SHARING NETWORK SYSTEM

A PROJECT PROPOSAL

BY

OKWOR JOSEPH

FCAI/CST/HND/2020/2021/0202

PRESENTED TO

THE DEPARTMENT OF COMPUTER SCIENCE,

FEDERAL COLLEGE OF AGRICULTURE, ISHIAGU, EBONYI STATE,

IN PARTIAL FULFILLMENT OF THE REQUIREMENTS

FOR THE AWARD OF HIGHER NATIONAL DIPLOMA (HND)
IN COMPUTER SCIENCE, FEDERAL COLLEGE OF
AGRICULTURE, ISHIAGU,

EBONYI STATE.

JULY, 2022
CHAPTER ONE
BACKGROUND OF THE STUDY
During the COVID-19 pandemic, many teachers have adapted and started using
video lectures. This poses a number of technical challenges which many schools
and universities might not be prepared for. Everything from camera equipment and
microphones to file distribution needs to be set up and be maintained in a way that
works for the teachers, students, and the organization as a whole.

For teachers to be able to distribute files to their students, such as video lectures, a
method of delivery must be used which ensures that students will be able to access
the files asynchronously, i.e. on a schedule that may not match that of the teachers.
One such widely applied solution today is to upload the file to a constantly online
centralized server, which allows the students to access the file at their own leisure.
However, this traditional approach is reliant on the constant availability of a
centralized service, which poses not only economic concerns but also concerns
regarding reliance on a third party. One can easily imagine a scenario where the
third party owning the centralized service might have very high fees, place
advertisements, or sell user data to advertising companies. By using a centralized
system the teacher and students will be dependent on the centralized system and
will have minimal control over the distribution of the files.

An alternative solution for distributing video lectures and other files to students is to
use a peer-to-peer file-sharing network, such as the popular BitTorrent protocol. In
this protocol, peers connect in order to exchange data, thus alleviating bandwidth
and uptime requirements of any single device. The peers are also able to distribute
the file amongst each other in an efficient manner. In the BitTorrent protocol, the
file is split into pieces which are distributed amongst the peers. A peer must also
determine which of the available pieces it should get next. In BitTorrent, this is
called a “piece selection process”. The way a peer selects the next piece will affect
the total duration of the file transfer, not only for the peer itself but for everyone in
the network. One can also imagine that other parameters affect the average
download time for peers in the network, such as the file size, the upload and
download speed of peers, and the size of each piece of the file.
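
To make the piece selection process concrete, the following sketch illustrates a simplified "rarest first" strategy of the kind used by common BitTorrent clients; the data structures and function name are illustrative assumptions rather than part of any particular client.

from collections import Counter

def select_next_piece(have, peers_bitfields):
    """Return the index of the rarest piece this peer is still missing, or None."""
    availability = Counter()
    for bitfield in peers_bitfields:              # count how many peers hold each piece
        availability.update(bitfield)
    candidates = [piece for piece in availability if piece not in have]
    if not candidates:
        return None
    return min(candidates, key=lambda piece: availability[piece])   # rarest piece first

# Example: we already have piece 0; piece 2 is held by only one peer, so it is chosen.
print(select_next_piece({0}, [{0, 1, 2}, {0, 1}]))   # -> 2
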
STATEMENT OF THE PROBLEM

In the traditional method of sharing and distributing files, the file parts are stored in
publicly available services, where any authorized person can access and download them.
Hence, by sharing the index data of a file with another PiCsMu user, he or she can obtain
all necessary file parts from the corresponding cloud services and reconstruct the file
using the index data. As a client-server (C/S) application, PiCsMu offers a
centralized way of sharing files. However, having one central authority bears
drawbacks for file sharing. High network traffic from many concurrent
users may rapidly lead to congestion at the server, causing a temporary denial of
service or a complete breakdown. Further, the identity of the users (i.e., their Internet
Protocol (IP) addresses) is known to the server and can be requested by governmental
authorities for prosecution in case of sharing copyrighted or sensitive files. For the same
reason, it is also easy for an authority to shut down the entire system by disabling
the central server.

AIM AND OBJECTIVES

The aim of the project is to design and implement a working decentralized peer-to-peer
file distribution system.

The objectives include the following:

I. To design and develop an efficient BitTorrent-based protocol in which peers
downloading a file also upload the pieces they already have to other peers,
making it possible for a very large network of peers to download the file from
one or many peers who hold parts of it.

II. To develop a sufficient metainfo file (i.e., a metadata file containing all the
information a peer needs to start downloading the file from the network; it
contains the URI of one or more trackers as well as information about the file
the peer wants to download). A minimal sketch of such a metainfo structure is
given after this list.

III. To design a reliable tracker (i.e., a simple application that keeps track of the
current state of each peer and provides that data to other peers).
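
As an illustration of objectives II and III, the sketch below shows the general shape of a BitTorrent metainfo ("torrent") dictionary built from a file and a tracker URI; the piece size, file name, and tracker address are hypothetical values, and the bencoding of the dictionary is omitted.

import hashlib
import os

PIECE_LENGTH = 256 * 1024  # 256 KiB per piece (a common, but here assumed, default)

def build_metainfo(file_path, tracker_uri):
    """Split the file into fixed-size pieces, hash each piece with SHA-1, and
    assemble the metainfo dictionary that would normally be bencoded to disk."""
    piece_hashes = b""
    total_size = 0
    with open(file_path, "rb") as f:
        while True:
            piece = f.read(PIECE_LENGTH)
            if not piece:
                break
            piece_hashes += hashlib.sha1(piece).digest()
            total_size += len(piece)
    return {
        "announce": tracker_uri,                   # URI of the tracker (objective III)
        "info": {
            "name": os.path.basename(file_path),   # suggested file name
            "length": total_size,                  # total file size in bytes
            "piece length": PIECE_LENGTH,          # size of each piece
            "pieces": piece_hashes,                # concatenated SHA-1 piece digests
        },
    }

# Example with hypothetical values:
# meta = build_metainfo("lecture01.mp4", "http://tracker.example.edu:6969/announce")
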
SCOPE OF THE STUDY

The scope of the study focuses on the research question:

How do download and upload speed, piece size, and file size affect the average
download time in a distributed peer-to-peer file-sharing network?

To answer the research question, we specifically examine the popular peer-to-peer
file-sharing protocol known as BitTorrent, built around key concepts such as
distributed resilience, scalability, and mutual cooperation. The scope is further
restricted to producing results from an actual real-time cluster of controlled peers,
without simulating or simplifying components such as network transfer. The
examined scenario consists of 100 downloaders and one original uploader. This
bears resemblance to scenarios such as a university class, in which all
participants attempt to begin downloading the file from a single original uploader at
the same time. The participants also continue uploading the file until all participants
have downloaded it in full, rather than displaying a more realistic distribution of
start and stop times in their participation. Under the examined scenario, all
participants are considered to be geographically close to each other (confined to the
continent of Europe) and operate under good network conditions. The participants
are furthermore assumed to exhibit no considerable resource contention arising from
hardware performance issues.

LIMITATION OF THE STUDY

Compared to downloading a file from a traditional centralized file-hosting service,
which utilizes only the users' browsers, BitTorrent does pose a significant increase
in the technical challenges that both students and in particular teachers may face;
students would have to familiarize themselves with a new client, perhaps less
intuitively designed than the more mainstream web browsers, and would require
that concepts such as "seeding" are explained to them in order to ensure availability
of the file. Meanwhile, teachers would need not only that same additional
knowledge, but also an understanding of concepts such as trackers and
miscellaneous networking concepts. These problems could potentially be mitigated
by the creation of an app with reasonable default configuration, a potential field of
future study.
An unfortunate side effect of not relying on a centralized service is that one or more
peers are required to be online at all times that the file is supposed to
be available. Compared to a centralized server which likely serves a wide variety of
files and can be scaled horizontally depending on demand, for smaller decentralized
networks this may mean that one or more computers or servers are kept running at
all times, dedicated only to this task, regardless of demand. Thus, the protocol in its
current form poses environmental sustainability concerns. Research into the
feasibility of using phones or similar always-on devices to sustain the torrent may
find ways to mitigate this issue.

In addition, time and financial constraints have limited this project's research.
CHAPTER TWO

BRIEF REVIEW OF RELATED LITERATURE

The main focus of this work is to enable sharing functionality in PiCsMu using a P2P
network. In order to position PiCsMu as a P2P file-sharing system among other
existing solutions, a brief analysis and comparison of main characteristics is
necessary. Therefore, this section describes the most well-known P2P systems and
highlights specific characteristics that are compared to PiCsMu.

NAPSTER
Napster was one of the first file-sharing systems to become widely popular. It
was founded in early 1999 as an online service to share audio files. During its prime
time in 2001, approximately 1.6 million users were online at the same time. It is
estimated that over 2 billion audio files were downloaded up until that point.
Despite its success, Napster had severe problems with copyright issues and
subsequent lawsuits. Based on a court decision in 2001, an injunction was passed
ordering Napster to shut down. The service had a restart in 2003. It now uses a pay-
per-song charging model to avoid additional copyright lawsuits.

Saroiu et al. classify Napster as an unstructured, centralized P2P system.
Unstructured systems are characterized by data placement in the network without
any knowledge of its topology, in contrast to structured P2P systems. The main
component in the system architecture is the cluster of dedicated central servers.
These are responsible for bootstrapping as well as providing the lookup service. The
cluster maintains an index with all information on file locations. This includes a list
of currently connected users and their files. Each time a peer starts Napster, it
establishes a connection to the central server cluster. When looking for a file, the
peer queries the index servers. The query is processed by checking each connected
user for availability of the file. A list of possible trade partners is returned. Then, the
requesting peer can choose a user to download from. A direct Hypertext Transfer
Protocol (HTTP) connection is established between both peers, and the file can be
downloaded. After the file exchange, the HTTP connection is closed.
GNUTELLA

Gnutella is a term that has different meanings. It mostly refers to an open-source
distributed file-sharing protocol, originally developed by Justin Frankel and
Tom Pepper in early 2000. When talking about Gnutella, one can refer either to the
file-sharing network itself or to the original client software used to connect to the
network.
In this work, only the protocol is of interest for comparison with PiCsMu. The
system developed based on the Gnutella 0.4 protocol is classified as an unstructured
P2P network. The network has no knowledge of file locations. There is no central
server or database that could be in control of this task. This is a major difference
from the central index paradigm that Napster used. In order to be part of the
Gnutella network, clients first have to bootstrap to find an existing node. Since
the network topology changes constantly, finding peers already connected to the
network can be a problem. There are different possibilities implemented that
address this issue. The most common one is to use a predefined list of well-known
hosts that should always be reachable. This solution is similar to having multiple
bootstrap servers.

Gnutella Web caches or UDP (User Datagram Protocol) host caches are additional
solutions. Web caches are programs placed on any Web server that store the IP
addresses of hosts currently online in the Gnutella network. These servers constantly refresh
their cache to be up-to-date and can be queried by the Gnutella application. UDP
host caches work in a similar way. Peer information is included in the UDP packets
transferred within a Gnutella network. If a peer contacts a bootstrap peer (e.g., from
the list of well-known hosts), it receives an additional list of online peers in the
UDP response message. Since UDP messages are very small and do not take much
bandwidth, this approach scales and performs better than using Web caches. In
addition, every peer in the network can be a UDP host cache.

BITTORRENT

The biggest player in P2P file sharing nowadays is the BitTorrent protocol. It
accounts for 50-75% of the overall P2P traffic and a substantial share of all Internet
traffic. BitTorrent was established in 2001 and has since developed further, now
having a variety of clients available. The main strength of BitTorrent lies in the
possibility of downloading large files considerably fast by using parallel downloads
from multiple peers. The architecture models an unstructured overlay network in
which peers participate to share and download files. Peers that actively participate
in the file exchange are called a swarm. A file is downloaded simultaneously
from multiple other peers inside the swarm instead of just a single peer. This
procedure allows individual pieces (i.e., chunks) of a file to be downloaded from
different sources. In order to organize the swarm, BitTorrent uses central servers
called trackers. A tracker is responsible for finding and connecting peers that
possess parts of the file. Unlike other P2P file-sharing systems discussed in this
chapter, BitTorrent does not support searching for files from within the client
application. In order to download a file, the user has to possess a torrent file. These
are very small files that contain metadata about the file to be shared and about a
specific tracker used to join the swarm. Torrent files are normally obtained through
third-party Web servers. Peers can download a file if the peer providing the resource
runs a BitTorrent client acting as the seed. The seed (or seeder) refers to a peer
holding a complete file (i.e., possessing all chunks).
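
As a rough illustration of the bookkeeping a tracker performs, the sketch below keeps an in-memory record of the peers in each swarm and returns the other known peers to whoever announces; the function name, parameters, and plain Python return value are simplifying assumptions — a real tracker exposes this over HTTP or UDP and returns bencoded responses.

import time

# info_hash -> {peer_id -> peer record}; kept in memory for illustration only
swarms = {}

def announce(info_hash, peer_id, ip, port, event=""):
    """Record the announcing peer and return the other peers known for this torrent."""
    peers = swarms.setdefault(info_hash, {})
    if event == "stopped":
        peers.pop(peer_id, None)          # the peer is leaving the swarm
    else:
        peers[peer_id] = {"ip": ip, "port": port, "last_seen": time.time()}
    return [record for pid, record in peers.items() if pid != peer_id]

# Example with hypothetical values:
# announce("abc123", "peer-1", "10.0.0.5", 6881)   # seeder joins, no other peers yet
# announce("abc123", "peer-2", "10.0.0.9", 6881)   # downloader receives peer-1's record
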

ANONYMOUS P2P AND FREENET

The growth of censorship and erosion of privacy on the Internet is the driving force
behind the idea of anonymous P2P. In these systems, peers that share information
and files try to protect their identity. This is one of the motivations behind PiCsMu
as well. There are many reasons to favor anonymity on the Internet. The distribution
of content (e.g., sharing of audio and video files) may be illegal. Users could fear
retribution from the government or an organization. The so-called whistle-blower
affair around WikiLeaks is the most prominent example. Further reasons to prefer
anonymity include censorship and personal privacy preferences. Users do not want
data about their behavior to be stored or analyzed. Ian Clarke and other developers
behind Freenet were among the first to represent this philosophy and provided a
P2P file-sharing software that was built to protect anonymity.
Freenet is a fully decentralized architecture with no central control. The basic
principles are encryption, data forwarding, and data storage. In contrast to the other
P2P systems presented, peers provide two essential services.

First, each peer provides local storage space (data store) to the Freenet network,
building a large distributed cache. The user has no control and no knowledge of
what will be stored in their data store. Files are not only transmitted between peers
but actually stored. Therefore, Freenet is referred to as a file storage service rather
than a file-sharing service. Second, peers are responsible for routing. Each peer
holds a private routing table in which known routes to file keys are stored.
Information contained in the routing table is kept private to the peer itself.
Each time data is inserted into the network, a file key is generated to locate the data.
File keys are calculated using Secure Hash Algorithms (SHA), e.g., SHA-1 and
SHA-2. This key-based approach is similar to the DHT approach used in PiCsMu
and selected BitTorrent clients, e.g., µTorrent, Vuze, and BitTorrent (since version
4.2.0).
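
As an illustration of such key-based addressing, the short sketch below derives a content key from file data with SHA-1 or SHA-2; the helper name and hexadecimal output are illustrative assumptions and do not reproduce Freenet's or PiCsMu's actual key formats.

import hashlib

def content_key(data, algorithm="sha1"):
    """Derive a location key for a piece of data using the given hash algorithm."""
    return hashlib.new(algorithm, data).hexdigest()

data = b"example lecture data"
print(content_key(data))             # SHA-1 key: 40 hexadecimal characters
print(content_key(data, "sha256"))   # SHA-2 (SHA-256) key: 64 hexadecimal characters
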

COMPARISON

This section compares the PiCsMu system, with its P2P capabilities, to the systems
described above. The detailed procedures behind PiCsMu are explained throughout this
work. Although many more systems exist that are not directly considered in the
comparison, each selected example represents the characteristics of a typical group of
P2P systems. A discussion of security-related issues is not part of this work, because
security is mainly influenced by very specific implementations rather than by the
general system design.

Lua et al. present a complete comparison of structured and unstructured P2P overlays.

The following terminology is used:

I. Topology - Categorizes the network topology into centralized or decentralized.
II. Architecture - Categorizes the overlay scheme into structured or
unstructured.
III. Lookup - The implemented protocol to query other peers for information.
IV. Efficiency - Is the lookup guaranteed to find content and how efficient is it?
V. Bootstrapping - Describes the mechanism on how a peer joins the network.
VI. Storage - What do peers store, and what is exchanged?
VII. File Search - Describes how the user can search for files.
VIII. Download - Identifies the entity where data is downloaded from.
IX. Upload - Identifies the entity where data is uploaded to.
X. Public Sharing - Does the system support the sharing of files with all peers?
XI. Private Sharing - Does the system support the sharing of files with specific
peers only?

CHAPTER THREE

SYSTEM DESIGN AND ANALYSIS

METHOD
One of the primary goals of the method was to design and develop a way to run
scalable and reliable network experiments for a large number of peers. In this study,
the designed method was used to run a medium-sized network, but the method
was designed to scale both horizontally, over an arbitrary number of virtual
machines, and vertically, up to as many peers per virtual machine as desired and
tenable.
To accomplish this, automation was key, as manually configuring hundreds of peers
across a multitude of virtual machines and different experiment runs would not be
feasible. In this study, a total of 9696 different peers were set up and configured.
Such an amount would inevitably open up the risk of human error influencing the
end results of the experiment.

There were several pros and cons with this approach; naturally, running a real
network on real virtual machines makes some external factors difficult to control. In
fact, networking has quite a lot of external factors that were not controlled for in
this experiment, such as ping, router performance, NATs, and many more. It is also
clear that there is roughly a ±1 second difference in when the peers start, as they are
synchronized by polling the data ingestion system once every half second.
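
A simplified sketch of this polling-based start synchronization is shown below; the half-second interval comes from the description above, while the flag-checking callable stands in for the actual data ingestion system, which is not specified here.

import time

def wait_for_start(poll_flag, interval=0.5):
    """Block until poll_flag() reports that the experiment has been started."""
    while not poll_flag():
        time.sleep(interval)   # poll every half second, as described above

# Example with an in-process flag standing in for the data ingestion system:
# started = {"value": False}
# wait_for_start(lambda: started["value"])
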

There are also several smaller factors of error in this experiment, such as the
unpredictability of Google Cloud Compute resources, a tiny delay between when an
event happened in a peer and when it was reported, potential resource contention,
and monitoring that uses processing and networking resources. These smaller factors
of error were, however, actively monitored during the experiment, and we saw no
evidence of resource contention or unexpected Google Cloud Compute resource
changes. Other methods were considered that could lead to smaller margins of
error. One such method that was considered was to replace the networking layer of
applications and instead use process-to-process communication on the same
computer or in-process communication. This approach, while technically feasible,
turned out to be impractical. Due to the replaced networking layer, there would be
no way to horizontally scale the experiment across multiple machines.

METHODOLOGY
A software development methodology is a framework that is used to structure,
plan, and control the process of developing an information system. This includes the
pre-definition of specific deliverables and artefacts that are created and completed
by a project team to develop or maintain an application. A wide variety of such
frameworks have evolved over the years, each with its own recognized strengths
and weaknesses. One software development methodology framework is not
necessarily suitable for use by all projects. Each of the available methodology
frameworks is best suited to specific kinds of projects, based on various technical,
organizational, project, and team considerations. These software development
frameworks are often bound to some kind of organization, which further develops,
supports the use of, and promotes the methodology framework.

SYSTEM DEVELOPMENT LIFE CYCLE


The system development life cycle is the process of developing software on the basis of
the requirements of the end user in order to produce efficient and good quality software.
It is necessary to follow a particular procedure. The sequence of phases that must be
followed to develop good quality software is known as the SDLC (System
Development Life Cycle). The software is said to have a life cycle composed of
several phases. Each of these phases results in the development of either a part of
the system or something associated with the system, such as a test plan or a user
manual. In the life cycle model called the “spiral model,” each phase has well-defined
starting and ending points, with clearly identifiable deliverables to the next
phase. As with most undertakings, planning is an important factor in determining
the success or failure of any software project. Essentially, good project planning
will eliminate many of the mistakes that would otherwise be made and reduce the
overall time required to complete the project. As a rule of thumb, the more complex
the problem is, the more thorough the planning process must be. Most
professional software developers plan a software project using a series of steps
generally referred to as the software development life cycle. The following example
is a generic model that should give some idea of the steps involved in a typical
software project.

THE DIAGRAM OF SYSTEM DEVELOPMENT LIFECYCLE

SYSTEM REQUIREMENTS
Hardware Requirements
Processor: Intel(R) Core or higher
Installed Memory: 4.00 GB or higher
Speed: 1.40 GHz or faster
System Type: 32/64-bit operating system, x86/x64-based processor
Software Requirements
Operating System: Windows 7/8/8.1/10
Database: MySQL Server version 5.3 and above
Web Server: Apache
Technologies: HTML, CSS, jQuery/Ajax and PHP
IDE & Tools: Aptana Studio, phpMyAdmin

ANALYSIS OF THE EXISTING SYSTEM

In a centralized method of sharing and distributing files, the file parts are stored on a
publicly available central server owned by an individual or a corporate body; hence,
any authorized person can access and download them.
It provides a broad selection of popular online material. These services are quite
often used with Internet collaboration methods, including email, blogs, forums, or
other mediums, where direct download links from the file hosting services can be
included. These service websites usually host files to enable users to download
them. Once users download or make use of a file using a file-sharing network, their
computer also becomes a part of that network, allowing other users to download
files from the user's computer.
However, having one central authority bears drawbacks in applying file sharing.
High network traffic from many concurrent users may rapidly lead to congestion at
the server, causing a temporary denial of service or a complete breakdown. Further, the
identity of the users (i.e., Internet Protocol (IP) address) is known to the server and
can be required by governmental authorities for prosecution in case of sharing
copyrighted or sensitive files. Having said this, it is also easy for an authority to
shut down the entire system by disabling the central server.

Another major issue with centralized file-sharing applications is the problem of
spyware or adware, as some file-sharing websites have placed spyware programs on
their websites. These spyware programs are often installed on users' computers
without their consent or awareness.
ANALYSIS OF THE PROPOSED SYSTEM

Unlike a centralized file-sharing system, a decentralized file system or peer-to-peer
file-sharing method bypasses the reliance on third-party software, namely the central
server that hosts the files to be downloaded or uploaded.

This chapter explains the composition of the PiCsMu software. The designed
protocols are shown and explained with the help of selected figures. The conception
of the main components is described in detail, and design decisions are analyzed,
showing their trade-offs.

FILE UPLOAD AND DOWNLOAD PROTOCOL


An abstraction of the protocols for file upload and file download is introduced here.
These two are the most basic functions in the PiCsMu system.
The figures below show a simplified sequence diagram of the file upload and
download protocols. The Application Core forms the central point of the software
and handles
(1) initialization and
(2) control flow.

UPLOAD PROTOCOL
Each operation is executed from here once a specific start condition is available. A
start condition can be a user input (e.g., click a button in the application) or the
result of a previously finished operation. File encoding and fragmentation are the
two main mechanisms provided by the Application File Handler.
Its responsibilities are, on the one hand, to prepare a file to be uploaded and, on the
other hand, to reconstruct a file from the downloaded file parts. The P2P/Central
Server object represents the storage for the index (i.e., the set of all information
needed to store and retrieve a file). The index is explained in more detail in a later
chapter. If a file is stored as private, the central server is used. Otherwise, if a file is
being shared, the P2P network provides the index storage. Finally, Cloud Service
represents the set of cloud services that provide the actual data storage for the
encoded file parts. A file upload in general works as follows: The user selects a file
to upload for sharing or private usage. In order to make use of the weak data
validation in cloud services, the file first needs to be split into several file parts.
This fragmentation process is handled by the Application File Handler. Each of the
file parts is then encoded into files with file types that are accepted in the cloud
services in use. To be more precise, the file types of the encoded file parts have to
comply with regulations of the cloud services. If only image files (e.g., PNG, JPEG,
GIF) are allowed to be uploaded, the file parts have to be encoded into images.
The number and order of the produced file parts, together with the applied
encoding algorithms, are stored in the index. Next, if all file parts are encoded, each
is uploaded to one or more cloud services. The personal accounts of the user are
used to gain access. In order to find all parts related to a file and to decode them
using the correct algorithm, the index needs to be stored as well. Depending on the
state of the file, shared or private, the index is stored on the P2P network or the
central server respectively.
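
To picture the fragmentation step described above, the sketch below splits a file into fixed-size parts and records, for each part, the order, size, and (placeholder) encoding that the index would need; the part size and the "png-embed" label are assumptions for illustration, not PiCsMu's actual algorithms.

PART_SIZE = 1024 * 1024  # 1 MiB per part (placeholder value)

def fragment_file(path, part_size=PART_SIZE):
    """Split a file into parts and record the index information for each part."""
    parts, index_entries = [], []
    with open(path, "rb") as f:
        order = 0
        while True:
            chunk = f.read(part_size)
            if not chunk:
                break
            parts.append(chunk)   # each part would next be encoded, e.g. embedded into an image
            index_entries.append({"order": order, "size": len(chunk), "encoding": "png-embed"})
            order += 1
    return parts, index_entries
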

DOWNLOAD PROTOCOL
The process of the file download works in exactly the opposite way to the upload
process. The Application Core obtains a Universally Unique Identifier (UUID) for
the file to download. UUID is an identifier standard to enable distributed systems to
uniquely identify information without central control. UUIDs are included in search
results or share notifications. A notification is received when a PiCsMu user shares
a file with a specific other user. The index is downloaded from the P2P/Central
Server using the file UUID. With the information contained in the index, the
Application Core knows all cloud services from which to download the necessary
file parts in order to reconstruct the file.
Before the file can be reassembled, each file part needs to be decoded using the
corresponding algorithm from the encoding. This way, the original data is obtained.
With the number of file parts and their order known from the index, the file is
reassembled and presented to the user.
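
A matching sketch of the download side is given below: the already-decoded parts are put back in the order recorded in the index and concatenated; the decoding itself is assumed to have happened beforehand and is not shown.

def reassemble(parts_with_order):
    """parts_with_order: (order, decoded_part_data) pairs taken from the index."""
    ordered = sorted(parts_with_order, key=lambda item: item[0])
    return b"".join(data for _, data in ordered)

# Example: three decoded parts arriving out of order are restored correctly.
print(reassemble([(2, b"C"), (0, b"A"), (1, b"B")]))   # -> b"ABC"
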

INDEX
The index is a collective term that describes the information resulting from the
PiCsMu upload process. It contains all parameters necessary to locate and
reconstruct a file in the PiCsMu network. The following parts build the index (a
summarizing sketch is given after this list):

1) Basic file information: Data to identify and describe a file in the PiCsMu
system. Each file can be identified by a unique number. In addition, helpful
user information, such as the file name, description, tags, upload date, etc., is
included.
2) Fragmentation: The information on how a file has been fragmented into
multiple parts. In order to reconstruct a file, the exact order of fragmentation
and size of each file part needs to be known. Otherwise, the bytes forming the
data cannot be correctly aligned, and the file becomes corrupted and
unusable.
3) Encoding/Decoding: Each file part is encoded separately. Hence, the
individually used encoding algorithm and the embedded file type have to be
stored in the index. Without this information, a receiver of the file would not
be able to decode a file part in the correct way, resulting in loss of data.
4) Location: Knowledge of where a file part has been stored. This includes
individual locations (i.e., web address) of all cloud services used during the
upload process as well as methods to authorize against them. Without proper
authorization, PiCsMu would not be able to gain access to the file parts.
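
The four groups of information above can be pictured as a single data structure; the sketch below is an illustrative grouping under assumed field names, not PiCsMu's actual index schema.

from dataclasses import dataclass, field

@dataclass
class FilePartEntry:
    order: int        # position of the part within the original file (fragmentation)
    size: int         # size of the part in bytes (fragmentation)
    encoding: str     # algorithm / embedded file type used for this part (encoding/decoding)
    location: str     # cloud service address the part was uploaded to (location)

@dataclass
class FileIndex:
    uuid: str                                   # unique identifier of the file
    name: str                                   # basic file information
    description: str = ""
    tags: list = field(default_factory=list)
    upload_date: str = ""
    parts: list = field(default_factory=list)   # list of FilePartEntry records
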

UPLOAD AND DOWNLOAD PROTOCOL DIAGRAMS

JUSTIFICATION OF THE PROPOSED SYSTEM

The proposed system will be designed to bypass issues encountered with a centralized
file-sharing system, such as:

High network traffic from many concurrent users may rapidly lead to congestion at
the server, causing a temporary denial of service or a complete breakdown.
Further, the identity of the users (i.e., their Internet Protocol (IP) addresses) is known to
the server and can be requested by governmental authorities for prosecution in case of
sharing copyrighted or sensitive files. For the same reason, it is also easy for an
authority to shut down the entire system by disabling the central server.

The proposed system offers some major advantages over the traditional centralized
system of file sharing. These advantages include the following:

1. Cost
The overall cost of building and maintaining a peer-to-peer network is relatively
low. The setup cost is greatly reduced because there is no central configuration.
Moreover, unlike with a dedicated Windows server, there is no payment required for
each of the users on the network; payment is made only once.
2. Reliability
A peer-to-peer network is not dependent on a centralized system, which means that
the connected computers can function independently of each other. Even if one
part of the network fails, it will not disrupt the other parts; only the files hosted on
the failed part become inaccessible.
3. Implementation
It is generally easy to set up a peer-to-peer network, requiring no advanced
knowledge. Only a hub or a switch is needed for the connection, and since all
the connected computers manage themselves, little configuration is needed.
However, some specialized software may be required.
4. Scalability
P2P networking has some of the best scalability characteristics. Even if extra
clients are added, the performance of the network remains the same. When more
users share a single file, the availability of bandwidth for that file in the network
increases.
5. Administration
There is no need for any specialized network administrator since all the users are
given the right to manage their own system. They can choose what type of files they
are willing to share.
6. Server Requirement
In peer-to-peer networking, each connected computer acts as both a server and a
workstation. Therefore, there is no need to use a dedicated server. All the
authorized users can use their respective client computer to access the required files.
This can lead to saving more overhead costs.
7. Resource Sharing
In P2P networking, the resources are shared equally among all the users. The
connected devices can provide and consume resources at the same time. Peer-to-peer
networking can also be used for locating and downloading online files easily.
REFERENCES
Bram Cohen. (June 2003). “Incentives build robustness in BitTorrent”.
In: Workshop on Economics of Peer-to-Peer Systems 6.

Carliss Y. Baldwin and Kim B. Clark. (March 2000). Design Rules, Vol. 1: The
Power of Modularity. The MIT Press, first edition.

Ian Clarke, Oskar Sandberg, Matthew Toseland, and Vilhelm Verendel. (2010).
Private Communication through a Network of Trusted Connections: The
Dark Freenet.

Ian Clarke, Scott G. Miller, Theodore W. Hong, Oskar Sandberg, and Brandon
Wiley. (February 2002). Protecting Free Expression Online with Freenet.
Internet Computing, IEEE, 6(1):40–49.

John Buford, Heather Yu, and Eng Keong Lua. (2008). P2P Networking and
Applications. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA.

Michael Armbrust, Armando Fox, Rean Griffith, Anthony D. Joseph, Randy H.
Katz, Andrew Konwinski, Gunho Lee, David A. Patterson, Ariel Rabkin, and Matei
Zaharia. (February 2009). Above the Clouds: A Berkeley View of Cloud Computing.
Technical report, University of California at Berkeley, Berkeley, CA, USA.

Petar Maymounkov and David Mazières. (2002). “Kademlia: A Peer-to-Peer
Information System Based on the XOR Metric”. In: Peer-to-Peer Systems.
Ed. by Peter Druschel, Frans Kaashoek, and Antony Rowstron. Berlin,
Heidelberg: Springer Berlin Heidelberg, pp. 53–65. ISBN: 978-3-540-45748-0.

Rajkumar Buyya, James Broberg, and Andrzej M. Goscinski. (2011). Cloud
Computing Principles and Paradigms. Wiley Publishing.
Simone Cirani and Luca Veltri. (December 2009). A Multicast-Based
Bootstrap Mechanism for Self-Organizing P2P Networks. In Global
Telecommunications Conference, 2009 (GLOBECOM 2009), IEEE.

Thomas Bocek, Ela Hunt, David Hausheer, and Burkhard Stiller. (April 2008).
Fast Similarity Search in Peer-to-Peer Networks. In 11th IEEE/IFIP Network
Operations and Management Symposium (NOMS 2008), pages 240–247,
Los Alamitos, IEEE.
