125471.3
DECLARATION OF GLENN REINMAN, PH.D.
IN OPPOSITION TO MOTION FOR PRELIMINARY INJUNCTION

KENDALL BRILL & KLIEGER LLP
Richard B. Kendall (90072)
rkendall@kbkfirm.com
Laura W. Brill (195889)
lbrill@kbkfirm.com
Richard M. Simon (240530)
rsimon@kbkfirm.com
Dorian S. Berger (264424)
dberger@kbkfirm.com
10100 Santa Monica Blvd., Suite 1725
Los Angeles, CA 90067
Telephone: 310.556.2700
Facsimile: 310.556.2705

FENWICK & WEST LLP
Laurence F. Pulgram (115163)
lpulgram@fenwick.com
Jennifer L. Kelly (193416)
jkelly@fenwick.com
555 California Street, 12th Floor
San Francisco, CA 94104
Telephone: 415.875.2300
Facsimile: 415.281.1350

Attorneys for CBS Interactive Inc. and
CNET Networks, Inc.

UNITED STATES DISTRICT COURT
CENTRAL DISTRICT OF CALIFORNIA, WESTERN DIVISION

ALKIVIADES DAVID, et al.,

Plaintiffs,

v.

CBS INTERACTIVE INC., CNET
NETWORKS, INC.,

Defendants.

Case No. CV11-9437 DSF (JCx)

DECLARATION OF GLENN
REINMAN, PH.D. IN SUPPORT OF
DEFENDANTS' OPPOSITION TO
PLAINTIFFS' MOTION FOR
PRELIMINARY INJUNCTION

[Filed concurrently with Opposition to
Motion for Preliminary Injunction,
Declaration of Leana Golubchik, Ph.D.,
Declaration of Sean Murphy,
Declaration of Dorian Berger, and
Evidentiary Objections]

Hon. Dale S. Fischer

Date: February 25, 2013
Time: 1:30 p.m.
Crtrm.: 840

Case 2:11-cv-09437-DSF-JC Document 50-3 Filed 01/25/13 Page 1 of 163 Page ID
#:929

DECLARATION OF GLENN REINMAN, PH.D.
I, Glenn Reinman, Ph.D., declare as follows:
1. I have been retained by defendant CBS Interactive Inc. ("CBSI") in the
above-entitled action. I have personal knowledge of the facts set forth herein,
except as to those stated on information and belief and, as to those, I am informed
and believe them to be true. If called as a witness, I could and would competently
testify to the matters stated herein.
I. Introduction
2. My name is Glenn Reinman. I have been asked by CBSI to conduct
analysis and provide opinions regarding matters at issue in this case. This
declaration focuses, in particular, on addressing non-infringing beneficial uses of
BitTorrent technology, the availability of Plaintiffs' works online, and the
functionality of CBSI's Web site.
A. Qualifications
3. I am currently an Associate Professor in the Department of Computer
Science at the University of California, Los Angeles in Los Angeles, California
("UCLA"). I joined the faculty of UCLA in 2001 as an Assistant Professor.
4. I have a Bachelor of Science from the Massachusetts Institute of
Technology (1996), an M.S. from the University of California, San Diego (1999), and a
Ph.D. from the University of California, San Diego (2001).
5. For more than 15 years, my research has focused on computer systems,
including network design. I am an expert in the fields of computer architecture and
systems, including the areas of computer networking. I have recently published in
the International Conference on Mobile Computing and Networking
(MobiCom), one of the premier conferences in computer networking. In my
work I use peer-to-peer ("P2P") software and BitTorrent.
6. I am being compensated at a rate of $350 per hour for my work on this
case. I am also being reimbursed for reasonable and customary expenses associated
with my work and testimony in this case. No portion of my compensation is
dependent upon the results of this lawsuit or the substance of my testimony.
7. A true and correct copy of my curriculum vitae is attached as Exhibit 1;
it includes all publications I have authored within the preceding ten years.
B. Materials Reviewed
8. In preparing this declaration, I reviewed materials including the
following: Plaintiffs' Motion for a Preliminary Injunction and accompanying
exhibits, Plaintiffs' First Amended Complaint, and the order of this Court
concerning CBSI's motion to dismiss. I have also examined CBSI's publicly
available Web sites to understand their functionality and operation. In conducting
my analysis and researching my conclusions, I also relied on my education and
experience.
II. Benefits Of The BitTorrent Protocol
9. BitTorrent is a well-known protocol for providing fast, reliable
communication through distributed downloads. In addition to providing
increased download speed and reliability, BitTorrent has a number of other
benefits.
A. BitTorrent Reduces Aggregate Internet Traffic
10. A critical concern for the efficient operation of the Internet is the
amount of Internet traffic. Many requests for files need to be routed over tens of
thousands of miles of electronic, wireless and optical communications
infrastructure. Because of the amount of data transferred over the Internet,
companies must make significant and costly investments in Internet infrastructure.
The investments include the laying of undersea cables, investing in routers, and
maintaining network equipment. Reducing the amount of data that is transmitted
over the Internet is a major concern to corporations, universities, and individuals.
11. By distributing data across multiple sections of the Internet and
allowing users to download content from the closest location where the content is
stored, BitTorrent reduces the amount of aggregate data that is transferred over the
Internet. For example, in a scenario where a user wants to download a copy of the
Linux operating system, it is far more efficient to download a copy from a location
that is geographically close to the downloading user. Without P2P technology,
requests for data such as the Linux operating system often have to be made over
thousands or tens of thousands of miles of communications networks. With
BitTorrent, however, the transferring of files can be accomplished in a significantly
more efficient manner. BitTorrent technology allows users to download files from a
geographically close location, reducing the amount of overall network traffic.
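For illustration only (this sketch is not part of the evidence in this case, and the peer names and mileages below are hypothetical), the locality effect described above can be modeled in a few lines. Real BitTorrent clients select peers using several criteria; the sketch isolates only the choice of a nearby source over a distant one:

```python
def closest_peer(peer_distances):
    """Return the (peer, miles) pair with the smallest distance from
    the downloader, modeling a client that prefers nearby sources."""
    return min(peer_distances.items(), key=lambda kv: kv[1])

# Hypothetical distances (in miles) from one downloader to three
# peers that each hold a complete copy of the requested file.
peers = {
    "campus-peer": 5,
    "regional-mirror": 400,
    "overseas-server": 6000,
}

peer, miles = closest_peer(peers)
# A single central server 6,000 miles away would carry every request;
# fetching from the closest peer moves the same data only 5 miles.
print(peer, miles)
```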
12. For universities or other providers of internal networks, the use of
BitTorrent has the benefit of allowing for the efficient routing of data transfers. File
requests that might otherwise be routed outside a university's network can be
accomplished within a university's network. This reduces costs to a university,
which might otherwise need to expend resources to purchase additional bandwidth
to connect the university to a larger network. For example, if a user requests a copy
of the Linux operating system and another user within the university already has a
copy available, the file can be transferred within the university's network without
the request being made outside the university's network. Attached hereto as Exhibit
2 is a true and correct copy of an academic article, Stutzbach, D., Zappala, D., Rejaie,
R., Swarming: Scalable Content Delivery for the Masses, Technical Report UO-
TR-2004-01, University of Oregon (2004).
B. BitTorrent Provides Low Cost Distribution
13. Academics, non-profit organizations, and aspiring musicians (among
others) often have content to distribute, but lack the funding to afford the servers
and/or bandwidth to distribute their content using the Internet. For example,
Case 2:11-cv-09437-DSF-JC Document 50-3 Filed 01/25/13 Page 4 of 163 Page ID
#:932
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28

125471.3 4
DECLARATION OF GLENN REINMAN, PH.D.
IN OPPOSITION TO MOTION FOR PRELIMINARY INJUNCTION

important collaborative software projects such as Apache and Linux, which I discuss
further below, are not funded by private industry, and it is expensive to distribute
these software projects to the public. Without BitTorrent technologies, a party
would have to shoulder the costs of maintaining a server and/or purchasing network
bandwidth to provide these programs to the public. However, with BitTorrent these
programs can be shared by users without a single party bearing the costs of
distributing the software. Allowing projects that lack funding to be distributed in
mass quantities is a major benefit to the public arising from the BitTorrent protocol.
C. BitTorrent Provides A Valuable Tool To Circumvent Repressive
Regimes
14. BitTorrent technology is a valuable tool to combat censorship by
repressive regimes. Repressive countries are able to block access to specific Web
sites using firewalls and other technologies. Where information is hosted on a
single Web site or server it can be effectively blocked by governments.
15. By distributing files across many machines, BitTorrent makes the
blocking of content much more difficult to accomplish. For example, TED is a
nonprofit devoted to "Ideas Worth Spreading," which distributes videos of talks by
many of the leading thinkers in the world, from Bill Gates to Bobby McFerrin. If a
user wants to download a TED Talk which is only available on the TED Conference
Web site, a government can block access to that Web site. If the TED Talk is
available to be downloaded via the BitTorrent protocol, blocking the content is
much more challenging since the video resides on many computers and thus cannot
be effectively blocked. The inherently decentralized nature of the BitTorrent
protocol prevents single source censorship for the very same reasons that the
protocol is extremely reliable: redundant, distributed downloads where any system
on the network with the requested data may act as a server.
16. As one example, in October 2012, the New Statesman magazine, in
response to Chinese censorship of its magazine (which contained articles on
dissident figures such as Ai Weiwei), made its magazine available for download via the
BitTorrent protocol. The use of BitTorrent was an effective means of disseminating
information past government censors. Attached hereto as Exhibit 3 is a true and
correct copy of a magazine article, Helen Lewis, Taking on the Great Firewall of
China, New Statesman (October 18, 2012).
III. BitTorrent's Non-Infringing Uses
BitTorrent is agnostic to file type, meaning that it can be, and is, used to
transfer files of all kinds. The following are among the well-known, documented,
and prominent uses of BitTorrent.
A. Distribution Of Software
17. BitTorrent is used to distribute widely used, complex, and sophisticated
software such as Apache, Linux, and OpenOffice that is not owned or funded by a
corporation. The Apache Software Foundation is a prominent non-profit developer
of software for the public good, with over a hundred current projects under way.
These include the well-known Apache Web server, which is used by many
corporations and universities to operate their Web sites. Linux is an open source
computer operating system, first released in 1991, which originally competed with
Windows as the operating system for Intel x86 PCs. It has now been ported to more
computer hardware platforms than any other operating system, and currently drives
most of the fastest supercomputers in the world, as well as Google's Android
operating system, which is used on mobile phones and tablet computers. OpenOffice
is a free software application for editing and creating documents and spreadsheets.
All of these programs, and many others, are distributed via the BitTorrent protocol.
Attached hereto as Exhibit 4 is a true and correct copy of the OpenOffice.org site
containing links to download its software using BitTorrent, available at
http://www.openoffice.org/distribution/p2p/magnet.html (last visited January 10,
2013).
18. Software products such as the Apache Web server, Linux, and OpenOffice
comprise many large files that would require substantial resources to
transfer between users. Reasons for using BitTorrent to distribute software include
(1) the low cost of distributing software, particularly where there is no central
company standing to profit from the distribution and thus willing to shoulder the
cost of making such free software available; and (2) the reliability of using
BitTorrent, particularly where the files are very large and downloading from one
source could present a high likelihood of the download failing.
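The reliability point can be made concrete. In the BitTorrent protocol, a file is divided into fixed-size pieces, and the torrent metadata carries a SHA-1 hash for each piece; a client verifies each piece on arrival and re-requests only the pieces that fail, rather than restarting the whole download. The following sketch (offered for illustration, with the piece size shrunk to a few bytes; real torrents use pieces of, e.g., 256 KiB) shows the per-piece verification:

```python
import hashlib

PIECE_SIZE = 4  # bytes, for illustration only

def piece_hashes(data: bytes) -> list:
    """Compute the per-piece SHA-1 digests that a .torrent file's
    'pieces' field stores for the whole file."""
    return [hashlib.sha1(data[i:i + PIECE_SIZE]).digest()
            for i in range(0, len(data), PIECE_SIZE)]

def verify_piece(piece: bytes, expected_digest: bytes) -> bool:
    """A client keeps a received piece only if its hash matches; a
    corrupted piece is discarded and re-requested from any peer."""
    return hashlib.sha1(piece).digest() == expected_digest

data = b"free software distribution"
expected = piece_hashes(data)
pieces = [data[i:i + PIECE_SIZE] for i in range(0, len(data), PIECE_SIZE)]
print(all(verify_piece(p, h) for p, h in zip(pieces, expected)))  # True
```

Because verification happens piece by piece, a failure affects only one small piece, which is why very large downloads complete reliably even when individual transfers fail.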
B. Distribution of Large Datasets
19. BitTorrent is used to distribute large datasets, particularly for academic
research. Distributing datasets through BitTorrent can be done at limited cost. In
addition, the reliability of BitTorrent makes it more likely that very large
datasets will be downloaded successfully. For example, Harvard University's
Personal Genome Project has made genomic data available for download using the
BitTorrent protocol. Attached hereto as Exhibit 5 is a true and correct copy of an
article announcing that genome data files will be available via BitTorrent, Annalee
Newitz, Download Your Genomes on BitTorrent, io9 (March 11, 2009),
available at: http://io9.com/5168176/file+sharing-your-genome-with-the-world (last
visited January 10, 2013).
20. Research studies have found that the BitTorrent protocol achieves
speeds four times faster than non-P2P technologies when distributing large
biological datasets. Attached hereto as Exhibit 6 is a true and correct copy of
an academic journal article, Sikander Azam, Shamshad Zarina, Distribution of
biological databases over low-bandwidth networks, Bioinformation 8(5): 239-242
(2012).
21. BitTorrent provides an important tool for the distribution of scientific
datasets. Biomedical data, seismic data, population data, and climate data have all
been made available to researchers using BitTorrent technology. Attached hereto as
Exhibit 7 is a true and correct copy of the academic journal article, Morgan Langille
and Jonathan Eisen, BioTorrents: A File Sharing Service for Scientific Data, PLoS
ONE 5(4): e10071.
C. Software Patches
22. BitTorrent is a valuable tool for distributing software patches.
Software patches are pieces of software that are designed to fix problems, including
security threats, for an existing piece of software, or to update a piece of software
that is outdated. The rapid and efficient deployment of software patches can be
extremely important to protecting users of software and has been an ongoing
challenge to academics and private industry. BitTorrent, by minimizing the amount
of network traffic and allowing for the speedy download of data, is a valuable tool
for rapidly deploying software patches, particularly where there might be hundreds
or thousands of computers that require an update of software. For example,
Blizzard Entertainment, Inc. ("Blizzard"), which makes a variety of computer
games, uses BitTorrent to distribute software updates to its millions of users.
BitTorrent allows Blizzard to quickly distribute software updates. Attached hereto
as Exhibit 8 is a true and correct copy of the Blizzard Frequently Asked Questions
(addressing Blizzard's use of BitTorrent), available at: http://us.blizzard.com/en-
us/company/about/legal-faq.html (last visited January 10, 2013).
D. Online Courses
23. BitTorrent is a valuable tool for making online courses available to a
wide audience, quickly, reliably and at a low cost. In addition, using BitTorrent
allows academic institutions to make courses available in regions where they might
otherwise be subject to censorship. Among the institutions that use BitTorrent to
distribute their courses are Stanford University, Khan Academy, and the University
of Missouri. Attached hereto as Exhibit 9 is a true and correct copy of the article,
Dan Stober, Free Online Engineering Courses Prove a Big Hit, Stanford Report
(October 15, 2008), available at:
http://news.stanford.edu/news/2008/october15/online-101508.html (last visited
January 10, 2013).
24. The TED Conference has made many of its talks available for
download via the BitTorrent protocol. Making TED Talks available via BitTorrent
allows them to be widely distributed and reduces the costs to TED, as it no longer
has to purchase the bandwidth to distribute its content to each user who wants to
download a copy of the talk. Attached hereto as Exhibit 10 is a true and correct
copy of the TED Conference article, TEDTalks BitTorrent app, (September 27,
2010), available at: http://blog.ted.com/2010/09/27/new-tedtalks-bittorrent-app/ (last
visited January 10, 2013).
E. Music And Video Content
25. Many artists have chosen BitTorrent as their preferred means of
distributing their music and video content. BitTorrent allows artists to distribute
content widely without having to incur the significant costs of finding a distribution
channel and paying for the bandwidth when users want to download a recording.
Content producers that desire to make their works freely available often designate
their works as available for copying through a Creative Commons License. Some
examples of artists who have chosen BitTorrent to freely distribute their songs
(under a Creative Commons License) include the bands Nine Inch Nails and
LoveDrug, whose music appears in video reviews on CBSI to which Plaintiffs have
pointed as purported encouragement of infringement. In fact, both of these bands
have chosen to distribute their works using the BitTorrent protocol.
26. Nine Inch Nails uses BitTorrent to distribute its music. Further, Nine
Inch Nails distributes its songs on BitTorrent free of charge. Attached hereto as
Exhibit 11 is a true and correct copy of the article, Peter Nowak, Nine Inch Nails
Releases Album On BitTorrent, CBC News (March 3, 2008), available at:
http://www.cbc.ca/news/story/2008/03/03/tech-nineinchnails.html (last visited
January 4, 2013). Attached hereto as Exhibit 12 is a true and correct copy of the
Nine Inch Nails Web site, available at
http://dl.nin.com/theslip/download?token=z2jn4ptk&submit.x=70&submit.y=18.
This Web site (associated with Nine Inch Nails) offers torrent files that enable users
to download Nine Inch Nails songs.
27. The band LoveDrug has released its music on BitTorrent through
Bands Under The Radar (a company that seeks to get exposure for small and up-
and-coming bands). Attached hereto as Exhibit 13 is a true and correct copy of the
article, BUTR [Bands Under The Radar] Debuts In BitTorrent App Store:
Download Free Music From 13 Artists, (June 14, 2011), available at:
http://www.bandsundertheradar.com/2011/06/ (last visited January 8, 2013).
IV. Examination Of The Availability Of Plaintiffs' Works Online
28. I conducted a search of the twenty-two works that Plaintiffs have
identified in their preliminary injunction motion to see if they were available on
YouTube.com. My investigation found that these works were all available for free
on YouTube and had been available on YouTube.com for at least a year.

Artist | Song Title | Date Uploaded to YouTube
Jalil Hutchins | It's all in Mister Magic's Wand | November 23, 2007
Jalil Hutchins | Yours for a Night | October 4, 2008
Jalil Hutchins | Rap Machine | January 8, 2008
Jalil Hutchins | Magic's Wand | November 8, 2007
Jalil Hutchins | The Haunted House of Rock | October 2, 2009
Jalil Hutchins | Funky Beat | March 2, 2008
Jalil Hutchins | Echo Scratch | November 5, 2008
Jalil Hutchins | One Love | May 30, 2008
Jalil Hutchins | I'm a Ho | September 19, 2008
Jalil Hutchins | Fugitive | November 8, 2007
Jalil Hutchins | The Good Part | February 15, 2011
Jalil Hutchins | Nasty Lady | June 13, 2010
Jalil Hutchins | Last Night (I had a long talk with myself) | January 7, 2012
Jalil Hutchins | Friends | December 23, 2006
Jalil Hutchins | Five Minutes of Funk | February 14, 2008
Douglas Davis | La Di Da Di | May 4, 2009
Douglas Davis | The Show | June 3, 2008
Douglas Davis | Play This Only At Night | April 1, 2008
Douglas Davis | All the Way to Heaven | July 19, 2009
Douglas Davis | Chill Will: Cuttin' It Up | February 3, 2010
Douglas Davis | Leave it to the Cut Professor | February 1, 2010
Douglas Davis | Lovin Every Minute of It | June 30, 2008

V. Plaintiffs' Works Comprise A Small Fraction Of The Music Available
29. I conducted a review to determine whether the twenty-two works asserted by
the Plaintiffs are a significant portion of the market for musical recordings and
compositions. The twenty-two works at issue represent a de minimis part of the
music market. On Amazon, the albums for Douglas Davis and Jalil Hutchins have
sales ranks of 34,934 and 42,231, respectively. Attached hereto as Exhibit 14 is a
true and correct copy of the sales rankings of the Douglas Davis and Jalil Hutchins
works from Amazon.com (January 10, 2013).
VI. PirateBay Torrent Tracking Reports Are Unreliable
30. I have reviewed the PirateBay torrent tracking reports that are attached
to Plaintiffs' motion for a preliminary injunction. Attached hereto as Exhibit 15 is a
true and correct copy of the Declaration of Christian A. Anstett in Support of
Plaintiffs' Motion for a Preliminary Injunction at 42-44. It is my understanding that
screenshots of torrent tracking reports from the PirateBay Web site are offered as
evidence of the direct infringement of Plaintiffs' works. My review of the PirateBay
materials identified in the motion for a preliminary injunction (including visiting the
PirateBay Web site repeatedly over several weeks) has found that the PirateBay
materials cited by Plaintiffs are likely to be unreliable, inaccurate, and out-of-date.
31. First, the PirateBay Web site provides no documentation or
substantiation regarding how PirateBay tracks torrent files. Because of the lack of
documentation, it is not possible to determine the accuracy of the PirateBay tracking
reports.
32. Second, it is likely that the information on the PirateBay Web site is
out-of-date. The torrent tracking reports on the PirateBay Web site do not contain
any information identifying the date on which the tracking reports were run.
Torrents for both artists' works were uploaded in 2008. The PirateBay torrent
tracking reports might date from that period.
33. Third, I have visited the PirateBay Web site containing the torrent
tracking reports repeatedly over several weeks. The number of alleged "seeders"
and "leechers" has been constant during that time. A "seeder" is a person offering
the file for sharing, and a "leecher" is a person who might be downloading the file.
The fact that the number of "seeders" and "leechers" has remained unchanged is
indicative that the PirateBay reports do not reflect actual ongoing downloading of
the works at issue in this case.
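The inference in the preceding paragraph can be expressed as a simple check: if repeated observations of a tracker page report identical "seeder" and "leecher" counts, the page is unlikely to reflect live activity. The sketch below is offered for illustration only; the dates and counts are hypothetical and are not the actual PirateBay figures:

```python
def looks_stale(snapshots):
    """Return True when every snapshot reports the same
    (seeders, leechers) pair, i.e., the counts never moved."""
    observed = {(s["seeders"], s["leechers"]) for s in snapshots}
    return len(observed) == 1

# Hypothetical weekly observations of one torrent's tracker page.
snapshots = [
    {"date": "2013-01-02", "seeders": 3, "leechers": 1},
    {"date": "2013-01-09", "seeders": 3, "leechers": 1},
    {"date": "2013-01-16", "seeders": 3, "leechers": 1},
]
print(looks_stale(snapshots))  # True: unchanged counts suggest a stale report
```

On a live swarm, seeder and leecher counts fluctuate as peers join and leave, so repeated identical counts over weeks are consistent with a cached or outdated report rather than ongoing transfers.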
34. Fourth, the PirateBay tracking reports do not specify the location of the
seeders or the leechers. Accordingly, it is not possible to determine from these
reports that any alleged acts of copying have occurred within the United States.
35. Fifth, the PirateBay tracking reports do not provide any indication that
any alleged files were downloaded using client software obtained by users of CBSI
Web sites.
VII. CBSI Is Not A Distributor Of BitTorrent
36. CBSI does not host or develop BitTorrent software. CBSI only
provides links to external sites where BitTorrent clients can be downloaded. In the
computer industry the term "distributor" refers to a party that hosts, develops, or
sells physical copies of software. For example, a distributor would be a company
like BitTorrent, Inc., which develops BitTorrent software. Merely providing links and
descriptions of software does not qualify CBSI as a "distributor" as the term is used in
the software industry. Nor is CBSI a "distributor" as the term is used in business.
Attached hereto as Exhibit 16 is a true and correct copy of the definition of
"Distributor" from the Dictionary of Business, Taylor & Francis (1998) at 86;
attached hereto as Exhibit 17 is a true and correct copy of the definition of




EXHIBIT 1
Glenn D. Reinman
UCLA Computer Science Department
reinman@cs.ucla.edu
Research Interests
Computer architecture, augmented reality, parallel programming, compiler optimizations, and systems.
Education
University of California - San Diego (San Diego, CA)
Doctor of Philosophy degree in Computer Science, June 2001
Advisor: Professor Brad Calder.
Master of Science degree in Computer Science, March 1999.
Massachusetts Institute of Technology (Cambridge, MA)
Bachelor of Science degree in Computer Science and Engineering, June 1996.
Recent Research Highlights
Accelerator-Rich Chip Multiprocessors (CMPs): energy-efficient, high-performance SoC platforms that
feature both application-specific accelerators and heterogeneous cores.
RF Interconnect: a promising alternative interconnect for both on-chip and off-chip communication for
future CMPs. It can be adaptively tuned to the communication needs of an individual application. We have
also explored wireless RF interconnect and RF-integrated memory technology.
Mobile Augmented Reality: a sensing and guidance framework for real-time critical situations. We are
leveraging our work on automated planning engines and our work to accelerate computer vision as the basis for
this line of research.
Real-Time Physics: we have proposed a novel physics processor and explored dynamically trading accuracy
for improved performance while maintaining believability.
Dynamically Leveraging Statically Partitioned Resources: CMPs statically partition resources for
scalability and performance/energy efficiency. We look at dynamically composing these static resources into
more powerful components.
Work Experience
University of California Los Angeles (Los Angeles, CA)
Assistant Professor (2001-2007)
Associate Professor (2007-Present)
Expert Witness Experience
Available upon request
University of California - San Diego, Research Assistant (San Diego, CA)
Implemented a profile-based approach to classifying loads for memory renaming, value prediction, and
dependence prediction using SimpleScalar and ATOM. Created an aggressive fetch unit using a two-level
branch prediction structure called an FTB. Worked with SimpleScalar to implement a hybrid load prediction
mechanism, combining renaming, value prediction, address prediction, and dependence prediction.
Explored importance of confidence in value prediction. Used C and C++. (Fall 1997-Spring 2001)
Implemented a contention resolution scheme for embarrassingly parallel applications (such as the DOT
project at the San Diego Supercomputing Center). Worked in MPICH and C. (Spring 1997-Fall 1997)
COMPAQ (now HP) - Western Research Lab, Summer Internship 1999 (Palo Alto, CA)
Expanded the CACTI cache compiler (CACTI 2.0). Enhancements include fully associative cache model, power
modeling, multiple port models, transistor tuning, and tag path balancing.
Intel Corporation - Microprocessor Research Lab, Summer Intern 1998 (Hillsboro, OR)
Studied the viability of caching state from the branch predictor, TLB, and BTB in the second level data cache.
Modified SimpleScalar to use ITR traces for Win95 applications for initial predictability experiments. Used out-
of-order simulation with SimpleScalar to determine the effectiveness of this technique.
Teaching Experience
University of California Los Angeles, Assistant Professor (Los Angeles, CA)
Computer Systems Architecture (CSM151B - Upper Division Undergraduate class) - I have taught this
class since Winter 2003, covering instruction set architecture design, ALU design, processor datapath and
control design, pipelining, caches, virtual memory, IO devices, multithreading, multiprocessors, and
multicore architectures.
Advanced Topics in Microprocessor Design (CS259 - Graduate class) - I introduced this class in Spring
2002, covering cutting edge research in general purpose microarchitecture. The processor pipeline is
explored in detail, with attention to performance, complexity, cycle time, power, and area. Recent real world
architectures are used for illustration, along with on-going research efforts in topics that include multicore
processors, NoC design, cache coherence mechanisms, GPU design and programming, branch prediction,
load speculation, simultaneous multithreading, cache design/prefetching, register file design, and various
techniques to combat processor scaling trends. Introduction to cycle-accurate microprocessor simulation.
Lab intensive class designed to give students practical experience with simulation techniques and tricks. On-
going work in architecture and compilers is discussed during class and then integrated into lab assignments
using the simulation infrastructure.
Microprocessor Simulation (CS259 - Graduate class) - I introduced this class in Winter 2003, providing a
practical application of my Advanced Topics class: students make use of execution-driven cycle-accurate
processor simulators.
Parallel and Distributed Systems (CS133 - Upper Division Undergraduate class) - I have completely
reorganized this class in Winter 2007 to focus on programming in OpenMP, POSIX threads, MPI, and
CUDA for both shared and distributed memory multiprocessors. The class also has a component on next
generation chip multiprocessors, including design tradeoffs and un-core optimizations.
Computer Organization (CS33 - Lower Division Undergraduate class) - I completely reorganized this
class in Fall 2009 to make it a gateway systems class using low-level C programming and x86 assembly. It is
a practical class, with several labs including an introduction to parallel programming with CUDA as the
demonstration vehicle.
Computer Science Seminar Series (CS201 - Graduate class)
University of California - San Diego, Teaching Assistant (San Diego, CA)
Teaching Assistant - taught discussion sections for classes on data structures, artificial intelligence, and
compilers. Recipient of 1996/97 TA Excellence Award.
Publications
Refereed Conference and Workshop Publications:
1. Hao Wu, Lan Nan, Sai-Wang Tam, Hsieh-Hung Hsieh, Chewpu Jou, Glenn Reinman, Jason Cong, and
Mau-Chung Frank Chang. A 60GHz On-Chip RF-Interconnect with λ/4 Coupler for 5Gbps Bi-
Directional Communication and Multi-Drop Arbitration. IEEE Custom Integrated Circuits Conference
(CICC), Sep 2012
2. Yu-Ting Chen, Jason Cong, Hui Huang, Chunyue Liu, Raghu Prabhakar and Glenn Reinman. Static and
Dynamic Co-Optimizations for Blocks Mapping in Hybrid Caches. International Symposium on Low Power
Electronics and Design (ISLPED), Jul/Aug 2012.
3. Jason Cong, Mohammad Ali Ghodrat, Michael Gill, Beayna Grigorian and Glenn Reinman. CHARM: A
Composable Heterogeneous Accelerator-Rich Microprocessor. International Symposium on Low Power
Electronics and Design (ISLPED), Jul/Aug 2012.
4. Jason Cong, Mohammad Ali Ghodrat, Michael Gill, Chunyue Liu and Glenn Reinman. BiN: A Buffer-
in-NUCA Scheme for Accelerator-Rich CMPs. International Symposium on Low Power Electronics and Design
(ISLPED), Jul/Aug 2012.
Exhibit 1 Page15
5. Jason Cong, Mohammad Ali Ghodrat, Michael Gill, Beayna Grigorian, and Glenn Reinman.
Accelerator-Rich Architecture for Power-Constrained CMPs. Dark Silicon Workshop (DaSi - held in
conjunction with ISCA), Jun 2012
6. Jason Cong, Mohammad Ali Ghodrat, Michael Gill, Beayna Grigorian, and Glenn Reinman.
Architecture Support for Accelerator-Rich CMPs. Design Automation Conference (DAC), Jun 2012
7. Yu-Ting Chen, Jason Cong, Hui Huang, Bin Liu, Chunyue Liu, Miodrag Potkonjak and Glenn Reinman.
Dynamically Reconfigurable Hybrid Cache: An Energy-Efficient Last-Level Cache Design. Conference on
Design, Automation, and Test in Europe (DATE), Mar 2012.
8. Yangkyo Kim, Gyungsu Byun, Adrian Tang, Jason Cong, Glenn Reinman, and M. F. Chang. An
8Gb/s/pin 4pJ/b/pin Single-T-Line Dual (Base+RF) Band Simultaneous Bidirectional Mobile Memory
I/O Interface with Inter-Channel Interference Suppression. International Solid-State Circuits Conference
(ISSCC), Feb 2012.
9. Jason Cong, Mohammad Ali Ghodrat, Michael Gill, Hui Huang, Bin Liu, Raghu Prabhakar, Glenn
Reinman, and Marco Vitanza. Compilation and Architecture Support for Customized Vector Instruction
Extension. Asia and South Pacific Design Automation Conference (ASP-DAC), Jan/Feb 2012.
10. Mubbasir Kapadia, Matthew Wang, Glenn Reinman, and Petros Faloutsos. Improved Benchmarking for
Crowd Simulations. Motion In Games (MIG), Nov 2011
11. Kanit Therdsteerasukdi, Gyungsu Byun, Jeremy Ir, Glenn Reinman, Jason Cong, and Frank Chang. The
DIMM Tree Architecture: A High Bandwidth and Scalable Memory System. IEEE International Conference
on Computer Design (ICCD), Oct 2011.
12. Yu-Ting Chen, Jason Cong and Glenn Reinman. HC-Sim: A Fast and Exact L1 Cache Simulator with
Scratchpad Memory Co-simulation Support. International Conference on Hardware/Software Co-Design and
System Synthesis (CODES+ISSS), Oct 2011.
13. Beayna Grigorian, Marco Vitanza, Jason Cong, and Glenn Reinman. Accelerating Vision and Navigation
Applications on a Customizable Platform. International Conference on Application-specific Systems, Architectures
and Processors (ASAP), Sep 2011.
14. Mubbasir Kapadia, Matthew Wang, Shawn Singh, Glenn Reinman, and Petros Faloutsos. Scenario
Space: Characterizing Coverage, Quality, and Failure of Steering Algorithms. Symposium on Computer
Animation (SCA), Aug 2011.
15. Jason Cong, Karthik Gururaj, Hui Huang, Chunyue Liu, Glenn Reinman and Yi Zou. An Energy-
Efficient Adaptive Hybrid Cache. International Symposium on Low Power Electronics and Design (ISLPED),
Aug 2011.
16. Mubbasir Kapadia, Shawn Singh, Glenn Reinman, and Petros Faloutsos. Multi-Actor Planning for
Directable Simulations. Workshop on Digital Media and Digital Content Management, May 2011.
17. Gyungsu Byun, Yangkyo Kim, Jongsun Kim, Sai-Wang Tam, Jason Cong, Glenn Reinman, and M. F.
Chang. An 8.4Gb/s 2.5pJ/b Mobile Memory I/O Interface Using Bi-directional and Simultaneous Dual
(Base+RF)-Band Signaling. International Solid-State Circuits Conference (ISSCC), Feb 2011.
18. Jason Cong, Mohammadali Ghodrat, Michael Gill, Chunyue Liu, Glenn Reinman and Yi Zou. AXR-
CMP: Architecture Support in Accelerator-Rich CMPs. Workshop on SoC Architecture, Accelerators and
Workloads (SAW-2), Feb 2011.
19. Shawn Singh, Mubbasir Kapadia, Billy Hewlett, Glenn Reinman and Petros Faloutsos. A Modular
Framework for Adaptive Agent-Based Steering. Symposium on Interactive 3D Graphics and Games (I3D), Feb
2011.
20. Zoran Budimlic, Alex Bui, Jason Cong, Glenn Reinman, Vivek Sarkar. Modeling and Mapping for
Customizable Domain-Specific Computing. Workshop on Concurrency for the Application
Programmer (CAP), co-located with SPLASH 2010, Oct 2010.
21. Jason Cong, Chunyue Liu, and Glenn Reinman. ACES: Application-specific cycle elimination and
splitting for deadlock-free routing on irregular network-on-chip. Design Automation Conference (DAC), Jun
2010.
22. Shawn Singh, Mubbasir Kapadia, Petros Faloutsos, and Glenn Reinman. On the Interface Between
Steering and Animation for Autonomous Characters. Workshop on Crowd Simulation held in conjunction with
the 23
rd
Annual Conference on Computer Animation and Social Agents, May 2010.
23. Shawn Singh, Mubbasir Kapadia, Glenn Reinman and Petros Faloutsos. An Open Framework for
Developing, Evaluating, and Sharing Steering Algorithms. Motion In Games (MIG), Nov 2009.
24. Suk-Bok Lee, Sai-Wang Tam, Ioannis Pefkianakis, Songwu Lu, M. Frank Chang, Chuanxiong Guo,
Glenn Reinman, Chunyi Peng, Mishali Naik, Lixia Zhang, and Jason Cong. A Scalable Micro Wireless
Interconnect Structure for CMPs. International Conference on Mobile Computing and Networking, Sept 2009.
25. Mubbasir Kapadia, Shawn Singh, Brian Allen, Glenn Reinman, and Petros Faloutsos. An Interactive
Framework for Specifying and Detecting Steering Behaviors. Symposium on Computer Animation (SCA),
Aug 2009.
26. Jason Cong, M. Frank Chang, Glenn Reinman, and Sai-Wang Tam, Multiband RF-Interconnect for
Reconfigurable Network-on-Chip Communications, System Level Interconnect Prediction (SLIP 2009), July
2009.
27. M. Frank Chang, Jason Cong, Adam Kaplan, Mishali Naik, Jagannath Premkumar, Glenn Reinman, Eran
Socher, and Sai-Wang Tam. Power Reduction of CMP Communication Networks via RF-Interconnects.
International Symposium on Microarchitecture (MICRO), Nov 2008.
28. Jason Cong, Karthik Gururaj, Guoling Han, Adam Kaplan, Mishali Naik, and Glenn Reinman. MC-Sim:
An Efficient Simulation Tool for MPSoC Designs. International Conference on Computer-Aided Design
(ICCAD), Nov 2008.
29. Shawn Singh, Mubbasir Kapadia, Mishali Naik, Petros Faloutsos, and Glenn Reinman. Watch Out! A
Framework for Evaluating Steering Behaviors. Proceedings of Motion In Games (MIG), June 2008.
30. M. Frank Chang, Eran Socher, Sai-Wang Tam, Jason Cong, and Glenn Reinman. RF Interconnects for
Communications On-Chip. International Symposium on Physical Design (ISPD), Apr 2008.
31. M. Frank Chang, Jason Cong, Adam Kaplan, Mishali Naik, Glenn Reinman, Eran Socher, and Sai-Wang
Tam. CMP Network-on-Chip Overlaid With Multi-Band RF-Interconnect. International Symposium on
High-Performance Computer Architecture (HPCA), Feb 2008. BEST PAPER AWARD
32. Tom Yeh, Petros Faloutsos, Sanjay Patel, Milos Ercegovac, and Glenn Reinman. The Art of Deception:
Adaptive Precision Reduction for Area Efficient Physics Acceleration. International Symposium on
Microarchitecture (MICRO), Dec 2007.
33. Yongxiang Liu, Yuchun Ma, Eren Kursun, Jason Cong, and Glenn Reinman. Fine Grain 3D Integration
for Microarchitecture Design Through Cube Packing Exploration. IEEE International Conference on
Computer Design (ICCD), Oct 2007.
34. Yongxiang Liu, Yuchun Ma, Eren Kursun, Jason Cong, and Glenn Reinman. 3D Architecture Modeling
and Exploration. VLSI/ULSI Multilevel Interconnection Conference, Sept 2007.
35. Tom Yeh, Petros Faloutsos, Sanjay Patel, and Glenn Reinman. ParallAX: An Architecture for Real-Time
Physics. In 34th Annual International Symposium on Computer Architecture (ISCA), June 2007
36. Yuchun Ma, Zhuoyuan Li, Jason Cong, Xianlong Hong, Glenn Reinman, Sheqin Dong, and Qian Zhou.
Micro-architecture Pipelining Optimization with Throughput-Aware Floorplanning. 12th Asia and South
Pacific Design Automation Conference (ASPDAC), Jan 2007.
37. Vasily G. Moshnyaga, Hua Vo, Glenn Reinman, and Miodrag Potkonjak. Reducing Energy of
DRAM/Flash Memory System by OS-Controlled Data Refresh. In International Symposium on Circuits and
Systems (ISCAS), May 2007.
38. Anahita Shayesteh, Glenn Reinman, Norm Jouppi, Suleyman Sair, and Tim Sherwood. Improving the
Performance and Power Efficiency of Shared Helpers in CMPs. International Conference on Compilers,
Architecture, and Synthesis for Embedded Systems (CASES), Oct 2006.
39. Vasily Moshnyaga, Hoa Vo, Glenn Reinman, and Miodrag Potkonjak. Handheld System Energy
Reduction by OS-Driven Refresh. Power and Timing Modeling, Optimization, and Simulation (PATMOS),
September 2006.
40. Tom Yeh, Petros Faloutsos, and Glenn Reinman. Enabling Real-Time Physics Simulation in Future
Interactive Entertainment. ACM SIGGRAPH Video Game Symposium, Aug 2006.
41. Jason Cong, Ashok Jagannathan, Yuchun Ma, Glenn Reinman, Jie Wei, and Yan Zhang. An Automated
Design Flow for 3D Microarchitecture Evaluation. 11th Asia and South Pacific Design Automation Conference
(ASPDAC), Jan 2006.
42. Anahita Shayesteh, Eren Kursun, Tim Sherwood, Suleyman Sair, and Glenn Reinman. Reducing the
Latency and Area Cost of Core Swapping through Shared Helper Engines. IEEE International Conference
on Computer Design (ICCD), Oct 2005.
43. Yongxiang Liu, Gokhan Memik, and Glenn Reinman. Reducing the Energy of Speculative Instruction
Schedulers. IEEE International Conference on Computer Design (ICCD), Oct 2005.
44. Tom Yeh and Glenn Reinman. Fast and Fair: Data-stream Quality of Service. International Conference on
Compilers, Architecture, and Synthesis for Embedded Systems (CASES), Sep 2005.
45. Jason Cong, Ashok Jagannathan, Glenn Reinman, and Yuval Tamir. Understanding The Energy
Efficiency of SMT and CMP with Multi-clustering. IEEE/ACM International Symposium on Low Power
Electronics and Design (ISLPED), Aug 2005.
46. Yongxiang Liu, Anahita Shayesteh, Gokhan Memik, and Glenn Reinman. Tornado Warning: the Perils
of Selective Replay in Multithreaded Processors. International Conference on Supercomputing (ICS), June 2005.
47. Jason Cong, Yiping Fan, Guoling Han, Ashok Jagannathan, Glenn Reinman, and Zhiru Zhang.
Instruction Set Extension with Shadow Registers for Configurable Processors. 13th ACM International
Symposium on Field-Programmable Gate Arrays, Feb 2005.
48. Ashok Jagannathan, Hannah Honghua Yang, Kris Konigsfeld, Dan Milliron, Mosur Mohan, Michail
Romesis, Glenn Reinman, and Jason Cong. Microarchitecture Evaluation with Floorplanning and
Interconnect Pipelining. Asia South Pacific Design Automation Conference (ASPDAC), Jan 2005.
49. Eren Kursun, Glenn Reinman, Suleyman Sair, Anahita Shayesteh, and Tim Sherwood. Low-Overhead
Core Swapping for Thermal Management. Workshop on Power-Aware Computer Systems (PACS'04) held in
conjunction with the 37
th
Annual International Symposium on Microarchitecture, December 2004.
50. Yongxiang Liu, Anahita Shayesteh, Gokhan Memik, and Glenn Reinman. The Calm Before the Storm:
Reducing Replays in the Cyclone Scheduler. IBM T.J. Watson Conference on Interaction between Architecture,
Circuits, and Compilers, Oct 2004.
51. Jason Cong, Ashok Jagannathan, Glenn Reinman, and Yuval Tamir. A Communication-Centric
Approach to Instruction Steering for Future Clustered Processors. IBM T.J. Watson Conference on
Interaction between Architecture, Circuits, and Compilers, Oct 2004.
52. Yongxiang Liu, Anahita Shayesteh, Gokhan Memik, and Glenn Reinman. Scaling the Issue Window
with Look-Ahead Latency Prediction. International Conference on Supercomputing (ICS), June 2004.
53. Fang-Chung Chen, Foad Dabiri, Roozbeh Jafari, Eren Kursun, Vijay Raghunathan, Thomas
Schoellhammer, Doug Sievers, Deborah Estrin, Glenn Reinman, Majid Sarrafzadeh, Mani Srivastava,
Ben Wu, Yang Yang. Reconfigurable Fabric: An enabling technology for pervasive medical monitoring.
Communication Networks and Distributed Systems Modeling and Simulation Conference, Jan 2004.
54. Jason Cong, Ashok Jagannathan, Glenn Reinman, and Michail Romesis. Microarchitecture Evaluation
with Physical Planning. Design Automation Conference (DAC), 2003.
55. Gokhan Memik, Glenn Reinman, and William H. Mangione-Smith. Reducing Energy and Delay Using
Efficient Victim Caches. IEEE/ACM International Symposium on Low Power Electronics and Design
(ISLPED), Aug. 2003.
56. Gokhan Memik, Glenn Reinman, and William H. Mangione-Smith. Just Say No: Benefits of Early
Cache Miss Determination. In the proceedings of the 9th IEEE/ACM International Symposium on High
Performance Computer Architecture (HPCA), Feb. 2003.
57. Glenn Reinman, Brad Calder and Todd Austin. High Performance and Energy Efficient Serial Prefetch
Architecture. In the proceedings of the 4th International Symposium on High Performance Computing, May 2002, (c)
Springer-Verlag.
58. Glenn Reinman, Brad Calder, and Todd Austin. Fetch Directed Instruction Prefetching. In 32nd
International Symposium on Microarchitecture (MICRO), November 1999.
59. Glenn Reinman, Brad Calder, Dean Tullsen, Gary Tyson, and Todd Austin. Classifying Load and Store
Instructions for Memory Renaming. In ACM International Conference on Supercomputing (ICS), June 1999.
60. Glenn Reinman, Todd Austin, and Brad Calder. A Scalable Front-End Architecture for Fast Instruction
Delivery. In 26th Annual International Symposium on Computer Architecture (ISCA), May 1999.
61. Brad Calder, Glenn Reinman, and Dean Tullsen. Selective Value Prediction. In 26th Annual International
Symposium on Computer Architecture (ISCA), May 1999.
62. Glenn Reinman and Brad Calder. Predictive Techniques for Aggressive Load Speculation. In 31st Annual
International Symposium on Microarchitecture (MICRO), December 1998.
Refereed Journal Publications:
63. Kanit Therdsteerasukdi, Gyung-Su Byun, Jeremy Ir, Glenn Reinman, Jason Cong, and M.F. Chang.
Utilizing Radio Frequency Interconnect for a Many-DIMM DRAM System. IEEE Journal on Emerging
and Selected Topics in Circuits and Systems, 2012.
64. Mubbasir Kapadia, Shawn Singh, Wiliam Hewlett, Glenn Reinman, and Petros Faloutsos. Parallelized
Egocentric Fields for Autonomous Navigation. The Visual Computer, 2012.
65. Yanghyo Kim, Sai-Wang Tam, Gyung-Su Byun, Hao Wu, Lan Nan, Glenn Reinman, Jason Cong, and
Mau-Chung Frank Chang. Analysis of Non-Coherent ASK Modulation Based RF-Interconnect for
Memory Interface. IEEE Journal on Emerging and Selected Topics in Circuits and Systems, Jun 2012.
66. Kanit Therdsteerasukdi, Gyungsu Byun, Jason Cong, Frank Chang, and Glenn Reinman. Utilizing RF-I
and Intelligent Scheduling for Better Throughput/Watt in a Mobile GPU Memory System. ACM
Transactions on Architecture and Code Optimization (TACO), Jan 2012.
67. Mubbasir Kapadia, Shawn Singh, Glenn Reinman, and Petros Faloutsos. A Behavior Authoring
Framework for Multi-Actor Simulations. IEEE Computer Graphics and Applications: Special Issue on Digital
Content Authoring, December 2011
68. Shawn Singh, Mubbasir Kapadia, Glenn Reinman and Petros Faloutsos. Footstep Navigation for
Dynamic Crowds. Computer Animation and Virtual Worlds, April 2011.
69. Jason Cong, Vivek Sarkar, Glenn Reinman, and Alex Bui. Customizable Domain-Specific Computing.
IEEE Design & Test, March/April 2011.
70. Tom Yeh, Glenn Reinman, Sanjay Patel, and Petros Faloutsos. Fool me twice: Exploring and exploiting
error tolerance in physics-based animation. ACM Transactions on Graphics (TOG), December 2009.
71. Shawn Singh, Mubbasir Kapadia, Petros Faloutsos, and Glenn Reinman. SteerBench: A Benchmark Suite
for Evaluating Steering Behaviors. Journal of Computer Animation and Virtual Worlds, Feb 2009.
72. Yuchun Ma, Yongxiang Liu, Eren Kursun, Glenn Reinman, and Jason Cong. Investigating the Effects
of Fine-Grain Three-Dimensional Integration on Microarchitecture Design. ACM Journal on Emerging
Technologies in Computing Systems (JETC), Oct 2008.
73. Jason Cong, Guoling Han, Ashok Jagannathan, Glenn Reinman, and Krzysztof Rutkowski. Accelerating
Sequential Applications on CMPs Using Core Spilling. In IEEE Transactions on Parallel and Distributed
Systems (TPDS), August 2007.
74. Glenn Reinman and Gruia Pitigoi-Aron. Trace Cache Miss Tolerance for Deeply Pipelined Superscalar
Processors. In IEE Proceedings on Computers and Digital Techniques, September 2006.
75. Eren Kursun, Anahita Shayesteh, Suleyman Sair, Tim Sherwood, and Glenn Reinman. An Evaluation of
Deeply Decoupled Cores. In the Journal of Instruction Level Parallelism (JILP), February 2006.
76. Anahita Shayesteh, Glenn Reinman, Norm Jouppi, Suleyman Sair, and Tim Sherwood. Dynamically
Configurable Shared CMP Helper Engines for Improved Performance. In SIGARCH Computer
Architecture News, November 2005.
77. Gokhan Memik, Glenn Reinman, and Bill Mangione-Smith. Precise Instruction Scheduling. In the
Journal of Instruction Level Parallelism (JILP), January 2005.
78. Glenn Reinman. Using an Operand File to Save Energy and to Decouple Commit Resources. In the
IEE Proceedings on Computers and Digital Techniques, Vol 152, Issue 5, September 2005.
79. Glenn Reinman and Brad Calder. Using a Serial Cache for Energy Efficient Instruction Fetching. In the
Journal of Systems Architecture (JSA), 2004.
80. Brad Calder and Glenn Reinman. A Comparative Survey of Load Speculation Architectures. In the
Journal of Instruction Level Parallelism (JILP), May 2000.
81. Glenn Reinman, Brad Calder, and Todd Austin. Optimizations Enabled by a Decoupled Front-End
Architecture. IEEE Transactions on Computing (TOC), Vol 50, No 4, February 2000.
Patents:
82. M. Frank Chang, Jason Cong, Adam Kaplan, Mishali Naik, Glenn Reinman, Eran Socher, and Sai-Wang
Tam. On-Chip Radio Frequency (RF) Interconnects for Network-On-Chip Designs. US 8,270,316.
Filing date: Jan. 30, 2009. Publication date: Sep. 18, 2012.
Textbook Chapters:
83. Glenn Reinman. Chapter 2: Instruction Cache Prefetching. Speculative Execution in High Performance
Computer Architectures. Edited by David Kaeli and Pen Yew. CRC Press, 2005.
Technical Reports:
84. Glenn Reinman and Norm Jouppi. CACTI version 2.0: An Integrated Cache Timing and Power Model.
WRL Research Report, 2000/7.
Awards and Grants
NSF Expedition Grant (CO-PI) to establish the Center for Domain Specific Computing (CDSC), 8/2009-7/2014
Architecture Thrust Leader (one of four members of the Executive Committee for the Center)
Semiconductor Research Corp 2008-HJ-1796 (PI) - Network-On-Chip Design with RF-Interconnects for Future Chip
Multiprocessors 4/2008-5/2011
Best Paper Award, International Symposium on High-Performance Computer Architecture, Feb 2008.
Voted Professor of the Year by the Engineering Society of the University of California, 2006
UCLA Faculty Career Development Award 2004
Northrop Grumman Excellence in Teaching Award 2004
DARPA SA5430-79952 (CO-PI) - GSRC-MARCO, 9/2006-8/2007
Semiconductor Research Corp 2005-TJ-1317 (CO-PI) - Design and Evaluation of Power-Efficient High-Performance
Heterogeneous Multi-Core Processors w/Programmable Fabric, 6/2005-5/2008
UC MICRO Program (CO-PI) MEVA: Microarchitectural Evaluation with Physical Planning, 7/2003-12/2004
NSF ITR (CO-PI) Reconfigurable Fabric, 9/01/2002-8/31/2005
NSF CAREER Award (PI) The Evaluation and Design of a Scalable, High-Performance, and Energy-Efficient
Microprocessor Architecture, 9/01/2001-8/31/2006
References
Available upon request.

EXHIBIT 2
UNIVERSITY OF OREGON, COMPUTER AND INFORMATION SCIENCE TECHNICAL REPORT, CIS-TR-2004-1
Swarming: Scalable Content Delivery for the
Masses
Daniel Stutzbach
Computer and Information Science
University of Oregon
Eugene, Oregon 97403-1202
agthorr@cs.uoregon.edu
Daniel Zappala
Computer and Information Science
University of Oregon
Eugene, Oregon 97403-1202
zappala@cs.uoregon.edu
Reza Rejaie
Computer and Information Science
University of Oregon
Eugene, Oregon 97403-1202
reza@cs.uoregon.edu
Abstract: Due to the high cost of a Content Distribution
Network, most Internet users are not able to scalably deliver
content to large audiences. In this paper we study swarming,
a scalable and economic content delivery mechanism that
combines peer-to-peer networking with parallel download. First,
we define a swarming architecture that generalizes the basic
delivery mechanism in popular swarming protocols such as
Gnutella and BitTorrent. We then conduct a comprehensive
performance study of swarming delivery, using a variety of
workloads. Our results show that swarming scales with offered
load up to several orders of magnitude beyond what a basic
web server can manage. Most impressively, swarming enables a
web server to gracefully cope with a flash crowd, with minimal
effect on client performance. During the course of our study
we illustrate the benefits and limitations of a basic swarming
protocol and identify several key opportunities for performance
improvements.
I. INTRODUCTION
One of the most compelling and unique aspects of the
web as a communications medium is that any person has the
potential to provide content to a global audience. However,
the web has been only partly successful in realizing this ideal.
Although the web has been incredibly successful with regard
to access (for a small hosting fee, anyone can create a web
site), the overwhelming majority of Internet users can not
provide scalable delivery of their content to a large audience.
The main impediment to scalable content delivery is the
web's dependence on a client-server model, which is inherently
limited in its ability to scale to large numbers of clients.
As the load on a web server increases, it must either
begin refusing clients or else all clients will suffer from long
download times. This makes it difficult for a website with
limited bandwidth to serve large files or a large audience.
Of particular concern in recent years is a flash crowd event,
a phenomenon in which the client arrival rate at a web site
grows by several orders of magnitude in a short time. (This
phenomenon is also termed the Slashdot Effect because sites
are often overrun with load when a story on the Slashdot web
site links to an under-provisioned server.)
The masses (ordinary users and small or medium-sized
organizations) lack an effective means to deal with this
scalability problem. Buying more bandwidth helps a site
to serve a larger audience, but it takes a proportionately
larger amount of bandwidth to serve a larger audience, and
ultimately most users are limited in the amount they can pay.
A more effective mechanism for dealing with this scaling
limitation is a Content Distribution Network (CDN). A CDN
improves scalability by distributing a given provider's content
to a set of servers, splitting the load among them. The high
cost of a CDN, however, makes this mechanism infeasible for
all but the largest organizations. Another potential solution
is proxy caching [2], [17], but this is useful primarily from
the perspective of an individual client for whom the cache
is available. From the perspective of the web server, caching
must be deployed at a wide number of sites in order to be
effective at reducing load. One last alternative is to multicast
content from the web server to a group of clients [4], but this
requires loose synchronization among the clients and mech-
anisms to accommodate heterogeneous client bandwidths.
Moreover, multicast is not yet widely deployed.
Recently, peer-to-peer systems have emerged as an alter-
native paradigm for content delivery, addressing the scaling
limitation of the client-server architecture by distributing the
burden of content distribution among a large set of clients.
Pure peer-to-peer applications, however, introduce two key
problems: peer location and peer instability. With a web
server, the location of the content is always known, whereas
a peer-to-peer application must locate peers with the desired
content. In addition, a web server typically stays connected
to the network (unless of course it becomes overloaded),
whereas in a peer-to-peer system peers may abruptly leave
the network at any time.
In this paper, we study swarming, a peer-to-peer content
delivery mechanism that utilizes parallel download among a
mesh of cooperating peers. We integrate swarming with a
standard web server to form a hybrid solution that combines
the simplicity and stability of client-server delivery with the
scaling benefits of a peer-to-peer network. For files that
are small or not popular, the web server delivers content
directly to clients (Figure 1a). However, as the popularity
of a file increases, the server initiates swarming by giving
clients only a block of the desired content, along with a
list of peers that can provide other blocks of the same file.
Swarming clients perform two basic functions. First, they
gossip with their peers in order to progressively find other
peers with the content they need. Second, as clients discover
suitable peers, they begin to download the content from
them in parallel (Figure 1b). Overall, the more loaded the
web server becomes, the less content it serves directly to
27 January 2004 1 Exhibit 2 Page21
Fig. 1. Swarming Delivery: (a) client-server delivery, where clients
download the entire file from the server; (b) swarming delivery, where
clients download blocks of the file in parallel from the server and peers.
clients and the more it redirects them to peers. During heavy
load, the system generates swarms of peers that cooperatively
download content in parallel from each other and from the
server.
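The hybrid delivery policy described above (serve small or unpopular files directly; once a file becomes popular, hand each client one block plus a list of peers) can be sketched in Python. This is a minimal illustration, not the paper's implementation: the class and function names, the block size, the popularity threshold, and the peer-sample size are all assumptions made here for clarity.

```python
import random

BLOCK_SIZE = 64 * 1024       # assumed block granularity
POPULARITY_THRESHOLD = 10    # assumed request count before swarming kicks in

class SwarmingServer:
    """Minimal sketch of the hybrid server: plain client-server delivery
    for cold files, block-plus-peer-list redirection for popular files."""

    def __init__(self):
        self.request_counts = {}  # filename -> requests seen so far
        self.known_peers = {}     # filename -> addresses of past clients

    def handle_request(self, filename, data, client_addr):
        count = self.request_counts.get(filename, 0) + 1
        self.request_counts[filename] = count
        peers = self.known_peers.setdefault(filename, [])

        if count < POPULARITY_THRESHOLD:
            # Cold file: ordinary client-server delivery (Fig. 1a).
            return {"mode": "whole-file", "data": data}

        # Popular file: serve one block plus a sample of peers that may
        # hold other blocks of the same file (Fig. 1b); the client then
        # gossips with those peers to locate the remaining blocks.
        blocks = [data[i:i + BLOCK_SIZE]
                  for i in range(0, len(data), BLOCK_SIZE)]
        block_id = random.randrange(len(blocks))
        peer_sample = random.sample(peers, min(len(peers), 5))
        peers.append(client_addr)  # this client now becomes a peer itself
        return {"mode": "swarming", "block_id": block_id,
                "block": blocks[block_id], "peers": peer_sample}
```

Note how the server's load shedding is gradual: each redirected client enlarges the peer population, so capacity grows with demand, which is the property the paper emphasizes.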
Swarming is a viable content delivery mechanism for the
masses because it is both scalable and economical. Swarming
actually uses scale to its advantage: system capacity increases
with the number of peers participating in the system.
Peers spread the load of content delivery over the entire
network and share the burden of peer identification with the
web server; this prevents server overload and avoids network
congestion. Clients utilize parallel download to protect them-
selves against peer instability, which would otherwise hinder
a peer-to-peer application. Swarming is clearly an economical
solution for the web server because it does not have to pay
for the bandwidth used for peer-to-peer communication. This
cost is marginal for the peers because each peer serves only a
small number of clients; the server's cost is spread among a
large user population. In addition, clients have an incentive to
participate in swarming because they will receive the help of
other peers in return (for the same or different content). This
improves the average performance of all users that otherwise
would suffer from the congested server or network.
While we advocate swarming as a standalone solution, we
note that swarming is complementary to both CDNs and
proxies. Swarming can enable a CDN server to handle a
higher load, and proxies can use swarming to reduce the
load on a standard web server. Furthermore, organizations
that don't use a CDN can utilize swarming as an alternative
to provisioning their network for peak load.
Swarming delivery has been popularized by several propri-
etary and open-source software projects [1], [12], [8], [21],
[7], of which BitTorrent is perhaps the most well known.
While these systems serve as a proof-of-concept for swarm-
ing delivery, few provide a technical description of their
swarming protocol and, more importantly, no performance
evaluation studies have yet been published. Although swarm-
ing seems intuitive, the design of a swarming protocol is not
trivial because the design space is large and there are many
dynamics involved. Some challenges include (a) finding peers
with the desired content, (b) choosing peers that are likely
to provide good performance, and (c) managing parallel
download while coping with partially available content at
each peer and the dynamics of peer participation.
In this paper, we make the following contributions. First,
we present a comprehensive swarming architecture and ex-
plore the design space of its key components. Second, we
conduct the first comprehensive performance evaluation of
swarming delivery, using a simulation that examines a variety
of workloads and swarming parameters. Our results show
that swarming can scalably deliver content under loads that
are several orders of magnitude beyond what a client-server
architecture can handle. Most impressively, swarming enables
a web server to gracefully cope with a flash crowd, with
minimal effect on client performance. In addition, our results
indicate that swarming spreads the load of content delivery
evenly among the peers. We conclude by providing insight
concerning the dynamic performance of the system and the
impact of several key swarming parameters.
II. RELATED SYSTEMS
BitTorrent is notable as a swarming system because it is
currently used to transfer large les, such as new software
releases, to hundreds of peers. With BitTorrent, a centralized
host called a tracker is responsible for storing the identities
of all peers and their performance. When clients contact the
tracker, they report their status and in return receive a random
list of peers. Clients try to download rare blocks first (based
on the blocks their peers have) and download one block at
a time by requesting sub-blocks of their current block from
selected peers. Once a client obtains a complete block, it can
share that block with its peers.
One of the unique features of BitTorrent is its notion of
fairness. Each connection between peers represents a two-
way data transfer, with each peer expected to upload as much
as it downloads. At any given time, each client allows a fixed
number of connections to be actively uploading, with the
goal of obtaining good download performance from those
same peers. If a given peer does not provide good download
performance (i.e. the other side is not sharing equally), then
the client will stop uploading to that peer and try a different
peer.
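The rarest-first block selection mentioned above can be illustrated with a small sketch. The function name and data structures here are assumptions made for illustration; BitTorrent's actual protocol additionally negotiates sub-blocks over per-peer connections and interleaves this choice with its fairness (choking) mechanism.

```python
from collections import Counter

def pick_rarest_block(have, peer_bitmaps):
    """Pick the block this client still needs that is advertised by the
    fewest peers (rarest-first), breaking ties by lowest block index.

    have         -- set of block ids already downloaded
    peer_bitmaps -- dict mapping peer id -> set of block ids that peer holds
    """
    # Count how many peers hold each block.
    availability = Counter()
    for blocks in peer_bitmaps.values():
        availability.update(blocks)

    needed = [b for b in availability if b not in have]
    if not needed:
        return None  # no peer currently offers a block we still need
    return min(needed, key=lambda b: (availability[b], b))
```

For example, if three peers hold blocks {0, 1, 2}, {1, 2}, and {2} respectively, block 0 is held by only one peer and is chosen first; prioritizing rare blocks keeps copies of every block circulating in the swarm.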
A number of other peer-to-peer systems attempt to address
the same problem of serving web content to a large audience.
The systems that are most related to swarming are CoopNet
[13] and Pseudoserving [10]. Both of these systems use
collaborative delivery: an overloaded web server gives
clients a list of possible peers, and the client chooses a single
peer from which it downloads the entire content. CoopNet
in particular provides a proof-of-concept for collaborative
delivery that serves as a foundation for swarming. First,
the authors make a convincing argument that bottleneck
bandwidth at the server, rather than the server CPU or
disk speed, is the limiting factor in client-server content
distribution. Second, this work demonstrates that clients are
able to find content using the list of peers, that load is
distributed across the peers, and that clients can find peers
that are close (in the same BGP prefix cluster). Finally,
27 January 2004 2 Exhibit 2 Page22
Case 2:11-cv-09437-DSF-JC Document 50-3 Filed 01/25/13 Page 25 of 163 Page ID
#:953
UNIVERSITY OF OREGON, COMPUTER AND INFORMATION SCIENCE TECHNICAL REPORT, CIS-TR-2004-1
CoopNet has also been extended to deliver streaming content
[14].
Swarming differs from these systems in several important
ways. First, it uses parallel download, which balances the
load among peers and provides robustness against peers
that leave the system. Second, swarming allows clients to
act as peers even if they only have partial content, which
further increases system capacity. This also allows swarming
to respond more quickly to a flash crowd, especially for
large files. Finally, swarming uses gossiping to progressively
discover available peers, which better distributes control
overhead.
The Backslash system [18] helps a web server cope with
high load by forming a collaborative network of web mirrors.
An overloaded web server then redirects clients to a cached
copy of the content located at one of the collaborating sites.
While this type of system is economical and can improve
the ability of a web server to respond to high loads, it scales
with the number of participating servers, whereas swarming
scales with the number of clients. Moreover, the network
of cooperating sites must be established ahead of time and
benefits only the participating servers.
PROOFS [19] uses a peer-to-peer network of clients to
cache popular content. When a client is unable to download
content from a web server, it queries the peer-to-peer network
to see if any other user has a copy of the desired content.
Like swarming, this type of system scales with the number of
participating clients. However, the peer-to-peer network does
not prevent the web server or the network from becoming
overloaded; rather it serves as a backup after a web server
(or the network) becomes congested. This approach is thus
complementary to any other content-delivery system, including
swarming.
All of these systems are affordable alternatives for the sites
that cannot pay for a Content Distribution Network (CDN).
However, even for sites that can afford a CDN, we argue
that swarming provides some advantages. From a content
provider's point of view, swarming provides automatic and
dynamic content management, whereas a CDN needs addi-
tional mechanisms to manage replication and consistency.
Swarming also has the potential for better load balancing, due
to parallel download. When swarming uses proximity-based
peer selection, it has the potential to further reduce network
load by serving content from peers that are likely to be much
closer than a CDN server. Finally, where a CDN is available,
swarming is complementary in that users can swarm to the
set of CDN mirrors, further improving the scalability of the
overall system.
Our design of swarming draws on two well-known
techniques: parallel download and gossiping. Downloading from
multiple web mirrors in parallel [3], [16] has been shown to
reduce client download time while also spreading load among
the mirrors. We borrow this technique to allow clients to
download content from multiple peers in parallel. While the
concept is similar, for swarming the peers may have only
partial content, may disconnect abruptly, and are not as fully
provisioned as a dedicated server. These complications add
significantly to the dynamics of swarming compared to
downloading from mirrors. The second well-known technique,
gossiping, has primarily been used to maintain consistency,
for example in distributed databases [6], [9] and Peer-to-Peer
storage networks [5]. For swarming we use gossiping as a
scalable method for a client to explore existing peers in a
demand-driven fashion.
III. SWARMING ARCHITECTURE
In order to study swarming performance, we have designed
a swarming architecture consisting of four key components:
swarming initiation, peer identification, peer selection, and
parallel download. While we have not made an explicit
attempt to base this architecture on any existing swarming
protocol, we believe it generalizes the basic content delivery
mechanism used by protocols such as Gnutella and BitTor-
rent.
We note that there are many design choices for each
swarming component, as well as many parameters for the
system. Our goal is to use a simple yet effective design for
each component. This enables us to study the performance of
swarming delivery while minimizing complex dynamics and
interactions among the components of the system. This makes
it easier to correlate an observed behavior to a particular
mechanism or parameter.
Before describing each component in detail, we provide
an overview of the swarming architecture. Our integration
of swarming with a web server can be viewed as a hybrid
between client-server and peer-to-peer content distribution.
The system uses client-server communication to deliver small
or unpopular files, to bootstrap peer location, and to serve as a
fallback in case a client's known peers all leave the network.
The system uses peer-to-peer networking to scalably deliver
large or popular files and to discover additional peers through
gossiping.
In our architecture we describe swarming as a protocol that
is implemented on top of HTTP, providing backward com-
patibility and allowing for incremental deployment. Because
peers can act as both clients and servers, we define some
basic terminology. We use the term root server to refer to
the server that is the content owner. This could be a regular
HTTP server or a CDN mirror. We refer to a client as a
client peer when it acts as a client and a server peer when
it acts as a server. To participate as a server peer, a node
runs a lightweight HTTP server. It is important to note that
the client may be either a web browser or proxy server. It
should be simple to integrate swarming into existing proxies
because they already include server functionality.
A. Overview
Swarming clients send regular HTTP requests to web
servers, along with two additional headers. The SWARM
header indicates that a client is willing to use swarming, and
the SERVER PEER header indicates that a client is willing
to act as a server peer for the requested file (Figure 2(a)).
For clients that are willing to swarm, the root server may
respond either with the entire file when load is light or
[Figure 2. Protocol Example: (a) A new client contacts the root
server. (b) The root server responds with partial content and a
gossip message. (c) The client requests blocks from several server
peers. (d) The client downloads blocks in parallel. (e) A swarm of
peers eventually forms a mesh.]
may initiate swarming if needed. Servers that are not capable
of swarming will simply ignore the unrecognized headers.
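A minimal sketch of such a request, assuming the header names shown in Figure 2 (the host and path below are placeholders):

```python
def build_swarming_request(host, path):
    """Compose a plain HTTP/1.1 GET carrying the two swarming headers.
    A legacy server simply ignores the unrecognized headers, which is
    what gives the protocol its backward compatibility."""
    return (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        "Swarm: Yes\r\n"        # client is willing to use swarming
        "ServerPeer: Yes\r\n"   # client will serve blocks of this file
        "\r\n"
    )

req = build_swarming_request("example.org", "/content/file.dat")
```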
To initiate swarming, a root server gives some clients a
single block of the file and a Gossip Message (Figure 2(b)).
A block is a portion of the file, typically on the order of tens
or hundreds of kilobytes. The server determines the block
size on a per-file basis; our performance evaluation looks at
a range of file and block sizes. A gossip message contains a
list of peers that are willing to serve portions of the same file.
For each server peer, the message lists the peer's IP address,
a list of blocks the peer is known to have, and a time stamp
indicating the freshness of this information.
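A gossip record might be represented as follows; the paper specifies only the three fields, so the type and field names here are our own:

```python
from dataclasses import dataclass, field

@dataclass
class GossipRecord:
    peer_addr: str                            # the server peer's IP address
    blocks: set = field(default_factory=set)  # block indices the peer is known to have
    timestamp: float = 0.0                    # freshness of this information

# A gossip message is simply a list of such records.
gossip_message = [
    GossipRecord("10.0.0.7", {0, 3}, timestamp=41.5),
    GossipRecord("10.0.0.9", {2, 3}, timestamp=40.2),
]
```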
When a client receives a swarming response (partial con-
tent plus a gossip message), it invokes a peer selection
strategy to determine the subset of server peers from which it
will download content. A client's primary concern is to locate
blocks of the file that it has not yet received. The client then
begins downloading blocks from both the root server and the
server peers in parallel. We note that in this study we are
not concerned with fairness. Thus, unlike BitTorrent, data
transfer over a connection is one-way, and we do not attempt
to ensure that a client downloads only as much as it uploads.
Each transaction between a client and the root server or a
server peer includes a single block and a two-way exchange
of gossip messages (Figure 2(c)-(d)). Requesting a single
block at a time will naturally lead to faster peers delivering
more blocks, resulting in proportional load balancing.
Exchanging gossip messages with server peers enables
progressive peer identification: clients gradually learn about other
peers in the peer-to-peer network. This is needed because the
initial pool of peers identified by the root server may refuse
to serve the client, may disconnect from the network, may
have low bandwidth connectivity, or may simply not have
all of the content the client needs. The client uses gossiping
to distribute the overhead of peer identification, rather than
relying on the root server for this functionality. In addition,
since a client is conducting a transaction with a server peer
anyway (to download a block), the additional overhead of
including a gossip message in the transaction is marginal.
When a server peer receives a request for a block, it
determines whether it will accept the connection based on
its configuration or capabilities. The server peer then delivers
the requested block and exchanges gossip messages with the
client. Once the server peer has itself downloaded the entire
file, it may decide to leave the system immediately or it may
choose to linger and help additional client peers. In our study
we use a lingering time of zero, in order to test swarming
under pessimistic conditions; in practice the user may specify
a lingering time or swarming may continue as long as the
browser is left open. When a server peer disconnects from the
system, it does not wait for any ongoing block downloads to
finish. To reduce the amount of bookkeeping that is required,
clients discard partially downloaded blocks.
As large numbers of clients attempt to download the
same content, they form a dynamic mesh or swarm of
peers. We can view this mesh as a collaborative delivery
system, where server peers with larger portions of the le
or higher bandwidth will tend to serve greater numbers of
clients. We illustrate this in Figure 2(e) by depicting the
download process at each node as a pie with pieces that
are being lled. Downstream peers will generally get pieces
from upstream peers who have received the content earlier.
Similar to application-layer multicast, collaborative delivery
spreads the load of transferring content among the clients
and eliminates bottlenecks at the source or other points in the
network. We note that while cycles may form in the delivery
mesh, individual blocks are propagated along a tree that starts
at the root server.
B. Swarming Initiation
A root server needs to decide when to initiate (or cease)
swarming for a particular file based on its current
performance and the popularity of the file. Moreover, while
swarming, the server needs to decide what portion of clients
to give just a single block and how many to serve with the
entire file. The server must balance its desire to reduce load
via redirection with the need to keep enough content available
for swarming delivery to be effective.
For our performance evaluation we have chosen the con-
servative approach of swarming at all times. This enables
the root server to be proactive with regard to load, so that it
doesn't react too late to a sudden increase in client arrivals.
During a flash crowd, it is conceivable that load on the
server (and its access link) can increase several orders of
magnitude, swamping a server that is slow to respond. The
cost of swarming at all times is that when load is low clients
see increased delay compared to downloading the entire
content from the root server. This occurs because clients may
download from peers that are slower than the root server.
In keeping with this conservative approach, we also have
the root server send a swarming response (partial content
plus a gossip message) to all clients. This represents a
balance between giving clients too little or too much. During
high load, it may be desirable to only give clients a gossip
message, because this will reduce the load on the server to
just redirection overhead. On the other hand, this runs the risk
of not giving the clients enough to share, at which point they
must return to the root server for content anyway. Another
alternative is to give some clients the entire file so that they
can act as server peers for all blocks. However, this benefit is
lost if the lingering time is very short, or in other words if the
peers act selfishly. By giving all clients only partial content,
we force them to cooperate with each other. The content will
naturally diffuse as clients exchange blocks with each other
and with the root server.
C. Peer Identification
A client needs to locate other peers who have its desired
content so that it can use them as server peers. Identifying
potential peers is difficult because the set of peers interested
in the same content is not known ahead of time and can be
highly dynamic (due to client disconnections). This means
we cannot use a distributed hash table [20], [15], [11] to
locate content, but must instead use a more dynamic peer
discovery mechanism. Fortunately, a client does not need to
know about all peers with the content; instead, a client needs
only a few peers with whom it can perform swarming.
For our performance evaluation we use a combination of
server-based identification and gossiping. Having the root
server supply an initial set of peers is a simple method for
bootstrapping the peer identification process. However, we do
not want to rely on this mechanism for all peer identification
(as is done with BitTorrent, CoopNet, and Pseudoserving)
because in a very large scale system even this redirection load
may overwhelm the server. Instead, clients and server peers
gossip during each transaction, allowing clients to quickly
discover a small fraction of the peers and their available
content. This mechanism provides both scalability (the
number of peers discovered is small relative to the total
number) and robustness (a failure or disconnection of one
peer does not affect the ability of a client to discover other
peers). We also note that finding suitable peers becomes easier
(and hence consumes less overhead) as the number of active
peers increases. Of course, if a client is unable to find suitable
peers, it may always return to the server to ask for additional
peers. Finally, it is easy to limit the overhead of gossiping by
limiting the frequency with which nodes exchange messages
and by limiting the size of the gossip message.
Our emphasis in designing the gossip component is to
discover recent peers, since peers may leave the system at any
time. We consider the dynamics of peer participation to be
our primary challenge, even more important than optimizing
bandwidth. Hence, we specify that each client caches a record
for the Nc peers with the most recent time stamp, then
includes in its gossip messages the most recent Ng peers,
where Ng ≤ Nc. In keeping with our philosophy, freshness
takes priority over other concerns (such as caching peers with
large numbers of blocks), since a peer with the entire file is
not useful if it leaves the system. Moreover, the client has
an incentive to exchange fresh peers to ensure that gossiping
diffuses information about peers effectively.
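The freshness-first cache policy above can be sketched as follows. Nc and Ng match Table I, while the {peer: (timestamp, blocks)} record shape and the keep-the-freshest merge rule are our reading of the design:

```python
N_C = 64   # records a client caches (Table I)
N_G = 10   # records included in each outgoing gossip message (Table I)

def merge_gossip(cache, incoming):
    """Fold incoming gossip records into the cache, keeping only the
    freshest record per peer and the N_C freshest records overall."""
    for peer, (ts, blocks) in incoming.items():
        if peer not in cache or ts > cache[peer][0]:
            cache[peer] = (ts, blocks)
    freshest = sorted(cache.items(), key=lambda kv: kv[1][0], reverse=True)
    return dict(freshest[:N_C])

def outgoing_gossip(cache):
    """Pick the N_G most recently heard-from peers to gossip about."""
    freshest = sorted(cache.items(), key=lambda kv: kv[1][0], reverse=True)
    return dict(freshest[:N_G])
```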
In our design of the gossip component, we are also careful
to share information about peers that are no longer available.
If a client does not do this, then the bad information
effectively pollutes the peer identification mechanism, causing
many peers to attempt to contact the same disconnected
peer. In our study a client indicates a peer is disconnected
by modifying the peer's gossip record to indicate the peer
does not have any blocks of the file. The client then shares
this record in its gossip transactions, so that other clients do
not attempt to contact this peer. Eventually the record of a
disconnected peer is discarded because its time stamp will
never be renewed.
Finally, we note that our design uses passive gossiping,
in which clients exchange gossip messages with server
peers during each transaction. An alternative is to use active
gossiping, which requires the client to choose a gossip
frequency and then continually contact a new peer during
each round. While information diffuses more slowly with
passive gossiping, the overhead is also much lower, since the
amount of information in the gossip message is usually small
compared to the data that is exchanged.
D. Peer Selection
Once a client has located potential peers, it needs to decide
which peers and how many peers it should use for parallel
download. These are difficult choices because the client does
not know ahead of time the average bandwidth available from
each server peer. In particular, the client does not know if the
bottleneck of the connection will be local or remote.
We do not focus in this study on an optimal peer se-
lection strategy, since there are so many other factors that
affect swarming performance. Rather, we formulate a simple
strategy that is based on content availability. First, each
client limits itself to Nd concurrent downloads. Second, when
choosing a new peer, clients choose the peer that has the
most blocks that it still needs. For example, suppose a client
is downloading a file composed of 4 blocks and it has
previously downloaded just the 3rd block. It knows about
Peer A, who has the 1st and 4th blocks, and about Peer B,
who has the 3rd and 4th blocks. In this case, the client will
choose Peer A, since it has two blocks that the client needs,
while Peer B has only one.
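The selection rule reduces to a one-line maximization. The sketch below reproduces the paper's four-block example; the function name and data shapes are our own:

```python
def select_peer(have, peers):
    """have: set of block indices already downloaded.
    peers: {peer_name: set of block indices that peer holds}.
    Return the peer offering the most still-needed blocks."""
    return max(peers, key=lambda p: len(peers[p] - have))

# Client holds block 3; Peer A holds blocks {1, 4}, Peer B holds {3, 4}.
# Peer A offers two needed blocks, Peer B only one, so A is chosen.
chosen = select_peer({3}, {"A": {1, 4}, "B": {3, 4}})  # -> "A"
```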
We use this simple strategy because the most important
factor in selecting a peer is the available content at that peer.
This is particularly important for swarming because each peer
has only a part of the file; a client must select among peers
that have blocks the client is currently lacking. It only makes
sense to consider other criteria such as distance or available
bandwidth if multiple peers can provide the same content.
When choosing the limit of Nd parallel downloads, clients
must balance several factors. If the client uses a small number
of peers, then it may not fully utilize its incoming link
capacity. If the client uses a large number of peers, then
the extra peers may not necessarily improve performance,
due to a bottleneck near the client. In this latter case,
block download times will increase; due to the instability of
peers, this in turn increases the probability of downloading
incomplete blocks. Because we discard incomplete blocks,
this results in useless work and reduces performance. We
study a range of settings for Nd to determine the impact of
this parameter on performance.
E. Parallel Download
The heart of the swarming architecture is the parallel
download of different blocks from server peers. While
swarming, the client must deal with long-term dynamics and
must determine when to add or drop a peer. In addition,
because the client is using parallel download, it must decide
which blocks to download from which server peer, while
coping with the fact that each peer may potentially have a
different set of blocks.
In our study, we use a relatively simple strategy for
adaptive delivery in order to simplify the analysis of our
results. Each client chooses Nd peers for parallel download,
using the peer selection component, then continues to use this
set unless a server peer disconnects or runs out of blocks that
the client needs. In either of these cases, the client drops
the peer and then immediately invokes the peer selection
component to choose a replacement. If none of the peers
in the client's gossip cache have blocks that the client needs,
then the client contacts the root server for some additional
peers.
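The drop-and-replace behavior can be sketched as below. The function name, the {peer: blocks} shapes, and the reduction of the root-server fallback to a second candidate pool are all our own simplifications:

```python
N_D = 4  # concurrent downloads (Table I)

def refill_peers(active, have, cache, root_peers):
    """active: current server peers; have: blocks already held;
    cache: {peer: blocks} from gossip; root_peers: extra peers the
    root server would supply. Drops useless peers and tops the
    active set back up to N_D."""
    # Drop peers that disconnected (absent) or have no needed blocks.
    active = {p for p in active if cache.get(p) and cache[p] - have}
    while len(active) < N_D:
        candidates = {p: b for p, b in cache.items()
                      if p not in active and b - have}
        if not candidates:
            # Fall back to the root server for additional peers.
            candidates = {p: b for p, b in root_peers.items()
                          if p not in active and b - have}
            if not candidates:
                break
        # Reuse the peer-selection rule: most still-needed blocks wins.
        active.add(max(candidates, key=lambda p: len(candidates[p] - have)))
    return active
```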
In keeping with our goal of simplicity, we do not monitor
the performance of a server peer to determine whether to
continue using it. Because this could lead to instability, we
instead rely on the benefit of parallel download: faster peers
will naturally serve more blocks to a given client. We also do
not enforce any limit on the number of clients that can use a
particular server peer. Our results indicate that our simple
delivery mechanism spreads load evenly among peers, so
overload of a given peer is not yet a concern. This is likely to
be more important if clients begin using more sophisticated
peer selection mechanisms.
While downloading content, the client would like to keep
its current peers busy to ensure that its throughput is high.
In our study we try to ensure that this is the case by having
the root server choose a relatively large block size. We note
that this choice has several drawbacks. Using too large of a
block size will reduce performance because it increases the
chance of a client getting a partially-downloaded block. We
study the effect of block size in our performance evaluation.
Finally, we note that it is important for the root server to
ensure that a variety of blocks are diffused to clients. If all
clients have the same blocks, then they will all need to return
to the root server for additional data. To mitigate this concern,
in our performance study a client selects a block randomly
from those available when connecting to both the root server
and server peers. This ensures some amount of diversity in
the content that is available, and increases the chance that a
client can find a peer with content that it needs.
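The random choice is a two-line rule; `choose_block` is a hypothetical helper name:

```python
import random

def choose_block(have, offered, rng=random):
    """Return a uniformly random needed block from the set `offered`
    by the other side, or None if nothing useful remains."""
    needed = list(offered - have)
    return rng.choice(needed) if needed else None
```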
IV. PERFORMANCE EVALUATION
We have conducted a simulation-based evaluation of
swarming to study its performance under a variety of work-
loads. Our simulation implements a swarming protocol that
follows the architecture described in the previous section.
In particular, swarming initiation is conservative: swarming
is enabled at all times, and the root server sends at least
one block to each client. The first choice ensures that
swarming is able to react to a flash crowd when it appears; the
second choice ensures that the clients have enough content
to share with each other. Peer identification, both by the root
server and through gossiping, is based on freshness and peer
selection is based only on available content. We use a simple
adaptive delivery component in order to avoid introducing
further dynamics in the system; a client stops using a peer
only if it disconnects or runs out of blocks that it needs.
We first describe our simulation methodology. We then
begin our study by demonstrating the scalability of swarming
as compared to a standard web server under a steady-state
Parameter                     Value
Nd (Concurrent downloads)     4
Nc (Size of gossip cache)     64
Ng (Peers in gossip message)  10
Block Size                    32 KB
TABLE I. DEFAULT SWARMING PARAMETERS
[Figure 3. Simulation Topology: clients/peers connect to the root
server through a single router modeling the Internet; the root
server has a 1 Mbps access link, and links have 1 ms propagation
delay.]
load. Following this, we illustrate how swarming enables a
web server to handle a flash crowd without a significant
performance hit. Next, we examine the performance of
swarming in detail under high load, then study the dynamics
of peer selection. We conclude by investigating the impact
of a variety of parameters: file size, block size, number of
concurrent downloads, and client distribution.
Unless otherwise mentioned, we use the default swarming
parameters given in Table I.
A. Methodology
Evaluating the performance of swarming can be complex
because of the many dynamics involved, such as peer partic-
ipation, partially available content, and changes in available
bandwidth. Moreover, the components of a swarming proto-
col are inter-related; for example, the information carried in
gossip messages affects the peer selection component, which
in turn affects the performance of adaptive delivery. Where
possible we try to study the effect of a parameter in isolation;
for example, we investigate the effect of client bandwidth
without dealing with network congestion in the backbone.
For this study we use a simulator based upon some of the
original ns 1.4 code. We built an HTTP server and client on
top of TCP, with swarming integrated into both the server
and client. We have tuned this simulator for scalability, since
we need to evaluate swarming under extremely high loads.
1) Topology: Similar to congestion control studies, we use
a simplied topology in which we model the Internet as a
single router, as shown in Figure 3. This abstraction enables
us to focus on the bottlenecks at the root server and client
peers. The server's access link is likely to be a bottleneck
under high loads [13] (the very loads for which we are
designing swarming), and peer links are likely to be the
bottleneck when a peer with limited bandwidth acts as a
server peer.
Most of our simulations use the basic scenario shown in
Table II, in which the root server has a 1 Mbps access link
Parameter                   Value
File size                   1 Megabyte
Server bandwidth            1 Mbps
Client bandwidth (down/up)  1536 Kbps / 128 Kbps
TABLE II. BASIC SWARMING SCENARIO
and serves a 1 MB file. In most simulations we model the
clients as broadband users, with a download bandwidth of
1536 Kbps and an upload bandwidth of 128 Kbps. Using a
higher download bandwidth is interesting because this makes
parallel download attractive for a client. In addition, using
a homogeneous set of clients allows us to focus on other
dynamics in the system. In the latter part of this section we
explore the effects of adding some low-bandwidth and some
high-bandwidth clients into the system. In order to focus on
transmission delay, we set the propagation delay of all links
to 1 ms.
2) Workload: We control workloads for our simulations
by varying the arrival rate of clients requesting the same file
from the root server. For a given arrival rate, we randomly
generate client inter-arrival times using an exponential
distribution. We also simulate a flash crowd by abruptly increasing
the arrival rate for a given period of time.
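The workload generator can be sketched as below; the example rates echo the flash-crowd experiment later in this section, while the function name and window parameters are our own:

```python
import random

def arrival_times(base_rate, flash_rate, flash_start, flash_end, horizon, seed=1):
    """Rates in clients/minute; returns client arrival times (in
    minutes) up to horizon. Inter-arrival gaps are exponential, so
    arrivals form a Poisson process at the current rate; the rate
    jumps to flash_rate inside the flash-crowd window."""
    rng = random.Random(seed)
    t, arrivals = 0.0, []
    while t < horizon:
        rate = flash_rate if flash_start <= t < flash_end else base_rate
        t += rng.expovariate(rate)
        if t < horizon:
            arrivals.append(t)
    return arrivals

# 6 clients/min baseline, a one-hour burst of 120 clients/min.
times = arrival_times(6, 120, 60, 120, 180)
```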
When a client arrival occurs, we create a new client and it
immediately begins its download. During the download, the
client acts as a server peer, then it leaves the system once its
download is complete. While in the real world clients may
be somewhat more polite, we opt for a conservative approach
and hence underestimate the benets of swarming.
Another key factor in determining workload is the file
size. We study various file sizes as well as various block
sizes (given a fixed file size) to determine the effect of these
parameters on system performance.
3) Metrics: Our primary performance metric is client
download time. We also measure the packet loss rate, the
number of clients served by each peer, the number of blocks
served by each peer, and a variety of other swarming-related
metrics.
Unless otherwise indicated, we begin each simulation with
a warm-up period of 500 download completions. During this
time we do not collect measurements; this allows the system
to reach steady state behavior. We then collect data for 5500
download completions.
For each experiment we conduct multiple runs of our
simulations, average the results, and compute the 95% confidence
interval. We do not include confidence intervals here because
in all cases they are very small.
B. Scalability
We begin our study by showing that swarming has excel-
lent scalability. In Figure 4, we plot the mean time a client
takes to fully download a file versus the client arrival rate
on a log-log scale. Swarming can clearly handle a much
larger load than a basic web server. As the load increases,
swarming exhibits a linear increase in delay, whereas client-
server transfer experiences super-exponential growth.

[Figure 4. Impact of arrival rate on performance: mean download
duration versus mean arrival rate (clients/minute) on a log-log
scale, for client-server and swarming delivery.]

Client-
server has a vertical asymptote at about 7 clients per minute,
beyond which it utterly fails to handle the load. Past this
point, the arrival rate exceeds the departure rate, and the client
download time continues to increase indefinitely. Naturally,
the point at which the client-server protocol is unable to
respond will depend on server bandwidth, le size, and load.
We were unable to find any bound for swarming, due to
memory limitations (the largest arrival rate we are able to
simulate is 192 clients per minute). Inevitably, an asymptotic
bound for swarming must exist; at some point the load will
be large enough to prevent the root server from providing
referrals. A back-of-the-envelope calculation suggests this
will not occur for at least an order of magnitude further
increase in arrival rate.² This limitation exists for any scheme
that relies on contacting a known, central point to initiate a
download. At extremely high loads, swarming can incorpo-
rate a decentralized method for locating peers, such as the
Gnutella search mechanism or PROOFS [19].
From this result we can see that swarming dramatically
increases the steady-state load that a web server can handle.
Serving 192 clients per minute translates to serving the
one-megabyte file to more than a quarter million people per
day. This is an impressive feat for a 1 Mbps access link.
To serve an equivalent load using a client-server protocol
would require, at a bare minimum, 28 Mbps. This would cost
thousands, perhaps tens of thousands, of dollars per month!³
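These figures are easy to check; the arithmetic below is our own, reproducing the paper's approximate numbers:

```python
ARRIVAL_RATE = 192  # clients per minute, the largest simulated load

# "More than a quarter million people per day":
clients_per_day = ARRIVAL_RATE * 60 * 24          # 276,480

# Equivalent sustained client-server throughput for a 1 MB file:
mbps = ARRIVAL_RATE * 2**20 * 8 / 60 / 1e6        # ~26.8 Mbps before overhead

# Footnote 2's referral capacity: one 1500-byte packet per referral
# over a 1 Mbps (taken as 2^20 bit/s) link:
referrals_per_min = int(2**20 / (1500 * 8) * 60)  # 5242, ~27x the arrival rate
```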
Our conservative approach to swarming does impose a
slight performance penalty under light load. When there are
not many peers to share with, the client ends up getting most
blocks from the root server, but with the added overhead of
² We assume a single 1500-byte packet is used to transmit the referral
information. The 1 Mbps server can transmit 2²⁰/(1500 × 8) of these per
second, or 5242 per minute.
³ http://www.bandwidthsavings.com/servicesdetail.cfm
[Figure 5. Client-server reaction to a flash crowd: download
duration versus simulated arrival time (hours), with the period of
increased arrivals marked.]
gossip messages. This seems a small price to pay for such a
significant increase in capacity during high loads. Moreover,
it is likely that we can eliminate this problem by designing
a dynamic server initiation component that uses swarming
only when needed.
C. Flash Crowd
While good steady-state behavior is important, web servers
must also be able to cope with extreme bursts of activity
called flash crowds. We simulate the effect of a flash crowd
by abruptly increasing the arrival rate for a fixed period of
time. We begin by using a one-hour steady-state load of 6
clients per minute. For client-server transfer we introduce an
impulse of 12 clients per minute, lasting for one hour. After
the flash crowd passes, the arrival rate returns to its original
level, and we simulate this load until the web server is able
to recover. For swarming, we provide a more challenging
flash crowd by increasing the flash crowd rate to 120 clients
per minute!⁴ Aside from the load function, we use the same
swarming scenario given in Table II. The results are presented
in Figure 5 and Figure 6, where each data point represents
the mean download time for all downloads finishing in the
previous 1000 seconds.
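The load function described above amounts to a simple piecewise arrival-rate schedule. A minimal sketch, using the swarming scenario's rates (the function name and sampling step are our own):

```python
def arrival_rate(t_hours, base=6.0, flash=120.0, start=1.0, end=2.0):
    """Clients per minute at simulated time t_hours: a steady base rate,
    an abrupt flash-crowd impulse lasting one hour, then the base rate
    again once the crowd passes."""
    return flash if start <= t_hours < end else base

# sample the schedule every 15 simulated minutes over the first 3 hours
schedule = [arrival_rate(t / 60.0) for t in range(0, 181, 15)]
```

For the client-server run the impulse would be 12 rather than 120 clients per minute.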
As can be seen from these figures, swarming enables a
web server to smoothly handle large flash crowds that would
otherwise bring content delivery to a crawl. It maintains
reasonable response times as the crowd arrives, and dissipates
the crowd quickly. With the traditional client-server approach,
the crowd swells due to an inability to service the requests.
This causes a death spiral: the larger the crowd, the more
difficult it is to service any requests at all. The server will
not recover until long after the arrival rate decreases.
It is particularly impressive that we achieve this result
using a conservative and unoptimized swarming protocol as
⁴ In order to fill out the graph, we ran this simulation for 12,000
completions instead of the usual 6000.
27 January 2004 8 Exhibit 2 Page28
Case 2:11-cv-09437-DSF-JC Document 50-3 Filed 01/25/13 Page 31 of 163 Page ID
#:959
UNIVERSITY OF OREGON, COMPUTER AND INFORMATION SCIENCE TECHNICAL REPORT, CIS-TR-2004-1
[Figure: Download Duration (1s to 10h, log scale) vs. Simulated Arrival Time (0 to 14 hours); the increased-arrival period is marked.]
Fig. 6. Swarming reaction to a flash crowd
detailed in Section III. In particular, the simulated protocol
has swarming enabled at all times and the server continually
delivers blocks to clients as requested. With additional
optimization, particularly in the swarming initiation component,
the server should be able to handle even larger flash crowds.
In fact, by utilizing Gnutella or PROOFS to locate peers under
extremely high loads, swarming can be made effectively
immune to flash crowds.
D. High Load
Now that we have shown that swarming can provide
greatly increased system capacity, we must examine what
sort of burden it places on the server and the peers. We focus
on a very high load of 192 clients per minute, since this is
the region where packet loss and load imbalance can be the
worst. We again use the scenario from Table II.
With swarming at this heavy load, packet loss at the root
server is quite severe. We stress that the client-server protocol
has virtually 100% packet loss with an order of magnitude
less load. Impressively, swarming still manages to get the file
delivered to clients. Even at this high load, the congestion at
the root server can be relieved by using a server initiation
component that only delivers blocks of the file to a subset of
the clients. This would allow the server to primarily perform
redirection, while serving enough content to ensure it is
available to the clients.
Unlike the server, the peers experience very little packet
loss, even at high load. This is illustrated by the histogram of
peer packet loss rates shown in Figure 7, using a logarithmic
scale.⁵
The low packet loss rates at the peers can be attributed to
the burden of content delivery being spread evenly among
the peers. In this same high load scenario, roughly 60% of
⁵ Only outbound packet loss is shown in the figure; negligible inbound
packet losses occurred. This is not surprising given the highly asymmetric
bandwidths of the peers.
[Figure: Peers (%) vs. Outbound Packets Dropped (%), logarithmic x-axis.]
Fig. 7. Histogram of packet loss rates - 192 clients per minute
[Figure: Peers Serving Less (%) vs. Megabytes Served (0 to 4).]
Fig. 8. CDF of megabytes served at 192 clients per minute
the clients serve less than one megabyte. Nearly all of the
clients upload less than two megabytes. Re-serving the file
once or twice is fair, so this behavior is quite good. This result
is shown in Figure 8, which plots a cumulative distribution
function of the megabytes served by peers. Even if a peer has
served a whole megabyte, it may not have served the whole
file, since it may simply have served the same block many times.
This is one of the strengths of swarming; even a peer with a
small portion of the file can be quite helpful.
The time for a client to complete its download is less
evenly distributed than the amount of data served. For the
high load scenario, the download times are spread mostly
over a range between 60 seconds and 300 seconds. However,
more notably, a disproportionate number of download times
are close to multiples of 60 seconds. This is shown as a
[Figure: Peers (%) vs. Download Duration (0 to 11 minutes).]
Fig. 9. Histogram of download times at 192 arrivals per minute
histogram in Figure 9. This behavior did not manifest at
lower loads such as 16 clients per minute. It is important
to note that the download times are measured individually
from the start of each client; thus, this pattern does not
indicate synchronization of flows within the network. After
some investigation, we were able to confirm that the uneven
distribution is caused by our use of a 60-second timeout
to detect dead connections. Undoubtedly, these timeouts are
occurring due to the severe congestion at the server. This
suggests that alleviating the server congestion will also result
in significant improvements for the peers.
E. Dynamics of Peer Selection
To better understand the dynamics of peer selection, we
examine several peer-related metrics under various loads.
We are interested in the number of concurrent downloads
that a client is able to perform, the number of unique peers
that a client downloads from, the number of unique peers
that a client serves, and the total number of peers that a
client attempts to contact. Figure 10 plots these metrics as
a function of increasing load, once again using the basic
scenario given in Table II.
Once the arrival rate reaches 8 clients per minute, the pool
of active clients is large enough that the average number of
concurrent downloads for a client is close to the maximum
of 4. This metric is time-averaged, so for example if a client
spends half the simulation downloading from 3 peers and
half of it downloading from 4, then the average for that peer
is 3.5.
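The time-averaging used for this metric is just a duration-weighted mean; a minimal sketch reproducing the worked example (the helper name is ours):

```python
def time_averaged(samples):
    """Duration-weighted mean over (duration, value) samples."""
    total = sum(d for d, _ in samples)
    return sum(d * v for d, v in samples) / total

# a client spends half the simulation downloading from 3 peers
# and half downloading from 4, so its average is 3.5
avg = time_averaged([(0.5, 3), (0.5, 4)])
```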
This result also shows that the number of peers a client
attempts to contact increases as the load on the system
increases. At higher loads there are more active peers in the
system, but each peer is downloading at a slower rate. This
means that a client will need to contact more peers to find
the blocks it needs. Accordingly, the number of peers a client
downloads from increases during the region of high load.
[Figure: Peers vs. Mean Arrival Rate (clients/minute, 1/4 to 256, log scale); series: Peers Attempted, Peers Downloaded From, Peers Uploaded To, Concurrent Downloads.]
Fig. 10. Dynamics of Peer Selection
Likewise, the number of peers a client uploads to shows the
same behavior.
F. File size
Swarming also scales well with the size of the file, allowing
a small user to easily serve large files (e.g. multimedia).
We demonstrate this result in Figure 11, which shows the
mean download time for both swarming and client-server
as a function of the file size. For this simulation we use
an arrival rate of 4 clients per minute, with the same basic
scenario given in Table II. Varying the file size is similar to
varying the arrival rate in that both cases increase the load
on the root server. Swarming again exhibits only a linear
performance hit under high load (large files), and for file
sizes of two megabytes or larger the client-server protocol is
unable to enter steady-state.
An interesting result from this simulation is that gossiping
can impose a significant overhead when the block size is
small. For this simulation the number of blocks is 32,
regardless of file size. Thus as the file size decreases, the
gossip message becomes large relative to the data that is
transferred. This is shown in Figure 11, in the region where
file size is less than 256 KB; the mean download time never
goes below 4 seconds. Despite this overhead, swarming will
eventually outperform client-server for small files as the
arrival rate increases. Nevertheless, this is clear evidence
that swarming web servers can benefit from dynamic server
initiation.
G. Block Size
As can be seen from our discussion of file size, block size
is a key parameter for swarming. To fully explore the effect of
block size on swarming performance we conducted a series of
simulations with varying block and file sizes, using an arrival
rate of 16 clients per minute. Other details of the simulation
[Figure: Mean Download Duration (100ms to 1h, log scale) vs. Filesize (8KB to 8MB); series: Swarming, Client-Server.]
Fig. 11. Impact of filesize on performance
[Figure: Mean Download Duration (s) vs. Block Size (4 to 256 kilobytes), with secondary axis Number of Blocks (1 Megabyte / Block Size).]
Fig. 12. Effects of block size for a 1 MB file at 16 clients per minute
are the same as in Table II. Figure 12 shows the results of
one of these simulations, using a file size of 1 megabyte.
From these results we can identify two trends. First,
download time increases as the block size decreases. Recall
that the client first exchanges gossip messages with the root
server or a server peer before requesting a block. The client's
upload bandwidth is the bottleneck in this exchange. Hence,
as the block size becomes smaller, the transmission delay
incurred by the client transmitting a gossip message becomes
a significant part of the overall delay.
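This trend can be illustrated with a toy delay model. The broadband bandwidths come from Table III; the 1500-byte single-packet gossip size is our assumption, not a figure from the paper.

```python
UPLINK_BPS = 128_000        # broadband upstream (Table III)
DOWNLINK_BPS = 1_536_000    # broadband downstream (Table III)
GOSSIP_BYTES = 1500         # assumed single-packet gossip message (hypothetical)

def gossip_overhead(block_bytes):
    """Ratio of gossip transmission delay on the slow uplink to the time
    needed to download one block: grows as blocks shrink."""
    gossip_s = GOSSIP_BYTES * 8 / UPLINK_BPS
    block_s = block_bytes * 8 / DOWNLINK_BPS
    return gossip_s / block_s

small_blocks = gossip_overhead(4 * 1024)     # 4 KB blocks: gossip dominates
large_blocks = gossip_overhead(256 * 1024)   # 256 KB blocks: gossip is negligible
```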
The second trend in these results is that as the block
size increases the download time increases slightly for large
blocks. This is a result of the last block problem, which
[Figure: % of Download Duration on Only Last Block vs. Block Size (4 to 256 kilobytes), with secondary axis Number of Blocks (1 Megabyte / Block Size).]
Fig. 13. The last block problem for a 1 MB file at 16 clients per minute
occurs when the last block to be downloaded is coming
from a slow source. This causes the download to take a
long time to fully complete, even if most of the file was
transferred quickly. Figure 13 illustrates this effect for a
1 MB file transfer, plotting the percentage of time spent
transferring only the last block. This graph shows that for
a block size of 256 KB the last block consumes 35%
of the download time. BitTorrent solves this problem by
simultaneously downloading the last block from multiple
sources. While this results in redundant data transmission,
it can potentially improve download times.
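The last-block effect, and the benefit of fetching the final block redundantly, can be seen in a toy model. The timings below are illustrative values chosen to reproduce the 35% share quoted for 256 KB blocks, not measurements from the simulation.

```python
def last_block_share(bulk_secs, last_secs):
    """Fraction of the total download spent waiting on only the last block,
    given that the rest of the file finishes in bulk_secs and the final
    block alone takes last_secs from a slow source."""
    return last_secs / (bulk_secs + last_secs)

share = last_block_share(39.0, 21.0)    # 0.35, i.e. 35% of the download

# BitTorrent-style mitigation: request the last block from a second source
# as well and keep whichever copy arrives first (redundant traffic, but
# the last block now costs only the faster of the two transfers).
mitigated_share = last_block_share(39.0, min(21.0, 4.0))
```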
H. Concurrent Downloads
One interesting question for swarming is whether clients
are able to improve their performance by increasing the
number of concurrent downloads (N_d). We investigate this
swarming parameter in Figure 14 by plotting the mean
download time as a function of N_d, using two different client
arrival rates. The swarming scenario is the basic scenario
given in Table II.
Figure 14 shows that increasing concurrency does improve
performance for higher loads because more clients are active.
Just as importantly, increasing N_d does not adversely impact
performance for lower loads. Note that the increase in download
time due to the increase in load from 8 to 16 clients per
minute is consistent with Figure 4.
To explore this issue in more depth, we plot the dynamics
of peer selection for this same scenario at 16 clients per
minute. As Figure 15 illustrates, clients are able to download
from a maximum of about 5 peers at a time, despite raising
N_d to 32. Clients do in fact download from (and serve) a
greater number of peers as the concurrency limit is increased;
however, because the number of active peers is generally
about 7 they are unable to find enough active peers with
[Figure: Mean Download Duration (s) vs. Maximum Concurrent Downloads (1 to 128); series: 16 Arrivals/minute, 8 Arrivals/minute.]
Fig. 14. Impact of concurrent downloads (N_d)
[Figure: Peers vs. Maximum Concurrent Downloads (1 to 128); series: Peers Attempted, Peers Downloaded From, Active Peers, Peers Uploaded To, Concurrent Downloads.]
Fig. 15. Peer dynamics for concurrent downloads at 16 clients per minute
their desired content. This indicates that the load is not yet
high enough to fully exploit the level of concurrency we are
allowing. Furthermore, we should see additional concurrency
as we allow peers to have a non-zero lingering time.
I. Client Distribution
With swarming, as with any peer-to-peer system, it is
important to investigate the impact of low-bandwidth users
on client performance, since some peer-to-peer protocols
collapse when too many low-bandwidth users enter the system.
For example, the original Gnutella protocol had this flaw. To
address this concern, we conducted a variety of simulations
using different mixtures of clients drawn from three classes:
modem, broadband, and office. Table III lists the bandwidth
Type        Downstream   Upstream
Office      43Mbps       43Mbps
Broadband   1536Kbps     128Kbps
Modem       56Kbps       33Kbps
TABLE III
CLASSES OF USERS
[Figure: Mean Download Duration (s) vs. Broadband Peers (%), with secondary axis Modem Peers (100% - Broadband Peers); series: Modem, Broadband.]
Fig. 16. Impact of low-capacity clients
of each class of users. We assign each class of users a
different probability, then randomly assign new clients to
one of these classes according to these probabilities. Other
than client bandwidth, the rest of the scenario is the same as
Table II.
Our study shows that swarming behaves well when low-
bandwidth clients interact with higher-speed clients. As an
example of our results, Figure 16 plots the mean download
time for broadband and modem users. In this figure, only
broadband and modem users are represented, so as the
percentage of broadband users goes down, the percentage
of modem users goes up.
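The client-mix mechanism described above amounts to weighted random sampling over the classes in Table III. A minimal sketch, with example probabilities (the helper name and the particular mix are our own):

```python
import random

CLASSES = ("office", "broadband", "modem")   # user classes from Table III

def assign_class(probs, rng=random):
    """Randomly assign a new client to a user class according to the
    given per-class probabilities."""
    return rng.choices(CLASSES, weights=probs, k=1)[0]

# e.g. a mix of 60% broadband and 40% modem users, no office users
clients = [assign_class((0.0, 0.6, 0.4)) for _ in range(1000)]
```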
From this figure, it is clear that broadband users continue
to obtain reasonable performance even as the mix of users is
adjusted. As the percentage of modem users increases from
10% to 99%, the download time for broadband users increases
by roughly a factor of two. While this is a significant
increase, the system clearly continues to function well despite
an overwhelming number of low-bandwidth users. We see a
similar result for office users: their mean download time
increases by a factor of 3 for the same changes in the mix
of broadband and modem users. The performance of modem
users is relatively unchanged by large numbers of higher-
speed users because their access link remains a bottleneck.
Our results also demonstrate that broadband users do not
see a significant performance increase when small numbers
of office users participate in swarming. This is a side-effect
of two aspects of our conservative swarming implementation.
First, we are using a lingering time of zero, so that
office users do not stay around for long periods helping
slower users. Second, clients are not doing any kind of
bandwidth-based peer selection. Introducing this latter
mechanism should enable clients to take advantage of friendly
office users. At the same time, faster users should be able
to place a cap on the amount of bandwidth they dedicate to
swarming in order to protect both their own performance and
the performance of their local network.
V. CONCLUSIONS & FUTURE WORK
Our results show that swarming scales with offered load
up to several orders of magnitude beyond what a basic web
server can manage. This is an important result, given
swarming's popularity in peer-to-peer file-sharing systems. Most
impressively, swarming responds quickly to flash crowds,
with only a slight increase in download time during the crowd
and a rapid return to lower download times once the system
returns to steady state. These results confirm that swarming is
an excellent choice for the distribution of multimedia content
and software updates.
A closer examination of swarming under heavy load
indicates that swarming evenly distributes load among the
peers and does not cause significant packet loss at the peers.
Operating at high load can cause significant packet loss
for the root server, but swarming is still able to operate
effectively during this time.
We have also examined a number of key swarming
parameters. We find that swarming is sensitive to block size,
with blocks on the order of 16 to 32 KB providing good
performance for larger file sizes. Swarming also performs
well across various combinations of client bandwidth. In
particular, low-speed users will naturally decrease swarming
performance for broadband users but will not introduce
significant problems.
From a practical perspective, swarming does have some
drawbacks. Because it may potentially use many TCP
connections, swarming may steal bandwidth from regular client-
server applications. Creating a mechanism for swarming, and
other peer-to-peer applications, to share more evenly is an
open research problem. In addition, swarming, like many
peer-to-peer applications, faces deployment difficulties when
users employ Network Address Translation (NAT), because
NAT does not allow for incoming connections without special
manual configuration. While various mechanisms can work
around this difficulty, it becomes more difficult when two
peers use NAT.
Finally, our study lays the groundwork for future research
in many interesting areas. Of particular concern is reducing
congestion at the root server during high load. A server
should be able to switch almost completely to redirection
during high load, since many peers will have content to serve.
Likewise, a dynamic server initiation component should be
able to decide when to use client-server transfer (for small,
unpopular files) and when to use swarming (for large or
popular files). Other avenues of research include bandwidth-
based and distance-based peer selection, dynamic adjustment
of the number of concurrent downloads, peer performance
monitoring, and more efficient gossiping. We also plan to
explore additional scenarios for swarming, such as non-
cooperative peers and the effects of some peers lingering for
a long time after their download is complete.
REFERENCES
[1] BitTorrent. http://bitconjurer.org/BitTorrent/.
[2] C. Mic Bowman, Peter B. Danzig, Darren R. Hardy, Udi Manber, and
Michael F. Schwartz. The Harvest Information Discovery and Access
System. Computer Networks and ISDN Systems, 28(1-2):119-125, 1995.
[3] J. Byers, M. Luby, and M. Mitzenmacher. Accessing Multiple Mirror
Sites in Parallel: Using Tornado Codes to Speed Up Downloads. In
IEEE INFOCOM, April 1999.
[4] Russell J. Clark and Mostafa H. Ammar. Providing Scalable Web
Services Using Multicast Communication. Computer Networks and
ISDN Systems, 29(7):841-858, 1997.
[5] F. M. Cuenca-Acuna, R. P. Martin, and T. D. Nguyen. PlanetP:
Using Gossiping and Random Replication to Support Reliable Peer-
to-Peer Content Search and Retrieval. Technical Report DCS-TR-494,
Department of Computer Science, Rutgers University, 2002.
[6] A. Demers, D. Greene, C. Hauser, W. Irish, J. Larson, S. Shenker,
H. Sturgis, D. Swinehart, and D. Terry. Epidemic Algorithms for
Replicated Database Maintenance. In Proceedings of the Sixth Annual
ACM Symposium on Principles of Distributed Computing, pages 1-12,
1987.
[7] eDonkey2000. http://www.edonkey2000.com/.
[8] Gnutella. http://rfc-gnutella.sourceforge.net/.
[9] Richard M. Karp, Christian Schindelhauer, Scott Shenker, and Berthold
Vocking. Randomized Rumor Spreading. In IEEE Symposium on
Foundations of Computer Science, pages 565-574, 2000.
[10] K. Kong and D. Ghosal. Mitigating Server-Side Congestion on
the Internet Through Pseudo-Serving. IEEE/ACM Transactions on
Networking, August 1999.
[11] J. Kubiatowicz, D. Bindel, Y. Chen, S. Czerwinski, P. Eaton, D. Geels,
R. Gummadi, S. Rhea, H. Weatherspoon, W. Weimer, C. Wells, and
B. Zhao. OceanStore: An Architecture for Global-Scale Persistent
Storage. In ASPLOS, 2000.
[12] Open Content Network. http://www.open-content.net/.
[13] Venkata N. Padmanabhan and Kunwadee Sripanidkulchai. The Case
for Cooperative Networking. In 1st International Workshop on Peer-
to-Peer Systems (IPTPS), 2002.
[14] Venkata N. Padmanabhan, Helen J. Wang, Philip A. Chou, and
Kunwadee Sripanidkulchai. Distributing Streaming Media Content Using
Cooperative Networking. In Proceedings of NOSSDAV, 2002.
[15] S. Ratnasamy, P. Francis, M. Handley, R. Karp, and S. Shenker. A
Scalable Content-Addressable Network. In ACM SIGCOMM, August
2001.
[16] P. Rodriguez and E. W. Biersack. Dynamic Parallel-Access to
Replicated Content in the Internet. IEEE/ACM Transactions on Networking,
August 2002.
[17] Squid Web Proxy Cache. http://www.squid-cache.org/.
[18] Tyron Stading, Petros Maniatis, and Mary Baker. Peer-to-Peer Caching
Schemes to Address Flash Crowds. In 1st International Workshop on
Peer-to-Peer Systems (IPTPS 2002), March 2002.
[19] Angelos Stavrou, Dan Rubenstein, and Sambit Sahu. A Lightweight,
Robust P2P System to Handle Flash Crowds. In IEEE ICNP,
November 2002.
[20] I. Stoica, R. Morris, D. Karger, M. F. Kaashoek, and H. Balakrishnan.
Chord: A Scalable Peer-to-Peer Lookup Service for Internet
Applications. In ACM SIGCOMM, August 2001.
[21] Swarmcast SourceForge Project. http://sourceforge.net/projects/
swarmcast/.
EXHIBIT 3
1/10/13 Taking on the "Great Firewall of China"
www.newstatesman.com/staggers/2012/10/taking-great-firewall-china
BY HELEN LEWIS PUBLISHED 18 OCTOBER 2012 10:21
The Staggers
The New Statesman's rolling politics blog
Taking on the "Great Firewall of China"
This week we are producing a digital version of the New Statesman in Mandarin, to evade China's internet censors. Here's why.
China has tried to obliterate the existence of Ai Weiwei from the internet: search for his name there, and you'll find nothing. His blog has been shut down, his
passport was confiscated, and his communication with the outside world from his studio near Beijing is monitored.
In a profile of the artist, written after a visit to China this summer, the NS's Features Editor Sophie Elmhirst wrote:
The issues on which Ai has spoken out are vital ones: the shoddy construction standards which led to needless deaths in the Sichuan earthquake; the censorship of
the press; the limitations placed on the internet by the "Great Firewall of China".
So the New Statesman decided to do what it could to help. This week, we have produced the magazine in Mandarin, in PDF format, which we are uploading to file-
sharing sites (here's the .torrent file and here's the magnet link; please share both widely). Internet-savvy people in China have learned how to get round the
censors using private networks and encryption, and they will be able to access the digital version of the NS and give it to their friends.
What will they find inside? A story very different to the one they are told by the state-controlled press. Inside the issue, the former newspaper editor Cheng Yizhong
speaks about how the Southern Metropolis Daily exposed the brutal "custody and repatriation" procedure used by the government on those without the correct ID,
and the confinement and fatal beating of Sun Zhigang in 2003 (and subsequent cover-up). In 2004, Cheng was detained in secret for more than five months by the
Guangdong authorities for "economic crimes", before being released.
In an exclusive essay, Cheng recounts the stifling conditions of media censorship in China, opening up about a media culture bombarded by "prohibitions" and riddled
Exhibit 3 Page34
Case 2:11-cv-09437-DSF-JC Document 50-3 Filed 01/25/13 Page 38 of 163 Page ID
#:966
Protests against the detention of the artist Ai Weiwei. Photo: Getty
Ai might be celebrated in the west and a hero to his fans in China (those who are able to skirt the Great Firewall), but the vast majority of China's 1.3
billion people, the ones living in the cities you've never heard of, in the factory towns making our iPhones and in the remote rural villages with no access
to running water, have no idea who he is. And they have no means of finding out.
with informers who report directly to the government, in which only a minority
of journalists are brave enough to fight the system. He writes:
After 2005, the system enacted the strategy of "demoralise, divide and
conquer". The central publicity department started sending censors
directly to major media organisations to carry out censorship prior to
publication. The central government was therefore not only passing
comment on news after publication, but had a pre-publication
checkpoint. The dual system formed a pincer movement and provided a
double safeguard.
Another policy was even more effective: the direct appointment of
publicity department officials to leadership positions in major media
organisations. Between 1996 and now, three news section directors in
Guangdong's publicity department have been promoted to senior
positions in the Southern Newspaper Group. In other words, three news
police chiefs took up editor-in-chief positions.
[...]
Censorship happens secretly; it is silent and effective. By forbidding any paper evidence, and by phoning or sending text messages directly among
different levels, only one-way communication takes place between the publicity department and the media leadership, and between higher- and lower-level
media leaders. The only rule for subordinates is to be loyal to the higher leadership and not cause trouble for them.
China's government has been quick to exploit the latest software in order to repress freedom of speech online, too. In the Observations section this week, Cheng Hua
notes that foreign media companies must have a licence to operate inside China, requiring "the State Council Information Office to evaluate their safety". If they
criticise the government, they mysteriously become inaccessible in China, and disappear from Chinese Google results.
Internet comments are also censored. Cheng writes:
Internet companies have developed software capable of automatically filtering and censoring comments . . . they include words and phrases such as CCP,
Jiang, Li, Hu, Wen, central publicity department, democracy, freedom and multiparty system.
In the magazine, Ai Weiwei interviews a member of the "50 cent party", a commenter paid half a dollar every time he derails an online debate in China. Essentially,
these people are paid internet trolls; their job is to stop any meaningful discussion online about the government.
After we've found the relevant articles or news on a website, according to the overall direction given by our superiors we start to write articles, post or reply
to comments. This requires a lot of skill. You can't write in a very official manner, you must conceal your identity, write articles in many different styles,
sometimes even have a dialogue with yourself, argue, debate. In sum, you want to create illusions to attract the attention and comments of netizens.
In a forum, there are three roles for you to play: the leader, the follower, the onlooker or unsuspecting member of the public. The leader is the relatively
authoritative speaker, who usually appears after a controversy and speaks with powerful evidence. The public usually finds such users very convincing. There
are two opposing groups of followers. The role they play is to continuously debate, argue, or even swear on the forum. This will attract attention from
observers. At the end of the argument, the leader appears, brings out some powerful evidence, makes public opinion align with him and the objective is
achieved.
Elsewhere in the issue, we hear about how Tibetans are routinely treated as second-class citizens; how human rights lawyers are persecuted; and how
artists and film-makers learn to self-censor if they want to be successful.
Some bright spots exist. Although Ai's blog was shut down, he is a prolific user of Twitter. For his guest-edited issue of the NS, he asked his 170,000 followers for
their thoughts on the future of China, providing a unique portrait of the country through the eyes of its citizens.
There are also many in China who are dedicated to speaking the truth, despite the often-dire personal consequences. In the magazine, Tsering Woeser, whose 2003
collection of essays was banned for being "politically erroneous", writes about Tibet; the lawyer Li Fangping writes about "re-education through labour"; and political
lecturer Teng Biao writes about the death penalty. We also have lyrics by two dissident rock stars, and an interview with the artist Zhou Zhou, Ai Weiwei's protege,
who has also been arrested on trumped-up charges.
So there you have it. Most weeks we are very keen to have people pay for the magazine; it makes all our work possible. But this week, we want to give it away
for free.
Here is a direct link to the PDF, here is a link to the torrent file, here is a magnet link for the torrent, and here is a mirror of the torrent on Kickass
Torrents. Please share.
Tags: internet China
1/10/13
4/14 www.newstatesman.com/staggers/2012/10/taki ng-great-fi rewal l -chi na

SAT, 2012-10-20 09:24 STEEEVYO (NOT VERIFIED)
20 Comments
Some deluded Westerners playing rebels.
Ai Weiwei should start criticising the west again as he once did when he didn't misrepresent his own
country for a quick buck from so-called progressives in Europe and the US.
By the way:
WED, 2012-10-31 03:54 HELLO FROM SHANGHAI (NOT VERIFIED)
FRI, 2012-10-19 13:28 GEORGE HUMPHREY (NOT VERIFIED)
FRI, 2012-10-19 17:46 MINTER (NOT VERIFIED)
If the average Chinese person would not know how to circumvent censorship without
benevolent (read colonial attitude/arrogance) Western assistance, then how come that certain Japanese
pornstars are well known celebrities in China?
Greetings from Chengdu
Talking about the "China's internet censors", I just want you guys to know that I am sitting in
my office in Shanghai China, having a cup of coffee, reading this great article! Well, when I go for
my coffee break this afternoon, I will read, on my mobile phone, The Guardian, New York
Times, and the Telegraph and BBC as usual. What a nice place here! And of course, I will still
follow the story of the ugly, disgusting "uncle Jammy" from BBC. :)
To Minter:
The difference between you and those folks bashing each other (the commie hippies and bible bashing
nutcases you reference above) is that none of them are being paid by their government to put posts up
on the internet, like you are. :-)
Change the record. See my post to New Stateswoman. It's become the standard response and thus
loses its integrity - to the Chinese making comments, why should they bother to respond when
all you say is "50 center"? Or maybe that's what folk like you want... a one sided conversation -
free speech, as long as we agree with your view.
FRI, 2012-10-19 10:45 MINTER (NOT VERIFIED)
FRI, 2012-10-19 07:46 HASDRUBAL (NOT VERIFIED)
FRI, 2012-10-19 10:56 MINTER (NOT VERIFIED)
As for the Americans not doing it to each other, how would you know exactly? It has been
discovered that the FBI keep tabs on "the atmosphere" re: social networking sites. Israelis and
Muslims also have their "internet defence league". It is entirely plausible that the Americans have
the same.
No Chinese person in China gives a shit about this site... those that want to look up foreign views and
news of the world and of China, they report to CNN, BBC etc... do you really think they would
consider a niche site in Britain, let alone the world?
As for this comment
"the ones living in the cities you've never heard of, "
whose fault is that I wonder. Showing off your ignorance is not a good trait.
Sitting in Shandong, China, I just downloaded the file from your direct link to my machine. Seems the
all-seeing eye is not alerted yet to this subversive act.
What a rebel you are! I sincerely hope for your sake you didn't feel a tinge of excitement over
your super sneakiness.
FRI, 2012-10-19 14:27 HASDRUBAL (NOT VERIFIED)
FRI, 2012-10-19 17:47 MINTER (NOT VERIFIED)
FRI, 2012-10-19 04:37 PERCYALPHA (NOT VERIFIED)
THU, 2012-10-18 15:13 HUGH C MARKEY (NOT VERIFIED)

Yet you replied...


Unencrypted version of Dropbox is blocked in China. Please edit your hyperlink to
https://dl.dropbox.com/u/6048377/AWW20New20Statesman.pdf.torrent for "link to the torrent
file,"
Let's get real. The internet and its spin-offs are US owned. The US Free Enterprise system took over a
US state network and prettified it to merchandising standards. European research produced the 'Web'
and this new development was incorporated by global business into the greatest selling gimmick ever.
Yea, politics too, bud.
Nothing is new under the sun. It's sorta free - just as commercial television is paid for by the
advertising industry, its cost disguised by adding the expense to the product.
Stealth politics, baby.
Just as long as you bear this in mind, and don't think it's a product of fairy dust, you'll have some sort
of perspective.
More importantly it can be used as a 'regime changer'. This fact hasn't been overlooked by the
subversive or the potential target.
Pretty bloodless from the point of view of the West. Overthrow China before it gains unstoppable
military power. Prime target.
Oh, and Mandarin is pretty safe as it tells everybody to do as they're told. Nevertheless, how will a
logographic script fit into the advertising domain? Mad Men will manage - don't you worry, sucker.
WED, 2012-10-31 03:53 HELLO FROM SHANGHAI (NOT VERIFIED)
THU, 2012-10-18 11:41 LEN (NOT VERIFIED)
"What will they find inside? A story very different to the one they are told by the state-controlled
press."
No offence but there is already plenty of dissenting information around in China - just because their
government is trying to delete things it doesn't mean nobody has heard of them - if they're on the
internet getting political information from torrents, chances are they already have heard of Ai Weiwei.
Trying to stereotype 1.3bn Chinese as politically unaware drones who know nothing apart from what
state media tells them is kind of wrong and kind of racist.
This is the equivalent of a Chinese magazine posting a link for UK users to a torrent for the Batman
film or something. Those poor Brits have to pay to watch movies!
What's the Western obsession with Ai Weiwei anyway? I think a lot of it comes from the fact he's a
refined arts guy who lived in the US for 12 years, rather than his standing as any kind of
revolutionary.
WED, 2012-10-31 03:57 HELLO FROM SHANGHAI (NOT VERIFIED)
THU, 2012-10-18 13:01 NEW STATESWOMAN (NOT VERIFIED)
FRI, 2012-10-19 10:49 MINTER (NOT VERIFIED)
FRI, 2012-10-19 10:48 MINTER (NOT VERIFIED)
So, I'm assuming you're a paid-up member of the 50 Cent Brigade then...
Ha, love it. Anything that is opposing the general western view of "China bad, China red"
and you're called a 50center. Much like any Americans opposing the Republicans are commie
hippies and opposing the Democrats are bible bashing nutcases... oh wait, that doesn't
happen nearly as much as the blanket assumption of this.
Who are the brainwashed folk, really?
THU, 2012-10-18 10:48 DES DEMONA (NOT VERIFIED)
WED, 2012-10-31 03:52 HELLO FROM SHANGHAI (NOT VERIFIED)
WED, 2012-10-31 03:52 HELLO FROM SHANGHAI (NOT VERIFIED)
Well done the NS.
and
'' Essentially, these people are paid internet trolls; their job is to stop any meaningful discussion online
about the government''
I think there are a few of those on these forums!
Comments for this thread are now closed.




EXHIBIT 4
1/10/13  OpenOffice.org P2P Downloads
www.openoffice.org/distribution/p2p/index.html
The Free and Open Productivity Suite
OpenOffice.org P2P Downloads
BitTorrent Links - Magnet & MetaLinks
Download OpenOffice.org using P2P Technology
To download OpenOffice.org using our BitTorrent servers,
simply select your download using these three simple
steps.
If you would prefer to use Magnet or Metalinks to download
OpenOffice.org, please visit our Magnet & Metalinks
Download Page.
To mirror all of the torrents, using a suitable client (e.g.
Azureus), please use our RSS torrent feed.

Choose platform


About BitTorrent
BitTorrent is a P2P method where a central 'tracker' keeps track of who is
downloading and sharing specific files.
When using BitTorrent to download OpenOffice.org, your computer
automatically uses spare bandwidth to help share the file with others, and
this means that you don't have to put up with slower downloads during
peak download times (such as just after a release), because the more
people downloading, the more people sharing.
Also, your download is automatically checked for integrity to make sure
that it is identical to the official version.
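The integrity check described above can be sketched in a few lines of Python: a BitTorrent client splits the file into fixed-size pieces and compares each piece's SHA-1 digest against the digests stored in the .torrent metadata. `verify_pieces` is a hypothetical helper written for illustration, not part of any real client.

```python
import hashlib

def verify_pieces(data: bytes, piece_length: int, piece_hashes: list) -> bool:
    """Check every fixed-size piece of `data` against its expected SHA-1
    digest, the same per-piece integrity check a BitTorrent client performs."""
    pieces = [data[i:i + piece_length] for i in range(0, len(data), piece_length)]
    if len(pieces) != len(piece_hashes):
        return False
    return all(hashlib.sha1(p).digest() == h for p, h in zip(pieces, piece_hashes))

# Demo: build the expected hashes the way a .torrent file would store them.
payload = b"OpenOffice.org installer bytes..." * 1000
plen = 16384  # piece length in bytes (a common power-of-two choice)
expected = [hashlib.sha1(payload[i:i + plen]).digest()
            for i in range(0, len(payload), plen)]

assert verify_pieces(payload, plen, expected)       # untampered download passes
assert not verify_pieces(payload + b"x", plen, expected)  # corruption is caught
```

Because every piece is checked independently, a client can discard and re-request only the corrupted piece rather than the whole file.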
To use BitTorrent technology, you must have a BitTorrent "client" installed.
BitTorrent Clients
uTorrent (Wi ndows)
Official BitTorrent Client (Cross-Platform)
Azureus (Cross-Platform)
ABC (Windows, Linux)

Information on the OpenOffice.org P2P Project
Shareaza (Windows)
Tomato Torrent (Mac OS X)
BitComet (Windows)
aria2 (Linux)





EXHIBIT 5
EVOLUTION
Download Your Genomes on BitTorrent
Today a group of researchers announced they would be releasing several people's genome sequences online. Anyone can download or repurpose the data, which will be shared on BitTorrent and other file-sharing networks.
Science Commons' John Wilbanks made the announcement today at O'Reilly's ETech Conference in San Jose, explaining that the data would be provided by Harvard's Personal Genome Project, a group whose goal is to sequence thousands of people's genomes over the next several years. They're committed to making all the data from these genomes public, in order to foster research into everything from disease to evolution.
The first few genomes that the Personal Genome Project has sequenced will soon be available via BitTorrent, using a system developed by ProteomeCommons. The genomes are being released under a Creative Commons Zero (CC0) agreement, which places zero restrictions on how people use the data.
MAR 11, 2009 2:42 PM BY ANNALEE NEWITZ
Wilbanks explained that the genomes themselves would not be CC licensed, since you can't copyright genomic data - but all the notes and information about the genomes would be available to the public. And the public is welcome to use the genomic data however they wish.
Ultimately, he suggested, making the data available in such an accessible manner encourages scientists to share their research findings and may discourage companies from locking up the information in dubious patents.




EXHIBIT 6
open access
www.bioinformation.net
Database
Volume 8(5)
ISSN 0973-2063 (online) 0973-8894 (print)
Bioinformation 8(5): 239-242 (2012) © 2012 Biomedical Informatics

Distribution of biological databases over low-bandwidth networks

Sikander Azam 1,2, Shamshad Zarina 1,*

1 National Center for Proteomics, University of Karachi, Karachi-75270, Pakistan; 2 National Center for Bioinformatics, Quaid-e-Azam University, Islamabad; Shamshad Zarina Email: szarina@uok.edu.pk; Phone: 0092-21-34656511; Fax: 0092-21-34650726; *Corresponding author

Received February 07, 2012; Accepted March 03, 2012; Published March 17, 2012


Abstract:
Databases are an integral part of bioinformatics and need to be accessed frequently, so downloading and updating them on a
regular basis is critical. Establishing a bioinformatics research facility is a challenge for developing countries, which
suffer from inherently low-bandwidth and unreliable internet connections. Therefore, identifying techniques that support
download and automatic synchronization of large biological databases at low bandwidth is of utmost importance. In the current study,
two protocols (FTP and BitTorrent) were evaluated, and the utility of a BitTorrent-based peer-to-peer (btP2P) file distribution model
for automatic synchronization and distribution of large datasets at our facility in Pakistan is discussed.



Background:
During the last couple of years, the scientific community in
developing countries, including Pakistan, has developed an interest
in bioinformatics research [1, 3]. Most bioinformatics
applications are database dependent, and reliable internet
connections are required to access, search and retrieve data. To
facilitate maximum utilization, bioinformatics resources and
facilities around the globe prefer to download these databases
to their local servers. There has been exponential growth in
database records as a consequence of major advances in
genomics and proteomics technologies, stressing the need for
frequent updates with the latest releases. Many developing
countries face a major problem in regularly updating databases
due to lack of infrastructure, slow or unreliable internet
connectivity and low bandwidth. It is expected that in the future
database sizes will outgrow the existing rate of transfer at
current bandwidth, so it is imperative to develop efficient
tools for obtaining automatic updates on a regular basis. To
address such issues, the Bio-Mirror project was launched,
which uses the FTP mode of data transfer [4].

Updates are usually managed by a client-server approach (FTP or
WWW) or by P2P (peer-to-peer) file sharing applications. FTP has
been a traditional method for file sharing and downloading
from a remote server and is very popular for downloading large
files. However, it requires large network bandwidth and
suffers from a scalability bottleneck. As an alternative, P2P
applications have become immensely popular for fast and
efficient distribution of files in recent years. P2P architecture
operates in a distributed, autonomous mode that does
not rely on a specific server system. The torrent protocol's working
environment is based on the peer-to-peer (P2P) technique, in which
every user is connected to every other in a mesh topology.
The FTP protocol's working environment, on the other hand,
depends entirely on a single server, which creates a single point
of failure. The performance of traditional
FTP file sharing applications deteriorates rapidly as the number
of clients increases, while in the P2P model, more peers mean
better performance. There are many P2P file sharing
applications, such as Kazaa, Gnutella, Napster and BitTorrent, to
name a few. Among these applications, the BitTorrent P2P file
sharing system has been analyzed in many studies [5, 6].
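The scaling contrast described above can be illustrated with a toy capacity model (our own illustrative assumption, not a model from the paper): an FTP server's fixed upload capacity is divided among its clients, whereas in a P2P swarm every downloader also contributes upload capacity.

```python
def ftp_per_client_kbps(server_upload_kbps: float, clients: int) -> float:
    """One server's upload capacity shared equally among all clients."""
    return server_upload_kbps / clients

def p2p_per_client_kbps(seed_upload_kbps: float,
                        peer_upload_kbps: float,
                        clients: int) -> float:
    """Seed capacity plus every peer's contribution, shared among clients."""
    return (seed_upload_kbps + clients * peer_upload_kbps) / clients

# Hypothetical numbers: a 1000 KB/s server/seed, peers uploading 50 KB/s each.
for n in (1, 10, 100):
    print(n, ftp_per_client_kbps(1000, n), p2p_per_client_kbps(1000, 50, n))
# FTP per-client speed collapses as the client count grows; the P2P figure
# instead approaches the per-peer upload rate rather than zero.
```

The model ignores tracker overhead and asymmetric links, but it captures why FTP deteriorates with load while a swarm improves.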

Considering the existing scenario and future difficulties,
techniques supporting automatic synchronization of databases
at low bandwidth are of utmost importance. In the current study,
the efficiency of the FTP and BitTorrent applications is compared
for downloading a large (gigabyte-sized) database and serving

it through local servers without delays or response timeouts.
With the help of the btP2P protocol, the problems of
updating the enormous volume of biological database data while
avoiding network connection issues have been
addressed.

Methodology:
Computational Resources
Two Sun Microsystems Ultra 20 M2 nodes with dual-core
processors were used in the current study. These servers
were selected for their stability, reliability and
maximum uptime [7].

Database Selection
The NCBI website [8] was used to download databases.
The NCBI website supports the FTP protocol, and all databases,
such as PubMed, Nucleotide, EST, Protein, Structure, SNPs
and conserved protein databases, are available on its FTP servers.

Selection of Application for database downloads
Multiple applications for the FTP and torrent protocols are
available. FileZilla [9] and BitComet [10] were selected as
representatives of the FTP and BitTorrent procedures, respectively.
These programs are among the best clients, with the ability to
download data over the same interval of time. FileZilla is a
single-server-based solution that does not support torrent files,
becomes slower as the number of users increases, and lacks a
resume facility after an internet link failure. BitComet, on the
other hand, is a peer-based solution that uses a mesh topology
and supports resume as well as both torrent and FTP
downloads.

Performance Evaluation
Most bioinformatics databases are uploaded to FTP
servers. Downloading of a database was performed using
both applications and was monitored over a span of fifteen
hours. To ensure that the network load was the same for both
methods during the test period, the whole procedure was
carried out on different machines with identical specifications
on the same network. Both btP2P and FTP performance
were evaluated.
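The monitoring procedure above can be sketched as a simple polling loop. This is a hypothetical reconstruction for illustration; the paper does not specify its measurement tooling, and `monitor_download`, its parameters, and the sampling interval are our own assumptions.

```python
import os
import time

def monitor_download(path: str, interval_s: float, samples: int):
    """Poll a growing download file and report average throughput
    (bytes per second) over each sampling interval.

    In the study the two clients were observed for fifteen hours; here
    `interval_s` and `samples` are left as parameters so the same loop
    works for a short test or a long run.
    """
    readings = []
    last = os.path.getsize(path) if os.path.exists(path) else 0
    for _ in range(samples):
        time.sleep(interval_s)
        now = os.path.getsize(path) if os.path.exists(path) else 0
        readings.append((now - last) / interval_s)  # bytes/s this interval
        last = now
    return readings
```

Logging per-interval throughput for both clients on identically specified machines, as the paper describes, yields directly comparable curves like those in Figure 1A.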


Figure 1: Comparison between the BitTorrent and FTP
protocols, indicating speed (A) and downloaded data (B) over 15
hours
Discussion:
Databases are usually downloaded using a client-server
architecture such as FTP. If the server becomes overloaded, response
time can increase. P2P file sharing protocols have gained
popularity as an alternative to FTP. In the current
communication, the applications FileZilla and BitComet
were compared as representatives of the FTP and BitTorrent (P2P)
protocols. Our results indicate that the BitTorrent protocol is more
efficient at downloading large data (GB) in less time,
Table 1 (see supplementary material). In the first hour, the
download speed of BitComet was 87 KB/s with 234 MB transferred, while the
download speed using FileZilla was 21 KB/s with 66 MB. In
successive hours, the torrent download speed kept
improving relative to FTP, and by the end of the fifteenth hour the
torrent download speed was at least four times the
FTP download speed.
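A quick check of the arithmetic, using only the first-hour figures quoted above, confirms the roughly fourfold difference:

```python
# First-hour throughput reported in the study.
bittorrent_kb_s = 87  # BitComet
ftp_kb_s = 21         # FileZilla

speedup = bittorrent_kb_s / ftp_kb_s
print(f"first-hour speed ratio: {speedup:.2f}x")

# Consistent with the paper's claim that by the fifteenth hour the
# torrent speed was "at least four times" the FTP speed.
assert speedup > 4
```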

The speed comparison of the torrent and FTP protocols with
respect to time is shown in Figure 1A. The results show the
slow speed of FTP compared to the torrent. Figure 1B
presents the downloaded-data comparison between the two
protocols over the specified time limit. This further demonstrates that
the torrent is more reliable than the FTP protocol. In
recent years, a significant part of internet bandwidth has been
used by P2P traffic. BitTorrent is a popular P2P application that
aims to avoid the bottleneck of FTP servers while delivering large
and popular files [11]. An earlier communication clearly
showed the better performance of the btP2P protocol over traditional
FTP for automatically synchronizing large amounts of
biological database data across three countries of the Asia-Pacific
region [12]. However, that study compared FTP and P2P file
sharing applications using Azureus as the BitTorrent
representative. For the current study, BitComet was selected, a
client written in C++. BitComet runs in the Windows
environment and offers a preview download mode, so users
can preview downloaded content before the file has been
completely finished. It allows users to create their own torrents
and can be used for HTTP/FTP downloads, the format usually
used for most bioinformatics database downloads. The
results obtained from our study demonstrate that btP2P
techniques can be applied to scale database servers and can
outperform client-server-based applications. With the two available
nodes, performance using btP2P was better
than that of FTP. The results of our study showed a significant
improvement in download performance using btP2P over
the conventional File Transfer Protocol (FTP). Our study has
demonstrated the reliability of btP2P in the transmission of
continuously growing multi-gigabyte biological databases
without failure. Furthermore, download performance for
btP2P can be intensified further by including more nodes from
various parts of the country. This study suggests that
btP2P technology is highly appropriate for file sharing
applications, as it is effective, viable and self-scalable.

Conclusion:
Based on the above observations, it can be concluded that the
Torrent protocol is almost four times faster than the FTP
protocol. Hence, the torrent protocol is recommended as a
better tool for updating and synchronizing biological data sets
over low bandwidth. Results obtained from this study support
the findings of Sangket et al. [12], who compared the
downloading performance between FTP and btP2P on different
subnets among developing countries. Most of the databases use
the FTP protocol, and as the Bitcomet client supports both FTP
and torrent transfers, it may offer a better choice. The
download performance of btP2P can be improved further by
including more nodes from other institutes and Research and
Development (R&D) organizations. It is suggested that btP2P
technology may be an appropriate application for file sharing,
automatic synchronization and distribution of biological
databases and software over low-bandwidth networks.

Acknowledgments:
Authors are grateful to the Higher Education Commission,
Pakistan for the financial support for this work (grant no: 20-
752).

References:
[1] Ilyas M et al. PLoS Comput Biol. 2011 7: e1001135 [PMID: 21750669]
[2] Ranganathan S et al. Appl Bioinformatics. 2002 1: 101 [PMID: 15130849]
[3] Ranganathan S et al. BMC Bioinformatics. 2008 9: S1 [PMID: 18315840]
[4] Gilbert D et al. Bioinformatics. 2004 20: 3238 [PMID: 15059839]
[5] Pouwelse J et al. Peer-to-Peer Systems IV. 2005 3640: 205
[6] Guo L et al. IEEE J Selected Areas Commun. 2005 25: 155
[7] Garud R & Kumaraswamy A. Strategic Management Journal. 2006 14: 351
[8] http://www.ncbi.nlm.nih.gov
[9] http://filezilla-project.org/
[10] http://www.bitcomet.com/
[11] Wei et al. Future Generation Computer Systems. 2007 23: 983
[12] Sangket U et al. Bioinformatics. 2008 24: 299 [PMID: 18037613]
Edited by P Kangueane
Citation: Azam & Zarina, Bioinformation 8(5): 239-242 (2012)
License statement: This is an open-access article, which permits unrestricted use, distribution, and reproduction in any medium,
for non-commercial purposes, provided the original author and source are credited.


Supplementary material:

Table 1: Time and speed comparison between BitTorrent and FTP software
S. No. | Torrent Download Speed (KB) | Torrent Downloaded Data (Mb) | FTP Download Speed (KB) | FTP Downloaded Data (Mb) | Time in Hours
1  |  87 |  234 | 21 |   66 |  1
2  |  82 |  522 | 20 |  150 |  2
3  |  67 |  774 | 20 |  216 |  3
4  |  68 | 1020 | 20 |  288 |  4
5  |  83 | 1302 | 21 |  372 |  5
6  |  88 | 1596 | 21 |  456 |  6
7  |  90 | 1884 | 22 |  546 |  7
8  |  88 | 2202 | 22 |  636 |  8
9  |  86 | 2466 | 22 |  720 |  9
10 |  82 | 2742 | 22 |  798 | 10
11 |  84 | 3030 | 21 |  870 | 11
12 |  88 | 3300 | 23 |  978 | 12
13 |  90 | 3588 | 23 | 1062 | 13
14 |  90 | 3876 | 23 | 1146 | 14
15 |  95 | 4164 | 23 | 1236 | 15
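The near-fourfold speedup quoted in the article's conclusion can be recomputed directly from these columns; a minimal sketch in Python, with the hourly speed samples and the final cumulative-data cells transcribed from Table 1:

```python
# Hourly download-speed samples transcribed from Table 1 (KB, as printed).
torrent_speed = [87, 82, 67, 68, 83, 88, 90, 88, 86, 82, 84, 88, 90, 90, 95]
ftp_speed = [21, 20, 20, 20, 21, 21, 22, 22, 22, 22, 21, 23, 23, 23, 23]

avg_torrent = sum(torrent_speed) / len(torrent_speed)  # ~84.5
avg_ftp = sum(ftp_speed) / len(ftp_speed)              # 21.6
speedup = avg_torrent / avg_ftp                         # ~3.9, i.e. almost 4x

# Cumulative data after 15 hours (Mb, as printed in the last row).
data_ratio = 4164 / 1236                                # ~3.4x more data moved
print(round(speedup, 2), round(data_ratio, 2))
```

The average-speed ratio (about 3.9) is what supports the "almost four times faster" claim; the cumulative-data ratio is slightly lower because FTP's speed samples are steadier.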




EXHIBIT 7
BioTorrents: A File Sharing Service for Scientific Data
Morgan G. I. Langille*, Jonathan A. Eisen
Genome Center, University of California Davis, Davis, California, United States of America
Abstract
The transfer of scientific data has emerged as a significant challenge, as datasets continue to grow in size and demand for
open access sharing increases. Current methods for file transfer do not scale well for large files and can cause long transfer
times. In this study we present BioTorrents, a website that allows open access sharing of scientific data and uses the popular
BitTorrent peer-to-peer file sharing technology. BioTorrents allows files to be transferred rapidly due to the sharing of
bandwidth across multiple institutions and provides more reliable file transfers due to the built-in error checking of the file
sharing technology. BioTorrents contains multiple features, including keyword searching, category browsing, RSS feeds,
torrent comments, and a discussion forum. BioTorrents is available at http://www.biotorrents.net.
Citation: Langille MGI, Eisen JA (2010) BioTorrents: A File Sharing Service for Scientific Data. PLoS ONE 5(4): e10071. doi:10.1371/journal.pone.0010071
Editor: Jason E. Stajich, University of California Riverside, United States of America
Received January 16, 2010; Accepted March 17, 2010; Published April 14, 2010
Copyright: 2010 Langille, Eisen. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits
unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Funding: This research was funded by a grant from the Gordon and Betty Moore Foundation (http://www.moore.org/) #1660. The funders had no role in study
design, data collection and analysis, decision to publish, or preparation of the manuscript.
Competing Interests: The authors have declared that no competing interests exist.
* E-mail: mlangille@ucdavis.edu
Introduction
The amount of data being produced in the sciences continues to
expand at a tremendous rate[1]. In parallel, and also at an
increasing rate, is the demand to make this data openly available
to other researchers, both pre-publication [2] and
post-publication [3]. Considerable effort and attention has been
given to improving the portability of data by developing data
format standards [4], minimal information for experiment
reporting [5-8], data sharing policies [9], and data management
[10-13]. However, the practical aspect of moving data from one
location to another has stayed relatively the same: the use of Hypertext
Transfer Protocol (HTTP) [14] or File Transfer Protocol (FTP)
[15]. These protocols require that a single server be the source of
the data and that all requests for data be handled from that single
location (Fig. 1A). In addition, the server of the data has to have a
large amount of bandwidth to provide adequate download speeds
for all data requests. Unfortunately, as the number of requests for
data increases and the provider's bandwidth becomes saturated,
the access time for each data request can increase rapidly. Even if
bandwidth limits are very high, these file transfer methods
require that the data be centrally stored, making the data
inaccessible if the server malfunctions.
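The saturation effect described above can be made concrete with a toy model; the file size and server uplink below are illustrative numbers invented for the example, not measurements from the paper:

```python
def client_server_time(file_gb, server_gbps, n_clients):
    """With a single server, its uplink is split across all concurrent
    downloads, so per-client transfer time grows linearly with demand."""
    per_client_bw = server_gbps / n_clients  # each client's share of the uplink
    return file_gb / per_client_bw           # seconds to fetch the file

# A 10 GB dataset from a server with a 1 GB/s uplink (illustrative):
assert client_server_time(10, 1.0, 1) == 10.0      # one client: 10 s
assert client_server_time(10, 1.0, 100) == 1000.0  # 100 clients: 100x slower each
```

Peer-to-peer transfer avoids this because, as the paper explains later, each new downloader also contributes upload capacity rather than only consuming the single server's uplink.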
Many different solutions have been proposed to help with many
of the challenges of moving large amounts of data. Bio-Mirror
(http://www.bio-mirror.net/) was started in 1999 and consists of
several servers sharing the same identical datasets in various
countries. Bio-Mirror improves on download speeds, but requires
that the data be replicated across all servers, is restricted to only
very popular genomic datasets, and does not include the fast
growing datasets such as the Sequence Read Archive (SRA)
(http://www.ncbi.nlm.nih.gov/sra). The Tranche Project
(https://trancheproject.org/) is the software behind the Proteome
Commons (https://proteomecommons.org/) proteomics
repository. The focus of the Tranche Project is to provide a secure
repository that can be shared across multiple servers. Considering
that all bandwidth is provided by these dedicated Tranche servers,
considerable administration and funding is necessary in order to
maintain such a service. An alternative to these repository-like
resources is to use a peer-to-peer file transfer protocol. These peer-
to-peer networks allow the sharing of datasets directly with each
other without the need for a central repository to provide the data
hosting or bandwidth for downloading. One of the earliest and
most popular peer-to-peer protocols is Gnutella
(http://rfc-gnutella.sourceforge.net/), which is the protocol behind
many popular file sharing clients such as LimeWire
(http://www.limewire.com/), Shareaza
(http://shareaza.sourceforge.net/), and BearShare
(http://www.bearshare.com/). Unfortunately, this
protocol was centered on sharing individual files and does not
scale well for sharing very large files. In comparison, the BitTorrent
protocol [16] handles large files very well, is actively being
developed, and is a very popular method for data transfer. For
example, BitTorrent can be used to transfer data from the
Amazon Simple Storage Service (S3) (http://aws.amazon.com/s3/),
is used by Twitter (http://twitter.com/) as a method to
distribute files to a large number of servers
(http://github.com/lg/murder), and is used for distributing
numerous types of media.
The BitTorrent protocol works by first splitting the data into small
pieces (usually 512 Kb to 2 Mb in size), allowing the large dataset to
be distributed in pieces and downloaded from various sources
(Fig. 1B). A checksum is created for each file piece to verify the
integrity of the data being received and these are stored within a small
torrent file. The torrent file also contains the address of one or more
trackers. The tracker is responsible for maintaining a list of clients
that are currently sharing the torrent, so that clients can make direct
connections with other clients to obtain the data. A BitTorrent
software client (see Table 1) uses the data in the torrent file to contact
the tracker and allow transferring of the data between computers
containing either full or partial copies of the dataset. Therefore,
bandwidth is shared and distributed among all computers in the
transaction instead of a single source providing all of the required
bandwidth. The sum of available bandwidth grows as the number of
file transfers increases, and thus scales indefinitely. The end result is
faster transfer times, lower bandwidth requirements on any single
source, and decentralization of the data.
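The piece-and-checksum scheme described in this paragraph can be sketched briefly. SHA-1 hashes of fixed-size pieces are what a .torrent file's pieces field holds; the 256 KB piece size and the toy payload below are arbitrary example values, not figures from the paper:

```python
import hashlib

PIECE_SIZE = 256 * 1024  # example piece size; real torrents commonly use 256 Kb-2 Mb

def piece_hashes(data, piece_size=PIECE_SIZE):
    # Split the payload into fixed-size pieces and SHA-1 each one,
    # as done when the torrent file's "pieces" field is built.
    return [hashlib.sha1(data[i:i + piece_size]).digest()
            for i in range(0, len(data), piece_size)]

def verify_piece(piece, expected):
    # A downloading client re-hashes each received piece and compares it
    # against the checksum shipped inside the torrent file.
    return hashlib.sha1(piece).digest() == expected

payload = b"x" * (PIECE_SIZE * 2 + 100)  # toy dataset spanning 3 pieces
hashes = piece_hashes(payload)
assert len(hashes) == 3
assert verify_piece(payload[:PIECE_SIZE], hashes[0])
```

A corrupted piece fails verification and is simply re-requested from another peer, which is the built-in error checking the BioTorrents abstract refers to.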
Torrent files have been hosted on numerous websites and in
theory scientific data can be currently transferred using any one of
these BitTorrent trackers. However, many of these websites
contain materials that violate copyright laws and are prone to
being shut down due to copyright infringement. In addition, the
vast majority of data on these trackers is non-science related and
makes searching or browsing for legitimate scientific data nearly
impossible. Therefore, to improve upon the open sharing of
scientific data we created BioTorrents, a legal BitTorrent tracker
that hosts scientific data and software.
Results
Tracker and Reliability of Service
The most basic requirement of any torrent server software is the
actual tracker that individual torrent clients interact with to
obtain information about where to download pieces of data for a
particular torrent. In order to minimize any possible transfer
disruptions arising from the BioTorrents tracker not being
accessible, a secondary tracker is added automatically to all new
torrents uploaded to BioTorrents. Currently this backup tracker is
set to use the Open BitTorrent Tracker
(http://openbittorrent.com/). Also, many BitTorrent clients
support a distributed hash
table (DHT) for peer discovery, which often allows data transfer to
continue in the absence of a tracker, further enhancing the
reliability over traditional client-server file transfers.
Obtaining Data
In addition to the basic tracker, BioTorrents has several features
supporting the finding, sharing, and commenting of torrents.
Relevant torrents can be found by browsing categories (genomics,
transcriptomics, papers, etc.), license types (Public Domain,
Creative Commons, GNU General Public License, etc.) and by
using the provided text search. Also, torrents are indexed by
Google (http://www.google.com), allowing users searching for
datasets, but unaware of BioTorrents' existence, to be directed to
their availability on BioTorrents. Information about each dataset
on BioTorrents is supplied on a details page giving a description of
the data, number of files, date added, user name of the person who
created the dataset, and various other details including a link to the
actual torrent file. To begin downloading a dataset, the user
downloads and opens the torrent file in the user's previously
installed BitTorrent client software (Table 1). The user can then
control many aspects of their download (stopping, starting,
download limits, etc.) through their client software without any
further need to visit the BioTorrents webpage. The BitTorrent
client will automatically connect with other clients sharing the
same torrent and begin to download pieces in a non-random
order. The integrity of each data piece is verified using the original
file hash provided in the downloaded torrent ensuring that the
completed download is an exact copy. The BitTorrent client
contacts the BioTorrents tracker frequently (approximately every
30 minutes) to obtain the addresses of other clients and also to
report statistics of how much data they have downloaded and
uploaded. These statistics are linked to the user's profile (default is
the guest account), allowing real-time display on BioTorrents of
who is sharing a particular dataset.
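At the protocol level, the periodic tracker contact described above is an HTTP GET whose query string carries the torrent's info-hash and the client's transfer statistics. A minimal sketch of building such a request follows; the tracker URL, peer id, and byte counts are illustrative placeholders, not BioTorrents values:

```python
from urllib.parse import urlencode
import hashlib

# Illustrative values -- a real client derives info_hash from the torrent's
# bencoded "info" dictionary and generates its own unique peer_id.
info_hash = hashlib.sha1(b"example info dict").digest()
params = {
    "info_hash": info_hash,               # 20-byte SHA-1 of the info dict
    "peer_id": b"-EX0001-abcdefghijkl",   # 20-byte client identifier
    "port": 6881,                         # port the client listens on
    "uploaded": 0,                        # bytes uploaded so far (reported stats)
    "downloaded": 0,                      # bytes downloaded so far
    "left": 1048576,                      # bytes still needed
    "event": "started",
}
announce_url = "http://tracker.example.org/announce?" + urlencode(params)
print(announce_url)
```

The tracker's response lists other peers sharing the same torrent, which is how clients find each other for the direct transfers described in the text.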
The choice of BitTorrent client will depend on the operating
system and options that the user requires. For example, some
BitTorrent clients (see Table 1) have a feature called Local Peer
Discovery (LPD), that searches for other computers sharing the
same data on their local area network (LAN), and allows rapid
direct transfer of data over the shared network instead of over the
internet. This situation may arise often in research institutions
where LANs are often quite large and multiple researchers are
working on similar datasets. Another significant feature of the
BitTorrent client, uTorrent, is the addition of a newly designed
transfer protocol called uTP [17], which is able to monitor and adapt
to network congestion by limiting its transfer speeds when other
network traffic is detected. This functionality is important for
system administrators and internet service providers (ISPs) that
may have previously attempted to block or hinder BitTorrent
activity due to its bandwidth-saturating effects.

Table 1. Comparison of several popular BitTorrent software clients and their features.

BitTorrent Client Name | Operating System(1): Win. Mac. Linux | Interface(2): GUI Web CLI | RSS(3) | LPD(4) | DHT(5)
uTorrent      X X X X X X X
Deluge        X X X X X X X
Vuze          X X X X X X X
Transmission  X X X X X X
rTorrent      X X X X
kTorrent      X X X X X X X
(1) Win: Microsoft Windows, Mac: Mac OSX.
(2) GUI: Graphical User Interface, Web: built-in web server interface, CLI: command line interface.
(3) RSS download can be obtained for all clients by using RSSDler (http://code.google.com/p/rssdler/).
(4) LPD: Local Peer Discovery.
(5) DHT: Distributed Hash Table.
doi:10.1371/journal.pone.0010071.t001

Figure 1. Illustration of differences between traditional and peer-to-peer file transfer protocols. A) Traditional file transfer protocols such as HTTP and FTP use a single host for obtaining a dataset (grey filled black box), even though other computers contain the same file or partial copies while downloading (partially filled black box). This can cause transfers to be slow due to bandwidth limitations or if the host fails. B) The peer-to-peer file transfer protocol, BitTorrent, breaks up the dataset into small pieces (shown as pattern blocks within black box), and allows sharing among computers with full copies or partial copies of the dataset. This allows faster transfer times and decentralization of the data.
doi:10.1371/journal.pone.0010071.g001
Sharing Data
Sharing data on BioTorrents is a simple three-step process. First,
the user creates a torrent file on their personal computer using the
same BitTorrent client software that is used for downloading
(Table 1). The only piece of information the user needs to create
the torrent is the BioTorrents tracker announce URL, which is
personalized for each user (see below) and is located on the
BioTorrents upload page. Second, this newly created torrent file is
uploaded on the BioTorrents - Upload page along with a
description, category, and license type for the data. Third, the user
leaves their computer/server on with their BitTorrent client
running so that other users can download the data from them.
It should be noted that only users who have created a free
account with BioTorrents are able to upload new torrents. This is
to limit any possible spamming of the website as well as to provide
accountability for the data being shared. BioTorrents enforces this
and tracks users by giving each user a passkey. This passkey is
automatically embedded within each torrent file that is
downloaded from BioTorrents and is appended to the BioTorrents
tracker's announce URL. Although we hope that most
users create an account on BioTorrents, we still allow anyone to
download torrents without doing so.
An alternative upload method is provided for more advanced
users who have many datasets to share and/or are sharing data
from a remote Linux-based server. This method uses a Perl
(http://www.perl.org) script that takes the dataset to be shared as
input and returns a link to the dataset on BioTorrents along with
the torrent file, thereby allowing torrents to be created for
numerous datasets automatically. This feature would be useful for
institutions or data providers that would like to add a BitTorrent
download option for their datasets.
Considering that many datasets in science are often updated,
BioTorrents allows torrents to be optionally grouped into versions.
This functionality allows improved browsing of BioTorrents by
providing links between torrents. More importantly, this versioning
classification allows users interested in certain software or datasets to
be notified via a Really Simple Syndication (RSS) feed that a new
version is available on BioTorrents. In addition, this RSS feed can be
used to obtain automated updates for datasets that are often
changing, such as genomic and protein databases. For example, a
user could copy the RSS feed for a dataset that is being updated often
on BioTorrents (weekly, monthly, etc.) into their BitTorrent RSS
capable client. When a new version is released on BioTorrents the
BitTorrent client automatically downloads the torrent file, checks to
see what parts of the data have changed, and downloads only pieces
that have been updated.
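The RSS-driven update flow described above amounts to polling a feed and fetching only the items not seen before. A minimal sketch follows; the feed XML, dataset name, and links below are invented for illustration, not real BioTorrents output:

```python
import xml.etree.ElementTree as ET

# Hypothetical feed content; a real client would fetch this over HTTP
# from the dataset's RSS URL on the tracker site.
feed_xml = """<rss version="2.0"><channel>
  <item><title>example-db v2</title><link>http://example.org/t/2.torrent</link></item>
  <item><title>example-db v1</title><link>http://example.org/t/1.torrent</link></item>
</channel></rss>"""

already_fetched = {"example-db v1"}  # versions this client has seen before

root = ET.fromstring(feed_xml)
new_torrents = [item.find("link").text
                for item in root.iter("item")
                if item.find("title").text not in already_fetched]
print(new_torrents)
```

An RSS-capable BitTorrent client does this polling itself; after downloading the new torrent file it re-checks existing data against the new piece hashes and fetches only the pieces that changed, as the paragraph above notes.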
The speed and effectiveness of the BitTorrent protocol depend on
the number of peers; in particular, those peers that have a complete
copy of the file and can act as seeds. Therefore, it is important that
individuals or institutions act as seeds for the system to achieve its
full potential. Currently, all newly added data is automatically
downloaded and shared from the BioTorrents server. This ensures
that each dataset always has at least one server available for
downloading. As the number of datasets and users of BioTorrents
increases, and to improve transfer speeds on a geospatial scale (i.e.
across countries and continents), we would encourage other
institutions to automatically download and share all or some of the
data on BioTorrents.
Discussion Forum, Comments, RSS, and FAQ
Any logged-in BioTorrents user can write comments or
questions about a particular torrent directly on its details page.
This can provide useful feedback both to the creator of the
dataset as well as to other users downloading it. Alternatively,
researchers wanting to discuss more general questions about
BioTorrents, particular datasets, or science, can use the provided
BioTorrents - Forums. Comments and discussion posts can be
read by all visitors, but a free account is necessary to post to
either of these. Users that would like to be updated on newly
uploaded datasets can use the BioTorrents RSS web feed. The
RSS feeds can be configured for certain categories, license types,
users, and search terms, and can also be used with many
BitTorrent clients to automatically download all or some of the
datasets on BioTorrents without human intervention. Finally, the
BioTorrents FAQ (Frequently Asked Questions) page
provides users with information about BitTorrent technology
and general help for using BioTorrents for both downloading and
sharing of data.
Discussion
BitTorrent technology can supplement and extend current
methods for transferring and publishing of scientific data on
various scales. Large institutions and data repositories such as
GenBank [18] could offer their popular or larger datasets via
BioTorrents as an alternative method for download with minimal
effort. The amount of data being transferred by these large
institutions should not be underestimated. For example, in a single
month NCBI users downloaded the 1000 Genomes (8981 GB),
Bacteria Genomes (52 GB), Taxonomy (1 GB), GenBank
(233 GB), and Blast Non-Redundant (NR) (3 GB) datasets
100,000, 30,000, 15,000, 10,000, and 7,000 times, respectively
(personal correspondence). If BitTorrent technology were
implemented for these datasets, the data supplier would benefit
from decreased bandwidth use, while researchers downloading the
data, especially those not on the same continent as the data
supplier, would enjoy faster transfer times.
Small groups or individual researchers can also benefit from
using BioTorrents as their primary method for publishing data.
Although these less popular datasets may not enjoy the same
speed benefits from the BitTorrent protocol, due to the lack
of data exchange among simultaneous downloads, the lower
barrier of entry to providing data compared with running a
personal web server and the ability to operate behind routers
employing network address translation (NAT) still make the use of
BioTorrents for less popular datasets beneficial. In addition,
BioTorrents allows researchers to make their data, software, and
analyses available instantly, without the requirement of an official
submission process or accompanying manuscript. This form of
data publishing allows open and rapid access to information that
can expedite science, especially for time-sensitive events such as
the recent outbreaks of influenza H1N1 [19] or severe acute
respiratory syndrome (SARS) [20]. No matter what the
circumstance, BioTorrents provides a useful resource for advancing
the sharing of open scientific information.
Implementation
The source code for BioTorrents.net was derived from the
TBDev.net (http://tbdev.net) GNU General Public Licensed
(GPL) project. The dynamic web pages are coded in PHP with
some features being implemented with JavaScript. All information,
including information about users, torrents, and discussion forums,
is stored in a MySQL database. The original source code was
altered in various ways to allow easier use of BioTorrents by
scientists, the most significant being that anyone can download
torrents without signing up for an account. In addition, torrents
can be classified by various categories and license types, and
grouped with other alternative versions of torrents.
Availability
The BioTorrents web server along with the source code is
available freely under the GNU General Public License at http://
www.biotorrents.net.
Acknowledgments
The authors would like to thank Elizabeth Wilbanks and Aaron Darling for
reading and editing of the manuscript, and Mike Chelen and reviewer
Andrew Perry for suggesting several improvements for the BioTorrents
website.
Author Contributions
Conceived and designed the experiments: ML JAE. Performed the
experiments: ML. Wrote the paper: ML JAE.
References
1. Lynch C (2008) Big data: How do your data grow? Nature 455: 28-9. Available: http://www.ncbi.nlm.nih.gov/pubmed/18769419.
2. Birney E, Hudson TJ, Green ED, Gunter C, Eddy S, et al. (2009) Prepublication data sharing. Nature 461: 168-70. Available: http://www.ncbi.nlm.nih.gov/pubmed/19741685.
3. Schofield PN, Bubela T, Weaver T, Portilla L, Brown SD, et al. (2009) Post-publication sharing of data and tools. Nature 461: 171-3. Available: http://www.ncbi.nlm.nih.gov/pubmed/19741686.
4. Jones AR, Miller M, Aebersold R, Apweiler R, Ball CA, et al. (2007) The Functional Genomics Experiment model (FuGE): an extensible framework for standards in functional genomics. Nature Biotechnology 25: 1127-33. Available: http://www.ncbi.nlm.nih.gov/pubmed/17921998.
5. Orchard S, Salwinski L, Kerrien S, Montecchi-Palazzi L, Oesterheld M, et al. (2007) The minimum information required for reporting a molecular interaction experiment (MIMIx). Nature Biotechnology 25: 894-8. Available: http://www.ncbi.nlm.nih.gov/pubmed/17687370.
6. Taylor CF, Paton NW, Lilley KS, Binz PA, et al. (2007) The minimum information about a proteomics experiment (MIAPE). Nature Biotechnology 25: 887-93. Available: http://www.ncbi.nlm.nih.gov/pubmed/17687369.
7. Leebens-Mack J, Vision T, Brenner E, Bowers JE, Cannon S, et al. (2006) Taking the first steps towards a standard for reporting on phylogenies: Minimum Information About a Phylogenetic Analysis (MIAPA). OMICS 10: 231-7. Available: http://www.ncbi.nlm.nih.gov/pubmed/16901231.
8. Brazma A, Hingamp P, Quackenbush J, Sherlock G, Spellman P, et al. (2001) Minimum information about a microarray experiment (MIAME)-toward standards for microarray data. Nature Genetics 29: 365-71. Available: http://www.ncbi.nlm.nih.gov/pubmed/11726920.
9. Field D, Sansone S, Collis A, Booth T, Dukes P, et al. (2009) Megascience. Omics data sharing. Science 326: 234-6. Available: http://www.ncbi.nlm.nih.gov/pubmed/19815759.
10. Krestyaninova M, Zarins A, Viksna J, Kurbatova N, Rucevskis P, et al. (2009) A System for Information Management in BioMedical Studies - SIMBioMS. Bioinformatics 25: 2768-9. Available: http://www.ncbi.nlm.nih.gov/pubmed/19633095.
11. Keator DB (2009) Management of information in distributed biomedical collaboratories. Methods in Molecular Biology 569: 1-23. Available: http://www.ncbi.nlm.nih.gov/pubmed/19623483.
12. Frazier Z, McDermott J, Guerquin M, Samudrala R (2009) Computational representation of biological systems. Methods in Molecular Biology 541: 535-49. Available: http://www.ncbi.nlm.nih.gov/pubmed/19381532.
13. Gattiker A, Hermida L, Liechti R, Xenarios I, Collin O, et al. (2009) MIMAS 3.0 is a Multiomics Information Management and Annotation System. BMC Bioinformatics 10: 151. Available: http://www.ncbi.nlm.nih.gov/pubmed/19450266.
14. Fielding R, Gettys J, Mogul J, Frystyk H, Masinter L, et al. (1999) RFC2616, Hypertext Transfer Protocol - HTTP/1.1. Internet Engineering Task Force. Available: http://tools.ietf.org/html/rfc2616.
15. Postel J, Reynolds J (1985) RFC959, File Transfer Protocol (FTP). Internet Engineering Task Force. Available: http://tools.ietf.org/html/rfc959.
16. Cohen B (2003) Incentives build robustness in BitTorrent. In Workshop on Economics of Peer-to-Peer Systems. Berkeley, CA, USA. Available: http://www.bittorrent.org/bittorrentecon.pdf.
17. Shalunov S (2009) Low Extra Delay Background Transport (LEDBAT). Internet Engineering Task Force. Available: http://tools.ietf.org/html/draft-ietf-ledbat-congestion-00.
18. Benson DA, Karsch-Mizrachi I, Lipman DJ, Ostell J, Wheeler DL (2008) GenBank. Nucleic Acids Research 36: D25-30. Available: http://www.ncbi.nlm.nih.gov/pubmed/18073190.
19. Neumann G, Noda T, Kawaoka Y (2009) Emergence and pandemic potential of swine-origin H1N1 influenza virus. Nature 459: 931-9. Available: http://www.ncbi.nlm.nih.gov/pubmed/19525932.
20. Holt RA, Marra MA, Barber SA, Jones SJ, Astell CR, et al. (2003) The Genome sequence of the SARS-associated coronavirus. Science 300: 1399-404. Available: http://www.ncbi.nlm.nih.gov/pubmed/12730501.




EXHIBIT 8
1
/
1
0
/
1
3
B
l
i
z
z
a
r
d

E
n
t
e
r
t
a
i
n
m
e
n
t
:
B
l
i
z
z
a
r
d

F
A
Q
1
/
6
u
s
.
b
l
i
z
z
a
r
d
.
c
o
m
/
e
n
-
u
s
/
c
o
m
p
a
n
y
/
a
b
o
u
t
/
l
e
g
a
l
-
f
a
q
.
h
t
m
l
H
o
m
e
C
o
m
p
a
n
y
A
b
o
u
t

B
l
i
z
z
a
r
d

E
n
t
e
r
t
a
i
n
m
e
n
t
B
l
i
z
z
a
r
d

F
A
Q
W
h
e
r
e

c
a
n


s
e
n
d

s
u
g
g
e
s
t
I
o
n
s

I
o
r

g
a
m
e
s

C
a
n


w
r
I
t
e

n
o
v
e
I
s
,

s
c
r
e
e
n
p
I
a
y
s
,

t
h
e
a
t
r
I
c
a
I

p
r
o
d
u
c
t
I
o
n
s

o
r
o
t
h
e
r

a
d
a
p
t
a
t
I
o
n
s

b
a
s
e
d

o
n

y
o
u
r

g
a
m
e
s

N
o
.

B
l
i
z
z
a
r
d

E
n
t
e
r
t
a
i
n
m
e
n
t


r
e
s
e
r
v
e
s

t
h
e

r
i
g
h
t

t
o

e
x
t
e
n
d

a
n
d

e
x
p
a
n
d

o
u
r
p
r
o
p
e
r
t
i
e
s

t
o

o
t
h
e
r

m
e
d
i
a
.

W
e

w
a
n
t

t
o

p
r
o
v
i
d
e

a

c
o
n
s
i
s
t
e
n
t

s
t
o
r
y

a
n
d

u
n
i
v
e
r
s
e
f
o
r

o
u
r

c
u
s
t
o
m
e
r
s
,

a
n
d

w
a
n
t

t
o

e
n
s
u
r
e

t
h
a
t

o
n
l
y

t
h
e

h
i
g
h
e
s
t

q
u
a
l
i
t
y
,

o
f
f
i
c
i
a
l
l
y
l
i
c
e
n
s
e
d

a
n
d

a
p
p
r
o
v
e
d

m
a
t
e
r
i
a
l

i
s

c
r
e
a
t
e
d

b
a
s
e
d

o
n

o
u
r

c
h
a
r
a
c
t
e
r
s

a
n
d

o
t
h
e
r
c
r
e
a
t
i
v
e

p
r
o
p
e
r
t
i
e
s
.
&
R
P
S
D
Q
\

)
$
4

/
H
J
D
O

)
$
4

:
H
E

)
$
4

&
D
U
H
H
U
V

)
$
4

&
R
S
\
U
L
J
K
W

1
R
W
L
F
H
V
S
e
a
r
c
h

B
l
i
z
z
a
r
d
.
c
o
m
Exhibit 8
Page63
Case 2:11-cv-09437-DSF-JC Document 50-3 Filed 01/25/13 Page 72 of 163 Page ID
#:1000
1
/
1
0
/
1
3
B
l
i
z
z
a
r
d

E
n
t
e
r
t
a
i
n
m
e
n
t
:
B
l
i
z
z
a
r
d

F
A
Q
2
/
6
u
s
.
b
l
i
z
z
a
r
d
.
c
o
m
/
e
n
-
u
s
/
c
o
m
p
a
n
y
/
a
b
o
u
t
/
l
e
g
a
l
-
f
a
q
.
h
t
m
l
W
e

a
p
p
r
e
c
i
a
t
e

t
h
e

t
i
m
e

a
n
d

e
n
e
r
g
y

p
e
o
p
l
e

p
u
t

i
n
t
o

m
a
k
i
n
g

s
u
g
g
e
s
t
i
o
n
s

f
o
r
o
u
r

c
u
r
r
e
n
t

a
n
d

f
u
t
u
r
e

g
a
m
e
s
.

O
u
r

c
o
m
p
a
n
y

p
o
l
i
c
y
,

h
o
w
e
v
e
r
,

p
r
e
v
e
n
t
s

u
s

f
r
o
m
a
c
c
e
p
t
i
n
g

f
o
r

r
e
v
i
e
w

a
n
y

u
n
s
o
l
i
c
i
t
e
d

i
d
e
a
s
.

O
f
t
e
n

i
n

o
u
r

i
n
d
u
s
t
r
y
,

a
n

i
d
e
a

b
e
i
n
g
s
u
b
m
i
t
t
e
d

w
i
l
l

b
e

i
d
e
n
t
i
c
a
l

o
r

s
i
m
i
l
a
r

t
o

o
n
e

a
l
r
e
a
d
y

u
s
e
d

b
y

o
t
h
e
r

c
o
m
p
a
n
i
e
s
,
o
r

a
l
r
e
a
d
y

b
e
i
n
g

i
n
d
e
p
e
n
d
e
n
t
l
y

d
e
v
e
l
o
p
e
d

b
y

o
r

f
o
r

a

c
o
m
p
a
n
y
.

T
h
e
r
e
f
o
r
e
,
B
l
i
z
z
a
r
d

E
n
t
e
r
t
a
i
n
m
e
n
t


h
a
s

a
d
o
p
t
e
d

t
h
e

u
n
a
l
t
e
r
a
b
l
e

p
o
l
i
c
y

o
f

r
e
f
u
s
i
n
g

t
o
a
c
c
e
p
t

o
r

l
o
o
k

a
t

a
n
y

u
n
s
o
l
i
c
i
t
e
d

s
u
b
m
i
s
s
i
o
n
s

o
r

i
d
e
a
s
.

Y
o
u

c
a
n

p
o
s
t

y
o
u
r
s
u
g
g
e
s
t
i
o
n
s

u
n
d
e
r

t
h
e

a
p
p
r
o
p
r
i
a
t
e

c
a
t
e
g
o
r
y

i
n

t
h
e

m
o
s
t

r
e
l
e
v
a
n
t

g
a
m
e

f
o
r
u
m
:
L
e
g
a
c
y
,

S
t
a
r
C
r
a
f
t

,

o
r

W
o
r
l
d

o
f

W
a
r
c
r
a
f
t
.
W
h
a
t

I
s

8
I
I
z
z
a
r
d

E
n
t
e
r
t
a
I
n
m
e
n
t
'
s


c
o
p
y
r
I
g
h
t
]
t
r
a
d
e
m
a
r
k

p
o
I
I
c
y
I
o
r

t
h
e

n
t
e
r
n
e
t
,

s
p
e
c
I
I
I
c
a
I
I
y

I
o
r

I
a
n
s
I
t
e
s


Can I use Blizzard Entertainment's images, text or sound on my web page? Is it OK if I use screenshots I take in-game on my web page?
Can I get authorization to do an expansion pack, novel, screenplay, theatrical production, or other adaptation based on your games?
Unfortunately, Blizzard Entertainment cannot accept unsolicited ideas or submissions. As a result, we will not be able to review or authorize any submissions related to expansions, novels, screenplays, productions or other adaptations.
Can I write music with samples or characters from your game?
We appreciate the creative energies of customers. Use of our music, sound samples, or characters from our game in your original compositions is limited for your own personal use and creative exploration. You may not sell or distribute for commercial purpose any music containing samples of music, sound or characters taken from our game. Any use of these materials would also be subject to the terms of our End User License Agreement included with each product.
Can I do a total conversion of your games?
Yes. We've seen some very polished and fun conversions for our games, and have no problems with total conversions so long as they are for personal use and do not infringe our End User License Agreement included in our games, nor the rights of any other parties including copyrights, trademarks or other rights.
Can I create and/or distribute hacks and cheats for your games?
No. Blizzard Entertainment does not support or condone the use or distribution of cheats and/or hacks for use with Blizzard Entertainment games under any circumstance.
Can I host a Battle.net server?
Yes, within certain limits. We asked our legal department to provide some guidelines for you, and here is what they said: Blizzard Entertainment hereby grants you a personal, non-exclusive, non-transferable and non-assignable license to use and display, for home, noncommercial and personal use only, one copy of any material and/or software that you may download from this site, including, but not limited to, any files, codes, audio or images incorporated in or generated by the software (collectively the "Downloaded Content") provided, however, that you must include or maintain all copyright and other notices contained or associated with such Downloaded Content. You acknowledge and agree that you may not sublicense, assign or otherwise transfer this license or the Downloaded Content and that no title to the Downloaded Content has been or will be transferred to you from Blizzard Entertainment or anyone else. You also agree that you will not alter, disassemble, decompile, reverse engineer or otherwise modify the Downloaded Content. Also, we reserve the right to revoke this limited use license at any time, for any reason, and at the sole discretion of Blizzard Entertainment. You may not use our materials on sites that feature defamatory, pornographic, or inflammatory content, including, but not limited to, hacks and cheats for any of our games or any other content that Blizzard Entertainment finds objectionable or unlawful.
Are there any legal notices and disclaimers that I need to have on my site when talking about your products?
Yes. You must include all copyright, trademark and other notices as appropriate. Appropriate notices can be acquired from here.
Can I register a domain name containing some portion of your product names, such as "star-craft.com" or "war-craft.com"?
No. Blizzard Entertainment is not accepting requests to host Battle.net servers at this time. Should this policy change in the future, we will make the necessary information available on our web site.
Does Blizzard Entertainment allow or support other Battle.net-like or emulation servers? Can I host one of these rogue servers?
No. Except as set forth in the next paragraph, Blizzard Entertainment does not support or condone network play of its games anywhere but Battle.net. Specifically, you may not host or provide matchmaking services for any of our games or emulate or redirect the communication protocols used by Blizzard Entertainment in the network feature of its games, through protocol emulation, tunneling, modifying or adding components to the game(s), use of a utility program or any other techniques now known or hereafter developed, for any purpose including, but not limited to, network play over the Internet, network play utilizing commercial or non-commercial gaming networks or as part of content aggregation networks without the prior written consent of Blizzard Entertainment.
How is it that Blizzard can distribute such large files to the public?
No. We are concerned that such use could cause confusion for our customers who may assume that the domain is associated with Blizzard Entertainment.
Can I put your patches and demos on my web site for download?
Yes. We allow non-commercial mirroring of our patches and demos, so long as you do not alter the patches or demos in any way, and all files included with the original patch or demo are present and intact. Blizzard Entertainment reserves the right to refuse permission to host or distribute our patches and demos to anyone, for any reason, at any time.
Can I put my own "home-made" maps on my web site for download?
Yes, we encourage players to create maps and trade them on the Internet so long as they are not for sale or profit, nor any other commercial purpose as defined solely by Blizzard Entertainment.
Can I put your "Maps of the Week" on my web site for download?
No. These maps are created as a service to our customers, and are available exclusively from classic.battle.net.
Can I translate your site into another language if no such site exists?
No. Aside from the confusion it might cause our customers, we would have no control over the quality, accuracy or content of such translated sites.
Can I make add-ons or expansions for Blizzard Entertainment games? Can I sell them?
To distribute large files, such as cinematic trailers, Blizzard utilizes the Blizzard Downloader, which is a software utility that will make use of the "upload" capability of your computer to distribute the Program to other individuals who may also be downloading files from Blizzard. Note that this utility is only active when you are downloading files, and that only files associated with the file that you are downloading are uploaded. Blizzard will not upload any other files, or obtain any personal information about you as a result of this activity. The Blizzard Downloader is based upon the BitTorrent open source, which is freely distributable pursuant to the MIT License, as follows:
Copyright © 2001-2002 Bram Cohen

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. The Software is provided "AS IS", without warranty of any kind, express or implied, including but not limited to the warranties of merchantability, fitness for a particular purpose and noninfringement. In no event shall the authors or copyright holders be liable for any claim, damages or other liability, whether in an action of contract, tort or otherwise, arising from, out of or in connection with the Software or the use or other dealings in the Software.
Can I make and sell my own products (T-shirts, card games, models/figures, etc.) based on a Blizzard universe?

B
l
i
z
z
a
r
d

E
n
t
e
r
t
a
i
n
m
e
n
t


d
o
e
s

n
o
t

e
n
t
e
r

i
n
t
o

l
i
c
e
n
s
i
n
g

a
g
r
e
e
m
e
n
t
s

w
i
t
h
i
n
d
i
v
i
d
u
a
l
s
.

T
o

e
n
s
u
r
e

t
h
e

q
u
a
l
i
t
y

o
f

a
l
l

B
l
i
z
z
a
r
d

p
r
o
d
u
c
t
s
,

a
l
l

o
f

o
u
r
m
e
r
c
h
a
n
d
i
s
e

i
s

c
r
e
a
t
e
d

u
n
d
e
r

a

l
i
c
e
n
s
i
n
g

a
g
r
e
e
m
e
n
t

a
n
d

a
l
l

p
r
o
s
p
e
c
t
i
v
e
l
i
c
e
n
s
e
e
s

a
r
e

t
h
o
r
o
u
g
h
l
y

r
e
v
i
e
w
e
d

b
y

B
l
i
z
z
a
r
d

E
n
t
e
r
t
a
i
n
m
e
n
t

b
e
f
o
r
e

a

l
i
c
e
n
s
e
i
s

g
r
a
n
t
e
d
.
We do not allow add-ons or expansions for Diablo. You may make and distribute StarCraft and Warcraft maps and campaigns that you have created yourself, so long as it is for personal, non-commercial purposes. Any such maps or campaigns would also be subject to the other terms outlined in our End User License Agreements included with those products.
Can I sell or charge for a CD or other media containing maps, add-ons or campaigns found on the Internet?

No.




EXHIBIT 9
1/10/13    Free online engineering courses prove a big hit    1/2
news.stanford.edu/news/2008/october15/online-101508.html
Stanford Report, October 15, 2008
Free online engineering courses prove a big hit
BY DAN STOBER
In the month since Stanford put 10 of its most popular computer science and electrical engineering courses on the Internet and made them available - for free - to anyone in the world, 200,000 people have visited the site Stanford Engineering Everywhere (http://see.stanford.edu).

There have been some surprises. No one had predicted a flood of visits from Brazil, a turnout that placed the country behind only Canada on the list of hits from foreign countries. (Rounding out the top five were China, Italy and the United Kingdom.)

The Stanford program is significantly different from most online offerings of college courses. It's not just videos of lectures; the website provides downloads of full course materials including syllabi, handouts, homework and exams. Online study sessions through Facebook and other social sites are encouraged. About the only thing not being handed out is college credit.

The courses, from programming to robotics, are available at Stanford Engineering Everywhere, YouTube and iTunes, and through BitTorrent downloads.

The course offerings are clearly popular, but gauging the number of students is difficult. Counting site visits does not give a clear answer, since students do not have to revisit the site for each lecture; the entire learning package, lectures and all, may be downloaded in one swoop.

The School of Engineering is encouraging teachers at other institutions to use the course materials if they wish. "You might have somebody in China translate it. We'd be happy for them to do that," said Andy DiPaolo, the executive director of the Stanford Center for Professional Development.

The students range from professionals brushing up their skills to teenagers in high school. For some, this is an opportunity to "experience what they might otherwise never have a chance to
touch," DiPaolo said.

He enjoys reading the e-mails sent by far-flung students. "Thanks for sharing your great content and knowledge to the world," wrote one. "Hey there! First off I just wanted to say THANK YOU SO MUCH - you just saved me like 20,000 dollars," wrote another.




EXHIBIT 10
1/10/13    TED Blog | New: TEDTalks BitTorrent app    1/3
blog.ted.com/2010/09/27/new-tedtalks-bittorrent-app/
TED Blog
27 September 2010

New: TEDTalks BitTorrent app
Watch Nicholas Christakis' latest TEDTalk on BitTorrent.
Here at TED HQ we're having fun playing with the new BitTorrent app for TEDTalks - part of the new BitTorrent Mainline client (and the current µTorrent beta). It's Windows-only for now.

Using the TEDTalks BitTorrent app, you can browse the TEDTalks library by date and keyword, finding talks that are most emailed, most tweeted, rated funniest or most informative ... and then download them in a snap.

"We're thrilled to collaborate with BitTorrent to bring TEDTalks to millions of new viewers," says June Cohen, Executive Producer of TED Media. "TED's mission is to spread ideas, and BitTorrent's new apps platform will amplify our work really powerfully, reaching a large and engaged audience who may be new to TED."

Check out the new BitTorrent Mainline client >>
Posted by Emily McManus | Permalink | Comment | Trackback




EXHIBIT 11
1/10/13    Nine Inch Nails releases album on BitTorrent - CBC News    1/2
www.cbc.ca/news/story/2008/03/03/tech-nineinchnails.html
Trent Reznor of Nine Inch Nails says he has wanted to release an album over the internet for some time. (Karl Walter/Getty Images)
Nine Inch Nails releases album on BitTorrent
Last Updated: Monday, March 3, 2008 | 1:16 PM ET
CBC News
By Peter Nowak
Industrial rock band Nine Inch Nails has released a 36-track album in a variety of formats on the internet, with a portion available for download for free over file-sharing networks.

The band released the four-part instrumental album "Ghosts I-IV" on Monday on its own website as a full download for $5 US or as a $10 US double-CD, as well as deluxe editions for $75 US and $300 US. The band also decided to make the first volume of nine tracks available for free over the BitTorrent file-sharing protocol.

Trent Reznor, who writes all Nine Inch Nails songs and is a proponent of new technology, said he has wanted to distribute an album for free over the internet for some time, but was not able to because of interference from his record label. Nine Inch Nails split from Interscope in late 2007.

"Now that we're no longer constrained by a record label, we've decided to personally upload Ghosts I, the first of the four volumes, to various torrent sites, because we believe BitTorrent is a revolutionary digital distribution method, and we believe in finding ways to utilize new technologies instead of fighting them," Reznor said in a release on the album's website. "I'm very pleased with the result and the ability to present it directly to you without interference."

Reznor telegraphed the move nearly a year ago, when he told the Herald Sun in Australia of his intentions. "If I could do what I want right now, I would put out my next album, you could download it from my site at as high a bit-rate as you want, pay $4 through PayPal," he said in May 2007.

The release is also the second move by a high-profile act to use the internet as its primary distributor. British rock band Radiohead released In Rainbows on the internet in October 2007 and asked fans to pay whatever they wanted. The band also released the album as a regular CD i
n

D
e
c
e
m
b
e
r
.
N
i
n
e

I
n
c
h

N
a
i
l
s
'

l
a
s
t

r
e
l
e
a
s
e

w
i
t
h

I
n
t
e
r
s
c
o
p
e

w
a
s

Y
e
a
r

Z
e
r
o

R
e
m
i
x
e
d

i
n

N
o
v
e
m
b
e
r
,

w
h
i
c
h

w
a
s

a

r
e
w
o
r
k
e
d

v
e
r
s
i
o
n

o
I

t
h
e

o
r
i
g
i
n
a
l

a
l
b
u
m

r
e
l
e
a
s
e
d

i
n
Exhibit 11 Page74
Case 2:11-cv-09437-DSF-JC Document 50-3 Filed 01/25/13 Page 86 of 163 Page ID
#:1014
April 2007. Year Zero did not sell as well as previous albums, which Reznor said was a result of mismanagement by Interscope. The label priced Year Zero higher than many pop albums because it knew Nine Inch Nails fans would pay extra for it, he said.
"They're thieves," he told the Herald Sun. "I've got a company that's so bureaucratic and clumsy and ignorant and behind the times they don't know what to do, so they rip the people off."
The album was marketed in part with an alternate reality game that revolved around a near-future dystopian United States, where the country had devolved into a Christian fundamentalist theocracy. Clues to the online game were found in the form of clues on Nine Inch Nails T-shirts, the songs on the album, and in USB drives left in bathrooms at concerts.
Reznor described the new album, which was recorded over a 10-week stretch in the fall, as "a soundtrack for daydreams."
"This collection of music is the result of working from a very visual perspective - dressing imagined locations and scenarios with sound and texture," he wrote on the album's website.
Exhibit 11 Page75




EXHIBIT 12
to download the slip, choose your preferred audio format below. MP3 is the best option for most users. you can
download multiple formats if you wish.
due to their large file sizes, we are distributing the FLAC and apple lossless formats via torrents. when you click
the links below for either of those formats, you'll receive a small .torrent file which you must open in a torrent
application in order to download the audio files. if you are not comfortable using torrent files, you should avoid
choosing the FLAC, apple lossless or wave options. visit this site to learn about torrents and how to use them.
high-quality MP3s (87 mb)
will play in any MP3 player. encoded with LAME at V0, fully tagged.
recommended for most users.
the files will arrive as a zip archive. in most cases, double-clicking the zip file will open it. if you need more help
with zip files, go here.
FLAC lossless (259 mb)
CD quality - will not play in itunes or many other popular media players. (more info)
recommended only for advanced users.
this link will download a small .torrent file, which you must open with a torrent application in order to download
the audio files. visit this site for information about using torrents.
FLAC high definition 24/96 (942 mb)
better-than-CD-quality 24bit 96kHz audio - will not play in itunes or many other popular media players. (more info)
recommended only for advanced users.
this link will download a small .torrent file, which you must open with a torrent application in order to download
the audio files. visit this site for information about using torrents.
Exhibit 12
Page76
M4A apple lossless (263 mb)
CD quality - will play in itunes. (more info)
recommended only for advanced users.
this link will download a small .torrent file, which you must open with a torrent application in order to download
the audio files. visit this site for information about using torrents.
high definition WAVE 24/96 (1.5 gb)
better-than-CD-quality 24bit 96kHz audio (more info)
for advanced audiophiles only! although you will be able to play these files with most players that support WAVE
format, you will not get any benefits from the higher resolution audio unless you have extremely high-end audio
equipment. if you're not familiar with 24/96 audio, this download is not recommended.
this link will download a small .torrent file, which you must open with a torrent application in order to download
the audio files. visit this site for information about using torrents.
all files are 100% DRM-free.
Exhibit 12
Page77




EXHIBIT 13
1/10/13  Bands Under The Radar (BUTR)  2011  June  .../2011/06/  (page 1/11)
Home  Podcasts  Albums  Store  Bio  Tumblr  Solavei
Archive for June, 2011
Jun 8  BUTR APP DEBUTS IN BITTORRENT APP STORE: DOWNLOAD FREE MUSIC FROM 13 ARTISTS
Apps, BitTorrent, Free Music
Today I'm excited to announce that BUTR has partnered with BitTorrent to bring you stellar up and coming new music from buzzworthy artists. BitTorrent has launched an app store similar to the iTunes app store but everything is free (and legal)!! So for all you BitTorrent fans out there now you will get to download lots of free music hand-picked by me! Whether you're into funky riffs, stadium anthems or you're on the hunt for the next Radiohead there is something for everyone. Read all about the launch of the BUTR app on BitTorrent featuring the 13 artists listed below.
If you have never downloaded music before via BitTorrent you will need to install the uTorrent client. If you already have the client installed make sure it's updated with the latest version (2.2.1) so you have access to the app store. You can also browse the BitTorrent store online but if you have a PC the store will appear in the left hand column of your uTorrent client as pictured below:
Exhibit 13 Page78
Exhibit 13 Page79
It's important to note that the BitTorrent app store is only available on PCs at this time. It will launch on Macs later this year. If you are on a Mac (or PC) and have the uTorrent client already installed click on each artist name below and the torrent file will automatically appear in your client and start downloading the album. I plan to add more artists to the BUTR app via the BitTorrent store every month!
Approximately 120 million use BitTorrent every month so this is a great way to expose your music to an enormous built-in audience. If you want to submit music for consideration to be featured in the BUTR app email Kami at butrmgmt@gmail.com.
The 13 featured artists in the BUTR app with links to their corresponding torrent:
- Lovedrug - For fans of Modest Mouse, Muse, Smashing Pumpkins, Kings of Leon, Mercury Rev
- Ringside - For fans of U2, Ben Harper, The Doors, Depeche Mode, John Lennon
- The Daylights - For fans of Doves, Muse, The Killers, Coldplay, U2
- Slow Motion Centerfold - For fans of The Strokes, Cage The Elephant, RHCP, Bloc Party
- The Ruse - For fans of U2, Death Cab for Cutie, The Killers, Coldplay, Kings of Leon
- Neulore - For fans of Mumford & Sons, Bon Iver, Fleet Foxes, Coldplay, NEEDTOBREATHE
- Gabrahm Vitek - For fans of Arcade Fire, Spoon, Dave Matthews Band, Janelle Monae
- The Kicks - For Fans Of The Black Crows, Needtobreathe, Gavin Degraw, Jellyfish, Aerosmith
- The Co - For fans of The Fray, OneRepublic, The Script, Augustana, Needtobreathe
- The Last Royals - For fans of Beck, The Killers, Cake, Matt & Kim, Phoenix
- Ren Breton - For fans of Coldplay, Radiohead, Grizzly Bear, Fleet Foxes, Local Natives
- Jen Gloeckner - For fans of Portishead, Bjork, PJ Harvey, Nina Simone, Bat For Lashes
- DEERPEOPLE - For fans of Other Lives, Colourmusic, The Boom Bang, Broncho, Brother Bear
Listen to 1 song from the 13 featured artists below:
Exhibit 13 Page80
Share on Facebook
Lovedrug - She's Disaster (demo)  PLAYLIST
Get BUTR App
Exhibit 13 Page81
Exhibit 13 Page82
Exhibit 13 Page83
Exhibit 13 Page84
Get BUTR Soundtracks Vol. 1-6
Free BUTR App
Subscribe to Podcast
Listen to BUTR Live on 5FM
Case 2:11-cv-09437-DSF-JC Document 50-3 Filed 01/25/13 Page 99 of 163 Page ID
#:1027
N
L
N
M
L
N
P
_
~

=
r

=
q

=
o
~

=
E
_
r
q
o
F
=

=
O
M
N
N
=

=
g

V
L
N
N

L
O
M
N
N
L
M
S
L
Join the BUTR Mailing List
Connect with BUTR
Kami Knake  butr
Join the conversation
butr: I hate that it's not on Spotify I wanna share it!! RT @hypebot: On Searching Spotify For Radiohead's In Rainbows bit.lyf13j+3Yv  11 hours ago  reply  retweet  favorite
butr: RT @SamPalladio: I've traveled 1000s of miles from UK to get back to Nashville today. Great to be back! Watch ep 9 @Nashville_ABC tonight!  12 hours ago  reply  retweet  favorite
butr: Want a blast from the past? One of the funniest movies ever it's so bad it's good..."Showgirls". Laughing so hard I'm crying!!!  + days ago  reply  retweet  favorite
butr: Congrats @fmgnow on the new gig! @JeremyHolley got a good one! RT @hypebot: Wayne Leeloy Exits Topspin For Warner Music bit.lyfW6fAUv  6 days ago  reply  retweet  favorite
butr: We can't buy wine & beer together but we can
Exhibit 13 Page86
Bands Under The Radar (BUTR)
BUTR VOL. 6: POP ROCK (PODCAST #63)
SIX PACK 3 (PODCAST #62)
SIX PACK 2: AMERICANA (PODCAST #61)
MICHAEL JOHNS - LOVE & SEX EP
SIX PACK 1 (PODCAST #60)
Categories: Select Category
Archives: Select Month
Tag Cloud: Bat for Lashes, BigBang, Bob Dylan, Cat Power, Chris Isaak, Coldplay, Cold War Kids, Dave Matthews Band, Dawes, Daylights, Deer Tick, Duffy, Emmylou Harris, Fitz & The Tantrums, Griffin House, Hockey, Katie Herzig, Kings of Leon, Kooks, Laura Marling, Manchester Orchestra, Metric, MGMT, Michael Johns, Mikky Ekko, Mumford & Sons, Muse, MuteMath, Nada Surf, Neko Case, Peter Bjorn and John, Raconteurs, Radiohead, Ray Lamontagne, Razorlight, Rob Dickinson, Scissors for Lefty, Slow Motion Centerfold, Starsailor, SXSW, The Daylights, The Ruse, The Veils, Yeah Yeah Yeahs, Your Vegas
Podcast Disclaimer
Bands Under the Radar on Facebook  Like  3,16+ people like Bands Under the Radar.
Exhibit 13 Page87
Songs featured in BUTR music podcasts are for promotional purposes only. BUTR strongly encourages you to purchase music at your retailer of choice. If you would like to request to have a song removed, please email: butrmgmt@gmail.com
Old BUTR Website
The old BUTR website on Blogger was put to rest July 2009 click here to visit.
Copyright 2004 Bands Under The Radar (BUTR)
Subscribe (RSS)
Exhibit 13 Page88




EXHIBIT 14
1/10/13  Amazon.com: Whodini - Greatest Hits: Whodini: Music  (page 1/10)
www.amazon.com/Whodini-Greatest-Hits/dp/B0000004WE/ref=sr_1_1?s=music&ie=UTF8&qid=1357826936&sr=1-1&keywords=whodini
Share  See larger image  Share your own customer images  Listen to samples
Member: Dorian Berger
Dorian Berger: This item is eligible for Amazon Prime. Click here to turn on 1-Click and make Prime even better for you. (With 1-Click enabled, you can always use the regular shopping cart as well.)
Whodini - Greatest Hits
Whodini | Format: Audio CD  (30 customer reviews) | (242)
Price: $10.91
Special Offers Available: Includes FREE MP3 version of this album. Provided by Amazon Digital Services, Inc. Terms and Conditions. Does not apply to gift orders.
In stock on January 12, 2013. Order it now. Ships from and sold by Amazon.com. Gift-wrap available.
Complete your purchase to save the MP3 version to Cloud Player.
7 new from $7.92  15 used from $7.96  2 collectible from $13.75
Listen to Samples and Buy MP3s: Songs from this album are available to purchase as MP3s. Click on "Buy MP3" or view the MP3 Album. Includes FREE MP3 version of this album. or Sign in to turn on 1-Click ordering.
Sell Us Your Item: For up to a $2.60 Gift Card. Learn more
More Buying Choices: 24 used & new from $7.92. Have one to sell?
Formats | Amazon Price | New from | Used from
MP3 Music, 14 Songs, 1990 | $9.99 | $9.99 | --
Audio CD, 1990 | $10.91 | $7.92 | $7.96
Audio Cassette, 1990 | -- | $19.99 | $15.00
Introducing AutoRip: Amazon is excited to announce AutoRip, where you can get a free MP3 version when you purchase a qualifying CD. Learn more
Music  MP3  Vinyl Records  Today's Deals  New Releases  Best Sellers  Advanced Search  Recommendations  Browse Genres  Amazon Cloud Player  Trade-In
Shop by Department  Search: whodini  Music  Go  Hello, Dorian  Your Account  Your Prime  Cart 0  Wish List
Dorian's Amazon.com  Today's Deals  Gift Cards  Help
Exhibit 14 Page89
Try our music sampler to hear song samples from this album.
Samples | Song Title | Artist | Time | Price
1. Funky Beat | Whodini | 5:06 | $0.99 | Buy MP3
2. One Love | Whodini | 5:31 | $0.99 | Buy MP3
3. Friends | Whodini | 4:41 | $1.29 | Buy MP3
4. Haunted House Of Rock | Whodini | 6:32 | $0.99 | Buy MP3
5. Be Yourself | Whodini Featuring Millie Jackson | 3:23 | $0.99 | Buy MP3
6. Freaks Come Out At Night | Whodini | 4:44 | $1.29 | Buy MP3
7. Five Minutes Of Funk | Whodini | 5:23 | $0.99 | Buy MP3
8. I'm A Ho | Whodini | 4:06 | $0.99 | Buy MP3
9. Tricky Trick | Whodini | 5:05 | $0.99 | Buy MP3
10. Big Mouth | Whodini | 2:50 | $0.99 | Buy MP3
11. Any Way I Gotta Swing It | Whodini | 4:27 | $0.99 | Buy MP3
12. In The Beginning | Whodini | 3:41 | $0.99 | Buy MP3
13. Magic's Wand | Whodini | 5:39 | $0.99 | Buy MP3
14. Escape (I Need A Break) | Whodini | 3:45 | $0.99 | Buy MP3
Visit our audio help page for more information.
Amazon's Whodini Store
Exhibit 14 Page90
Visit Amazon's Whodini Store for all the music, discussions, and more.
Special Offers and Product Promotions
Includes FREE MP3 version of this album. Here's how (restrictions apply)
Frequently Bought Together
Price For All Three: $29.89
Some of these items ship sooner than the others. Show details
Product Details
Audio CD (June 12, 1990)
Number of Discs: 1
Label: Jive
ASIN: B0000004WE
This item: Whodini - Greatest Hits ~ Whodini | Audio CD | $10.91
Run-D.M.C. - Greatest Hits ~ Run D.M.C. | Audio CD | $8.99
All World: Greatest Hits ~ L.L. Cool J | Audio CD | $9.99
Customers Who Bought This Item Also Bought  (Page 1 of 13)
Run-D.M.C. - Greatest Hits | Run D.M.C. (50) | Audio CD | $8.99
Heavy Hits | Heavy D & The Boys (17) | Audio CD | $11.28
The Greatest Hits | Doug E Fresh (4) | Audio CD | $9.98
All World: Greatest Hits | L.L. Cool J (58) | Audio CD | $9.99
Exhibit 14 Page91
(30 customer reviews)  4.7 out of 5 stars: 5 star: 22 | 4 star: 8 | 3 star: 0 | 2 star: 0 | 1 star: 0
See all 30 customer reviews
The Specialist | 4 reviewers made a similar statement
High Desert Woman | 3 reviewers made a similar statement
Kyle R. Lilly | 5 reviewers made a similar statement
Also Available in: Audio CD | Audio Cassette | MP3 Music
Average Customer Review: (30 customer reviews)
Amazon Best Sellers Rank: #42,231 in Music (See Top 100 in Music); #77 in Music > Rap & Hip-Hop > Old School
Did we miss any relevant features for this product? Tell us what we missed. Would you like to update product info, give feedback on images, or tell us about a lower price?
Editorial Reviews
14-track collection from old skool hip hop crew. Features 'I'M A HO' 'Five Minutes of Funk' & 'Escape'
Customer Reviews
Whodini was big in the mid eighties with hits like Five Minutes of Funk, Friends, One Love, Big Mouth and The Freaks Come Out At Night. Good trip down memory lane. Old school Hip Hop forever!
Most Helpful Customer Reviews
14 of 14 people found the following review helpful
Underappreciated | March 3, 2000 | By Mecca Egypt | Format: Audio CD
Whodini has long been deprived of their rightful place in hip-hop's gloried and often fabled past. Many people of today's generation are completely unaware of Whodini's legacy or indifferent all together. This group produced top quality, timeless music that has influenced a multitude of prominent rap & hip-hop superstars. Songs like "Friends," "One Love," "The Freaks Come Out At Night," "Funky Beat," and "Five Minutes Of Funk" show their masterfulness, artistry, incredible lyricism and an ability to make a hit song. Their rhymes weren't mindless,
Exhibit 14 Page92
1
/
1
0
/
1
3
A
m
a
z
o
n
.
c
o
m
:

W
h
o
d
i
n
i

-

G
r
e
a
t
e
s
t

H
i
t
s
:

W
h
o
d
i
n
i
:

M
u
s
i
c
5
/
1
0
w
w
w
.
a
m
a
z
o
n
.
c
o
m
/
W
h
o
d
i
n
i
-
G
r
e
a
t
e
s
t
-
H
i
t
s
/
d
p
/
B
0
0
0
0
0
0
4
W
E
/
r
e
f
=
s
r
_
1
_
1
?
s
=
m
u
s
i
c
&
i
e
=
U
T
F
8
&
q
i
d
=
1
3
5
7
8
2
6
9
3
6
&
s
r
=
1
-
1
&
k
e
y
w
o
r
d
s
=
w
h
o
d
i
n
i
W
a
s

t
h
i
s

r
e
v
i
e
w

h
e
l
p
f
u
l

t
o

y
o
u
?
Y
e
s
Y
e
s
Y
e
s
Y
e
s
N
o
N
o
N
o
N
o
C
o
m
m
e
n
t
|
t
h
e
y

s
p
o
k
e

o
f

u
n
i
v
e
r
s
a
l

t
h
e
m
e
s

a
n
d

t
h
e

p
r
o
b
l
e
m
s

o
f

t
h
e

t
i
m
e
s
.

T
h
e
y

n
e
v
e
r

d
i
d

h
e
s
t
i
t
a
t
e

t
o

m
a
k
e
y
o
u

d
a
n
c
e

e
i
t
h
e
r
.

W
h
o
d
i
n
i

s
h
o
u
l
d

n
o
t

b
e

f
o
r
g
o
t
t
e
n
,

t
h
e
y

a
r
e

o
n
e

o
f

t
h
e

t
r
u
e

p
i
o
n
e
e
r
s

o
f

r
a
p

&
h
i
p
-
h
o
p
.
6 of 6 people found the following review helpful
Long Overlooked, But Crucial To Hip-Hop History!! October 29, 2006
By HE WHO FUNKS BEHIND THE ROWS!!
Format: Audio CD
Whodini was one of the best of the early hip-hop trios and they had quite a following and string of hits as this collection will attest! I remember seeing them back in summer 1984 with the then new Run DMC fresh off the success of "It's Like That", "Hard Times" & "Sucka MC's"!! Whodini had "Haunted House Of Rock", "Friends", "The Freaks Come Out At Night" and the classic "5 Minutes of Funk" out then and they really stopped the show! Later came the other jams like "One Love", "I'm A Ho!!" and others that left their mark with early hip-hoppers. I hope that they will get their just due and be honored on VH-1's Hip-Hop Honors for 2007. I always wanted a leather hat just like the one that guy who was the lead MC (Ecstacy) use to rock!! (LOL!!) They were just as crucial as Kurtis Blow, Sugarhill, Run DMC, LL COOL J and all the others who we think of now as icons of the game!---Much Love & Respect!!
**ADDENDUM!!---(09/06/07) I am so happy to announce that WHODINI will be finally getting their props at this year's Hip-Hop Honors!! Congratulations guys!--You sure deserve it! I'll be watching and reminiscing! (-:
Most Recent Customer Reviews
Great
Got this for my husband and he love it, received in a timely manner, will do business again. Still listening to my husband playing it over and over again.
Published 1 month ago by Brigette Bravo
Whodini
Love this album. Great to have this old school flavor; keeps your head bobbin and your feet tappin and your body movin.
Published 4 months ago by Eric Ervin
Product review
The item came quick and was well packaged. I am pleased to once again hear classic songs from a great hip hop group.
Published 5 months ago by Mr.Ambassador
Exhibit 14 Page93
Case 2:11-cv-09437-DSF-JC Document 50-3 Filed 01/25/13 Page 108 of 163 Page ID
#:1036
5 of 5 people found the following review helpful
This is a record to listen to consistently throughout life! February 15, 2004
By enuffodis
Format: Audio CD
Quite simply, this is one of the best records, hip-hop or otherwise, ever. This album has songs that to this very day will pick me up or inspire me when I'm bummed out.
For example, a few verses from the outstanding track called "Escape." After stressed out verses describing some of the problems we all face in daily life, the song doesn't end like you might expect at first. There comes a powerful reply right away -- which is empowering and simple -- "So you want to quit your job, but you don't have the funds? Take a sick day off, and find a new one,..." and the song continues: "You don't owe no one a thing, but you owe yourself!"
Self-empowering and truthful, WHODINI were honest, introspective. Like on the track "FRIENDS," where the group is striving to define what makes for true friendship -- "...and if you ask me, you know I couldn't be much help, because a friend is somebody you judge for yourself."
But don't get it twisted, because most of these songs are wicked party jams that can still rock the house. These tracks aren't nerdy, boring or overly intellectual in any way! Get this record -- it serves as the perfect 100%-effective antidote to the substandard negativity of today's fantasy-ridden rap!



old skool flavor
i grew when hiphop was born i had this album on vinyl. it's 2009 now so i converted from vinyl to cd's. a must have for your collection.
Published on August 27, 2009 by Rafael
FRIENDS
The best of Houdini. You gotta love it. Friends, Freaks Come Out At Night, Five Minutes Of Funk.
Published on February 6, 2008 by Lester L. Carter

OMG! IM WATCHING THEM RIGHT NOW, AND IF YOU DO NOT HAVE OR OWN WHODINI CD, OR KNOW OF THEM, THEN YOUR WHACK. ten stars across the boardssssssssss!!!!!!
Published on October 13, 2007 by Shakaarii Melendez

Old School Rap At Its Best!
Whodini has to be one of my favorite old school rap groups of all time! They have a distinct style and original background beats that no one else can match.
Published on March 10, 2007 by J. Suh

Classic rap in the mid 1980's............
Whodini was big in the mid eighties with hits like 5 Minutes of Funk, Friends, One Love, Big Mouth and the Freaks Come Out at Night.
Published on January 15, 2007 by The Specialist
Exhibit 14 Page94
Case 2:11-cv-09437-DSF-JC Document 50-3 Filed 01/25/13 Page 109 of 163 Page ID
#:1037
Looking for "whodini" Products? Other customers suggested these items:
Back in Black by Whodini (10): 8 used & new from $49.00. Suggested by 3 customers.
Six by Whodini (4): Buy new: $18.98; 46 used & new from $1.97. Suggested by 3 customers.
Open Sesame by Whodini (2): 16 used & new from $13.93. Suggested by 2 customers.
Tougher Than Leather by Bill Adler (1): 6 used & new from $8.41. Suggested by 1 customer.
Tougher Than Leather: The Rise of Run-DMC by Bill Adler (4): Buy new: $12.95; 20 used & new from $6.99. Suggested by 1 customer.
Explore 16 other items related to "whodini"
Funky Fresh Favorite!
This review only deals with a bunch of the songs on this as I only have 45s of Whodini's and they are all included.
Published on January 23, 2006 by Q. Baseden

Flashback
The reason i got it is cause I was in Oakland at a flea market and found an og poster board of whodini, utfo and the real roxanne performing at the Paramount in Otown in 1982.
Published on September 11, 2005 by Paula G. Hernandez
Exhibit 14 Page95
Case 2:11-cv-09437-DSF-JC Document 50-3 Filed 01/25/13 Page 110 of 163 Page ID
#:1038
What Other Items Do Customers Buy After Viewing This Item?
Run-D.M.C. - Greatest Hits ~ Run D.M.C. Audio CD (50) $8.99
Heavy Hits ~ Heavy D & The Boys Audio CD (17) $11.28
All World: Greatest Hits ~ L.L. Cool J Audio CD (58) $9.99
Exhibit 14 Page96
Case 2:11-cv-09437-DSF-JC Document 50-3 Filed 01/25/13 Page 111 of 163 Page ID
#:1039
Best of ~ The S.O.S. Band Audio CD (51) $5.99


Exhibit 14 Page97
Case 2:11-cv-09437-DSF-JC Document 50-3 Filed 01/25/13 Page 112 of 163 Page ID
#:1040
Exhibit 14 Page98
Case 2:11-cv-09437-DSF-JC Document 50-3 Filed 01/25/13 Page 113 of 163 Page ID
#:1041
1/10/13 Amazon.com: The Greatest Hits: Doug E Fresh: Music 1/9
www.amazon.com/Greatest-Hits-Doug-E-Fresh/dp/B0051IB8AW/ref=sr_1_1?s=music&ie=UTF8&qid=1357826890&sr=1-1&keywords=.
Member: Dorian Berger
This item is eligible for Amazon Prime.
The Greatest Hits
Doug E Fresh (Artist) | Format: Audio CD
(4 customer reviews) | (77)
Price: $9.98 In Stock.
Ships from and sold by Amazon.com. Gift-wrap available.
Want it delivered Friday, January 11? Order it in the next 4 hours and 51 minutes, and choose One-Day Shipping at checkout. Details

CD-R Note: This product is manufactured on demand when ordered from Amazon.com. [Learn more]
Listen to Samples and Buy MP3s
Songs from this album are available to purchase as MP3s. Click on "Buy MP3" or view the MP3 Album. Try our music sampler to hear song samples from this album.
More Buying Choices
Formats | Amazon Price | New from | Used from
MP3 Music, 13 Songs, 2011 | $8.99 | $8.99 | --
Audio CD, 2011 | $9.98 | $9.98 | --
Introducing AutoRip: Amazon is excited to announce AutoRip, where you can get a free MP3 version when you purchase a qualifying CD. Learn more
Exhibit 14 Page99
Case 2:11-cv-09437-DSF-JC Document 50-3 Filed 01/25/13 Page 114 of 163 Page ID
#:1042
Song Title | Artist | Time | Price
1. Keep Risin To The Top | Doug E Fresh | 3:52 | $0.99 Buy MP3
2. Ladi Dadi | Doug E Fresh, Slick Rick & Slick Rick | 4:28 | $0.99 Buy MP3
3. Guess Who | Doug E Fresh | 4:27 | $0.99 Buy MP3
4. Everybody Loves A Star | Doug E Fresh | 3:59 | $0.99 Buy MP3
5. Freaks | Doug E Fresh | 3:08 | $0.99 Buy MP3
6. The Show | Doug E Fresh | 6:36 | $0.99 Buy MP3
7. Cut That Zero | Doug E Fresh | 3:53 | $0.99 Buy MP3
8. All The Way To Heaven | Doug E Fresh | 6:05 | $0.99 Buy MP3
9. Play This Only At Night | Doug E Fresh | 5:50 | $0.99 Buy MP3
10. Lovin Evry Minute Of It | Doug E Fresh | 4:27 | $0.99 Buy MP3
11. Iight | Doug E Fresh | 4:32 | $0.99 Buy MP3
12. Where The Party At | Doug E Fresh | 3:57 | $0.99 Buy MP3
13. Nuthin | Doug E Fresh | 3:05 | $0.99 Buy MP3
Visit our audio help page for more information
Frequently Bought Together
Price For Both: $14.97
Show availability and shipping details
This item: The Greatest Hits ~ Doug E Fresh Audio CD $9.98
Great Adventures of Slick Rick ~ Slick Rick Audio CD $4.99
Exhibit 14 Page100
Case 2:11-cv-09437-DSF-JC Document 50-3 Filed 01/25/13 Page 115 of 163 Page ID
#:1043
Product Details
Audio CD (May 18, 2011)
Label: JTC Atlantic Partners
ASIN: B0051IB8AW
In-Print Editions: MP3 Music
Average Customer Review: (4 customer reviews)
Amazon Best Sellers Rank: #37,934 in Music (See Top 100 in Music)
Customer Reviews
Customers Who Bought This Item Also Bought (Page 1 of 13)
Great Adventures of Slick Rick ~ Slick Rick (83) Audio CD $4.99
Whodini - Greatest Hits ~ Whodini (30) Audio CD $10.91
It Takes 2 ~ Rob Base (16) Audio CD $4.99
Heavy Hits ~ Heavy D & The Boys (17) Audio CD $11.28
(4 customer reviews) 5 star: 1, 4 star: 2, 3 star: 1, 2 star: 0, 1 star: 0
4.0 out of 5 stars
See all 4 customer reviews
Write a customer review
Exhibit 14 Page101
Case 2:11-cv-09437-DSF-JC Document 50-3 Filed 01/25/13 Page 116 of 163 Page ID
#:1044
Most Helpful Customer Reviews
4 of 4 people found the following review helpful
One Upon a Time... June 16, 2011
By PrinceNikodeem
Format: Audio CD | Amazon Verified Purchase
...in 1961 to be precise, a Japanese crooner named Kyu Sakamoto had an international, Japanese-language hit with a song called "Ue o Muite Aruko" ("I Shall Walk Looking Up") or, as the song came to be known in the English speaking world (for some reason), "Sukiyaki" (a kind of beef hot pot dish). As the meaningless alternate title suggests, most Americans couldn't understand a word Kyu was singing and had no idea what the song was about. Nevertheless, thanks to the catchy melody crafted by composer Hachidai Nakamura, the song shot to # 1 on the Billboard Hot 100 Charts and stayed there for quite some time. Two decades later, a forgettable disco act on their way out set some run-of-the-mill, kind of corny English language lyrics to Nakamura's infectious melody and, thanks to the appeal of the composition he had crafted, had a hit with their own version of "Sukiyaki". Not long after that, hip hop pioneers Slick Rick and Doug E. Fresh quoted a verse from the disco group's English version in a humourous, parodic context as a small (but hysterical) part of their iconic collaboration "La-Di-Da-Di". In the early days, when "La-Di-Da-Di" was released on vinyl records and old fashioned cassettes, fans of the instant classic could hear the song in all of its unmutilated glory. BUT when it came time to convert this gem to digital form, the disco has-been who penned the quoted verse, long forgotten by most of the world, apparently made a fuss, and so from that time to this, "La-Di-Da-Di" has NEVER been released on cd or as a download full and intact. The "Sukiyaki" reference is crudely hacked out, and as such, there is always an awkward skip somewhere in the middle of the song, where the otherwise banal lyrics were hilariously quoted by Slick Rick, faux-singing them as an infatuated woman. And that, kiddies, is why I have to give this compilation four stars instead of five. But don't despair! Other than the pointless debasement of "La-Di-Da-Di", this compilation is packed from start to finish with classic jams by Doug E. Fresh & the Get Fresh Crew.
Most Recent Customer Reviews
Greatest hits? really?
Although this has most of his hits, where in the hell is "summertime", one of if not one his biggest joints from back in the day?
Published 12 months ago by Steve Silk
Exhibit 14 Page102
Case 2:11-cv-09437-DSF-JC Document 50-3 Filed 01/25/13 Page 117 of 163 Page ID
#:1045

It's an undeniable fact that Dougie has left an indelible brand on the face of hip hop culture. Think about it... You know that "Heeeeeeeeeeey yo! I-ight!" chant that everyone does in the club to this day? Those of us old enough to remember know that that all started with Doug E. Fresh, and that archetypal club banger, "I-ight" is here on this disc. Remember when the raucous "Freaks" catapulted Jamaican boy-wonder Lil' Vicious to overnight fame back in the 90s? Of course, Doug E. was behind that one too, beatboxing his lungs out. How about the rockin' "Play This Only at Night", the grimy "Nuthin", or the exuberant, hysterical "All the Way to Heaven" and "Lovin' Every Minute of It"? These songs demonstrate that Doug E. Fresh, the original and undisputably greatest human beatbox, is also one of the few MCs out there that can do a menacing, gritty "gangsta" song one minute, and a light, upbeat "story" song the next. Not only that, he's just as convincing, believable, and sincere in both genres... every bit as comfortable with the likes of Kool G. Rap as he is with Will Smiths of the world. And then there's the quintessential hip hop jam, "The Show". Like "La-Di-Da-Di", the legacy and importance of this song can't be overstated. It's been sampled, quoted, and referenced on record after record after record. If you're not familiar with it, you're not a hip hop fan. You can get familiar with it here, on this disc. Which brings us back to the shameful, money-grubbing story of "La-Di-Da-Di" and the reason I can't give this release the perfect rating it deserves. Without Slick Rick's comedic recitation, the "Sukiyaki" lyrics he quoted are truly mundane... dull as plain rice noodles. And it's likely that the lame English language cover of Kyu Sakamoto's hit which spawned them would have been forgotten aeons ago. But with Rick tickling the ribs of the beautiful b-girl Common tells us hip hop is until she squeals them out in rapturous, giggling delight, they become something inspired. Who were such artists as Snoop Dogg, Bone Thugs-N-Harmony, Will Smith, Salt-N-Pepa, Raphael Saadiq, Mary J. Blige, and countless others referencing when they sampled or quoted those bars? The disco act? I don't think so. Slick Rick and Doug E. Fresh made those otherwise forgettable words IMMORTAL. Maybe some day, whoever needs to will get off the dime so that "La-Di-Da-Di" can be enjoyed by current generations (who, let's face it, know only digital media) in its unadulterated form. Until then, any such compilations featuring the mutilated version will sadly have to sacrifice a star.
Exhibit 14 Page103
Case 2:11-cv-09437-DSF-JC Document 50-3 Filed 01/25/13 Page 118 of 163 Page ID
#:1046
1/10/13 Amazon.com: The Greatest Hits: Doug E Fresh: Music 6/9
www.amazon.com/Greatest-Hits-Doug-E-Fresh/dp/B0051B8AW/ref=sr_1_1?s=music&ie=UTF8&qid=1357826890&sr=1-1&keywords=
Because of that, I'm feelin' sad and blue.

1 of 1 people found the following review helpful
What more can I say..... August 11, 2011
By L. Vaughan
Format: Audio CD | Amazon Verified Purchase
The Best of Doug E Fresh gets no better. Great selection of his hits, and every "Old School Rap" aficionado MUST include some of Doug E. Fresh in their collection!

Back in da day January 18, 2012
By Old school junkie
Format: Audio CD | Amazon Verified Purchase


I was really pleased to see that any time I need that old skool music, Amazon fits the bill. If you know the artist or song, they have the music.

Exhibit 14 Page104
Case 2:11-cv-09437-DSF-JC Document 50-3 Filed 01/25/13 Page 119 of 163 Page ID
#:1047
Oh My God by Doug E Fresh (3)
Suggested by 4 customers
The World's Greatest Entertainer by Doug E. Fresh & The Get Fresh Crew (13)
7 used & new from $30.00
Suggested by 3 customers
Ladidadi by Doug E Fresh (3)
Suggested by 3 customers
Self Destruction (4 Versions) by Stop The Violence movement
3 used & new from $26.11
Suggested by 2 customers
Best of Word Up [VHS] VHS ~ Various
7 used & new from $9.95
Suggested by 1 customer
Explore 16 other items related to "doug e fresh"
What Other Items Do Customers Buy After Viewing This Item?
Great Adventures of Slick Rick ~ Slick Rick, Audio CD (83) $4.99
Heavy Hits ~ Heavy D & The Boys, Audio CD (17) $11.28
Exhibit 14 Page105
Case 2:11-cv-09437-DSF-JC Document 50-3 Filed 01/25/13 Page 120 of 163 Page ID
#:1048
All World: Greatest Hits ~ L.L. Cool J, Audio CD (58) $9.99
It Takes 2 ~ Rob Base, Audio CD (16) $4.99


Exhibit 14 Page106
Case 2:11-cv-09437-DSF-JC Document 50-3 Filed 01/25/13 Page 121 of 163 Page ID
#:1049
Exhibit 14 Page107
Case 2:11-cv-09437-DSF-JC Document 50-3 Filed 01/25/13 Page 122 of 163 Page ID
#:1050




EXHIBIT 15
Case 2:11-cv-09437-DSF-JC Document 50-3 Filed 01/25/13 Page 123 of 163 Page ID
#:1051
Case 2:11-cv-09437-DSF-JC Document 42-2 Filed 11/09/12 Page 1 of 11 Page ID #:588
Exhibit 15 Page108
Case 2:11-cv-09437-DSF-JC Document 50-3 Filed 01/25/13 Page 124 of 163 Page ID
#:1052
Case 2:11-cv-09437-DSF-JC Document 42-2 Filed 11/09/12 Page 2 of 11 Page ID #:589
Exhibit 15 Page109
Case 2:11-cv-09437-DSF-JC Document 50-3 Filed 01/25/13 Page 125 of 163 Page ID
#:1053
Case 2:11-cv-09437-DSF-JC Document 42-2 Filed 11/09/12 Page 3 of 11 Page ID #:590
Exhibit 15 Page110
Case 2:11-cv-09437-DSF-JC Document 50-3 Filed 01/25/13 Page 126 of 163 Page ID
#:1054
Case 2:11-cv-09437-DSF-JC Document 42-2 Filed 11/09/12 Page 4 of 11 Page ID #:591
Exhibit 15 Page111
Case 2:11-cv-09437-DSF-JC Document 50-3 Filed 01/25/13 Page 127 of 163 Page ID
#:1055
Case 2:11-cv-09437-DSF-JC Document 42-2 Filed 11/09/12 Page 5 of 11 Page ID #:592
Exhibit 15 Page112
Case 2:11-cv-09437-DSF-JC Document 50-3 Filed 01/25/13 Page 128 of 163 Page ID
#:1056
Case 2:11-cv-09437-DSF-JC Document 42-2 Filed 11/09/12 Page 6 of 11 Page ID #:593
Exhibit 15 Page113
Case 2:11-cv-09437-DSF-JC Document 50-3 Filed 01/25/13 Page 129 of 163 Page ID
#:1057
Case 2:11-cv-09437-DSF-JC Document 42-2 Filed 11/09/12 Page 7 of 11 Page ID #:594
Exhibit 15 Page114
Case 2:11-cv-09437-DSF-JC Document 50-3 Filed 01/25/13 Page 130 of 163 Page ID
#:1058
Case 2:11-cv-09437-DSF-JC Document 42-2 Filed 11/09/12 Page 8 of 11 Page ID #:595
Exhibit 15 Page115
Case 2:11-cv-09437-DSF-JC Document 50-3 Filed 01/25/13 Page 131 of 163 Page ID
#:1059
Case 2:11-cv-09437-DSF-JC Document 42-2 Filed 11/09/12 Page 9 of 11 Page ID #:596
Exhibit 15 Page116
Case 2:11-cv-09437-DSF-JC Document 50-3 Filed 01/25/13 Page 132 of 163 Page ID
#:1060
Case 2:11-cv-09437-DSF-JC Document 42-2 Filed 11/09/12 Page 10 of 11 Page ID
#:597
Exhibit 15 Page117
Case 2:11-cv-09437-DSF-JC Document 50-3 Filed 01/25/13 Page 133 of 163 Page ID
#:1061
Case 2:11-cv-09437-DSF-JC Document 42-2 Filed 11/09/12 Page 11 of 11 Page ID
#:598
Exhibit 15 Page118
Case 2:11-cv-09437-DSF-JC Document 50-3 Filed 01/25/13 Page 134 of 163 Page ID
#:1062
Case 2:11-cv-09437-DSF-JC Document 42-3 Filed 11/09/12 Page 41 of 119 Page ID
#:639
Exhibit 15 Page119
Case 2:11-cv-09437-DSF-JC Document 50-3 Filed 01/25/13 Page 135 of 163 Page ID
#:1063
Case 2:11-cv-09437-DSF-JC Document 42-3 Filed 11/09/12 Page 42 of 119 Page ID #:640
Exhibit 15 Page120
Case 2:11-cv-09437-DSF-JC Document 50-3 Filed 01/25/13 Page 136 of 163 Page ID
#:1064
Case 2:11-cv-09437-DSF-JC Document 42-3 Filed 11/09/12 Page 43 of 119 Page ID #:641
Exhibit 15 Page121
Case 2:11-cv-09437-DSF-JC Document 50-3 Filed 01/25/13 Page 137 of 163 Page ID
#:1065
Case 2:11-cv-09437-DSF-JC Document 42-3 Filed 11/09/12 Page 44 of 119 Page ID #:642
Exhibit 15 Page122
Case 2:11-cv-09437-DSF-JC Document 50-3 Filed 01/25/13 Page 138 of 163 Page ID
#:1066




EXHIBIT 16
Case 2:11-cv-09437-DSF-JC Document 50-3 Filed 01/25/13 Page 139 of 163 Page ID
#:1067
Dictionary of Business
Taylor & Francis (1998)
Exhibit 16 Page123
Case 2:11-cv-09437-DSF-JC Document 50-3 Filed 01/25/13 Page 140 of 163 Page ID
#:1068
Exhibit 16 Page124
Case 2:11-cv-09437-DSF-JC Document 50-3 Filed 01/25/13 Page 141 of 163 Page ID
#:1069




EXHIBIT 17
Case 2:11-cv-09437-DSF-JC Document 50-3 Filed 01/25/13 Page 142 of 163 Page ID
#:1070
Browse the Dictionary
distribution fee noun
distribution rights noun
distribution warehouse noun
distributive adjective
distributive trades noun
distributor noun
distributorship noun
district noun
district attorney noun
district auditor noun
district court noun
Our dictionaries
British English
American English
Business English
Learner's Dictionary
Essential British English
Essential American English
English-Spanish
Español-inglés
English-Turkish
Browse the Thesaurus
Business
Clothes
Education
Finance
Light and colour
Personal care
distributor noun [C]
Definition
COMMERCE, TRANSPORT a person or company that buys products from a manufacturer and sells them for a profit to other businesses, stores, or customers, often by transporting the goods to different places:
distributor of sth For the past twenty years they have been the country's leading distributor of household appliances.
local/international/national distributor
exclusive/sole distributor We have appointed the company as sole distributor of our goods in Japan.
(Definition of distributor noun from the Cambridge Business English Dictionary © Cambridge University Press)
All
Word of the Day
eclipse
when the sun disappears from view, either completely or partly, while the moon is moving between it...
Blog
Read our blog about how the English language behaves.
New Words
Find words and meanings that have just started to be used in English, and let us know what you think of them.
NEW
distributor in other dictionaries
in Spanish in Turkish
British English American English
Learner's
More Results for distributor
independent distributor noun
See all results
Exhibit 17 Page125
Case 2:11-cv-09437-DSF-JC Document 50-3 Filed 01/25/13 Page 143 of 163 Page ID
#:1071




EXHIBIT 18
Case 2:11-cv-09437-DSF-JC Document 50-3 Filed 01/25/13 Page 144 of 163 Page ID
#:1072
BitTorrent
From Wikipedia, the free encyclopedia
BitTorrent is a protocol that underpins the practice of peer-to-peer file sharing and is used for distributing large
amounts of data over the Internet. BitTorrent is one of the most common protocols for transferring large files and it
has been estimated that, collectively, peer-to-peer networks have accounted for approximately 43% to 70% of all
Internet traffic (depending on geographical location) as of February 2009.[1] Most of this peer-to-peer traffic is
likely from BitTorrent, after the demise of LimeWire.
Programmer Bram Cohen designed the protocol in April 2001 and released the first available version on July 2,
2001.[2] Currently, numerous BitTorrent clients are available for a variety of computing platforms[citation needed],
including an official one released by BitTorrent, Inc.
As of January 2012, BitTorrent is utilized by 150 million active users (according to BitTorrent, Inc.). Based on this
figure, the total number of monthly BitTorrent users can be estimated at more than a quarter of a billion.[3] At any
given instant, BitTorrent has, on average, more active users than YouTube and Facebook combined (this refers to
the number of active users at any instant and not to the total number of unique users).[4][5] Since 2010, more than
200,000 users of the protocol have been sued for copyright infringement.[6]
Contents
1 Description
2 Operation
2.1 Creating and publishing torrents
2.2 Downloading torrents and sharing files
3 Adoption
3.1 Film, video, and music
3.2 Broadcasters
3.3 Personal material
3.4 Software
3.5 Government
3.6 Education
3.7 Others
4 Indexing
5 Technologies built on BitTorrent
5.1 Distributed trackers
5.2 Web seeding
5.3 RSS feeds
5.4 Throttling and encryption
5.5 Multitracker
5.6 Decentralized keyword search
6 Implementations
7 Development
8 Legal issues
Exhibit 18 Page126
Case 2:11-cv-09437-DSF-JC Document 50-3 Filed 01/25/13 Page 145 of 163 Page ID
#:1073
9 BitTorrent and malware
10 See also
11 References
12 Further reading
13 External links
Description
The BitTorrent protocol can be used to reduce the server and network impact of distributing large files. Rather than
downloading a file from a single source server, the BitTorrent protocol allows users to join a "swarm" of hosts to
download and upload from each other simultaneously. The protocol is an alternative to the older single source,
multiple mirror sources technique for distributing data, and can work over networks with lower bandwidth so many
small computers, like mobile phones, are able to efficiently distribute files to many recipients.
A user who wants to upload a file first creates a small torrent descriptor file that they distribute by conventional
means (web, email, etc.). They then make the file itself available through a BitTorrent node acting as a seed. Those
with the torrent descriptor file can give it to their own BitTorrent nodes which, acting as peers or leechers,
download it by connecting to the seed and/or other peers.
The file being distributed is divided into segments called pieces. As each peer receives a new piece of the file it
becomes a source (of that piece) for other peers, relieving the original seed from having to send that piece to every
computer or user wishing a copy. With BitTorrent, the task of distributing the file is shared by those who want it; it
is entirely possible for the seed to send only a single copy of the file itself and eventually distribute to an unlimited
number of peers.
Each piece is protected by a cryptographic hash contained in the torrent descriptor.[7] This ensures that any
modification of the piece can be reliably detected, and thus prevents both accidental and malicious modifications of
any of the pieces received at other nodes. If a node starts with an authentic copy of the torrent descriptor, it can
verify the authenticity of the entire file it receives.
Pieces are typically downloaded non-sequentially and are rearranged into the correct order by the BitTorrent
client, which monitors which pieces it needs, and which pieces it has and can upload to other peers. Pieces are of
the same size throughout a single download (for example a 10 MB file may be transmitted as ten 1 MB pieces or as
forty 256 KB pieces). Due to the nature of this approach, the download of any file can be halted at any time and be
resumed at a later date, without the loss of previously downloaded information, which in turn makes BitTorrent
particularly useful in the transfer of larger files. This also enables the client to seek out readily available pieces and
download them immediately, rather than halting the download and waiting for the next (and possibly unavailable)
piece in line, which typically reduces the overall length of the download.
When a peer completely downloads a file, it becomes an additional seed. This eventual shift from peers to seeders
determines the overall "health" of the file (as determined by the number of times a file is available in its complete
form).
The distributed nature of BitTorrent can lead to a flood-like spreading of a file throughout many peer computer
nodes. As more peers join the swarm, the likelihood of a complete successful download by any particular node
increases. Relative to traditional Internet distribution schemes, this permits a significant reduction in the original
distributor's hardware and bandwidth resource costs.
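The piece-and-hash scheme described above can be sketched in a few lines of Python. This is only an illustration of the idea, not the actual BitTorrent wire format; the piece size and the pretend payload below are arbitrary choices for the example:

```python
import hashlib

PIECE_SIZE = 256 * 1024  # illustrative piece size; real torrents pick a power of 2

def split_into_pieces(data: bytes, piece_size: int = PIECE_SIZE) -> list[bytes]:
    # Divide the payload into fixed-size pieces (the last piece may be shorter).
    return [data[i:i + piece_size] for i in range(0, len(data), piece_size)]

def piece_hashes(pieces: list[bytes]) -> list[bytes]:
    # One SHA-1 digest per piece, as recorded in the torrent descriptor.
    return [hashlib.sha1(p).digest() for p in pieces]

def verify_piece(piece: bytes, expected: bytes) -> bool:
    # A peer re-hashes each received piece and compares it against the
    # descriptor, so accidental or malicious modification is detected.
    return hashlib.sha1(piece).digest() == expected

data = b"x" * 600_000            # a pretend 600 kB file
pieces = split_into_pieces(data)
hashes = piece_hashes(pieces)
print(len(pieces))                            # 3 (256 kB + 256 kB + remainder)
print(verify_piece(pieces[0], hashes[0]))     # True
print(verify_piece(b"tampered", hashes[0]))   # False
```

Because each piece is verified independently, a client can accept good pieces from any peer and discard a corrupted one without restarting the whole download, which is what makes the non-sequential, resumable transfer described above safe.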
Exhibit 18 Page127
Case 2:11-cv-09437-DSF-JC Document 50-3 Filed 01/25/13 Page 146 of 163 Page ID
#:1074
In this animation, the colored bars beneath all of the 7 clients represent the file, with each color representing an
individual piece of the file. After the initial pieces transfer from the seed (large system at the bottom), the pieces are
individually transferred from client to client. The original seeder only needs to send out one copy of the file for all
the clients to receive a copy.
Distributed downloading protocols in general provide redundancy against system problems, reduce dependence
on the original distributor[8] and provide sources for the file which are generally transient and therefore harder to
trace by those who would block distribution compared to the situation provided by limiting availability of the file to
a fixed host machine (or even several).
One such example of BitTorrent being used to reduce the distribution cost of file transmission is in the BOINC
Client-Server system. If a BOINC distributed computing application needs to be updated (or merely sent to a user)
it can be done so with little impact on the BOINC server.[citation needed]
Operation
A BitTorrent client is any program that implements the BitTorrent protocol. Each client is capable of preparing,
requesting, and transmitting any type of computer file over a network, using the protocol.
A peer is any computer running an instance of a client.
To share a file or group of files, a peer first creates a small file called a "torrent" (e.g. MyFile.torrent). This file
contains metadata about the files to be shared and about the tracker, the computer that coordinates the file
distribution. Peers that want to download the file must first obtain a torrent file for it and connect to the specified
tracker, which tells them from which other peers to download the pieces of the file.
Though both ultimately transfer files over a network, a BitTorrent download differs from a classic download (as is
typical with an HTTP or FTP request, for example) in several fundamental ways:
BitTorrent makes many small data requests over different TCP connections to different machines, while classic
downloading is typically made via a single TCP connection to a single machine.
BitTorrent downloads in a random or in a "rarest-first"[9] approach that ensures high availability, while classic
downloads are sequential.
Taken together, these differences allow BitTorrent to achieve much lower cost to the content provider, much higher
redundancy, and much greater resistance to abuse or to "flash crowds" than regular server software. However, this
protection, theoretically, comes at a cost: downloads can take time to rise to full speed because it may take time for
enough peer connections to be established, and it may take time for a node to receive sufficient data to become an
effective uploader. This contrasts with regular downloads (such as from an HTTP server, for example) that, while
Exhibit 18 Page128
Case 2:11-cv-09437-DSF-JC Document 50-3 Filed 01/25/13 Page 147 of 163 Page ID
#:1075
more vulnerable to overload and abuse, rise to full speed very quickly and maintain this speed throughout.
In general, BitTorrent's non-contiguous download methods have prevented it from supporting progressive
download or "streaming playback". However, comments made by Bram Cohen in January 2007[10] suggest that
streaming torrent downloads will soon be commonplace and ad supported streaming[11] appears to be the result of
those comments. In January 2011 Cohen demonstrated an early version of BitTorrent streaming, saying the feature
was projected to be available by summer 2011.[9]
Creating and publishing torrents
The peer distributing a data file treats the file as a number of identically sized pieces, usually with byte sizes of a
power of 2, and typically between 32 kB and 16 MB each. The peer creates a hash for each piece, using the
SHA-1 hash function, and records it in the torrent file. Pieces with sizes greater than 512 kB will reduce the size of
a torrent file for a very large payload, but is claimed to reduce the efficiency of the protocol.[12] When another peer
later receives a particular piece, the hash of the piece is compared to the recorded hash to test that the piece is
error-free.[13] Peers that provide a complete file are called seeders, and the peer providing the initial copy is called
the initial seeder.
The exact information contained in the torrent file depends on the version of the BitTorrent protocol. By convention,
the name of a torrent file has the suffix .torrent. Torrent files have an "announce" section, which specifies the
URL of the tracker, and an "info" section, containing (suggested) names for the files, their lengths, the piece length
used, and a SHA-1 hash code for each piece, all of which are used by clients to verify the integrity of the data they
receive.
Torrent files are typically published on websites or elsewhere, and registered with at least one tracker. The tracker
maintains lists of the clients currently participating in the torrent.[13] Alternatively, in a trackerless system
(decentralized tracking) every peer acts as a tracker. Azureus was the first[citation needed] BitTorrent client to
implement such a system through the distributed hash table (DHT) method. An alternative and incompatible DHT
system, known as Mainline DHT, was later developed and adopted by the BitTorrent (Mainline), µTorrent,
Transmission, rTorrent, KTorrent, BitComet, and Deluge clients.
After the DHT was adopted, a "private" flag analogous to the broadcast flag was unofficially introduced,
telling clients to restrict the use of decentralized tracking regardless of the user's desires.[14] The flag is intentionally
placed in the info section of the torrent so that it cannot be disabled or removed without changing the identity of the
torrent. The purpose of the flag is to prevent torrents from being shared with clients that do not have access to the
tracker. The flag was requested for inclusion in the official specification in August, 2008, but has not been accepted
yet.[15] Clients that have ignored the private flag were banned by many trackers, discouraging the practice.[16]
Downloading torrents and sharing files
Users find a torrent of interest, by browsing the web or by other means, download it, and open it with a BitTorrent
client. The client connects to the tracker(s) specified in the torrent file, from which it receives a list of peers currently
transferring pieces of the file(s) specified in the torrent. The client connects to those peers to obtain the various
pieces. If the swarm contains only the initial seeder, the client connects directly to it and begins to request pieces.
Clients incorporate mechanisms to optimize their download and upload rates; for example they download pieces in
a random order to increase the opportunity to exchange data, which is only possible if two peers have different
Exhibit 18 Page129
Case 2:11-cv-09437-DSF-JC Document 50-3 Filed 01/25/13 Page 148 of 163 Page ID
#:1076
pieces of the file.
The effectiveness of this data exchange depends largely on the policies that clients use to determine to whom to
send data. Clients may prefer to send data to peers that send data back to them (a tit for tat scheme), which
encourages fair trading. But strict policies often result in suboptimal situations, such as when newly joined peers are
unable to receive any data because they don't have any pieces yet to trade themselves or when two peers with a
good connection between them do not exchange data simply because neither of them takes the initiative. To counter
these effects, the official BitTorrent client program uses a mechanism called "optimistic unchoking", whereby the
client reserves a portion of its available bandwidth for sending pieces to random peers (not necessarily known good
partners, so called preferred peers) in hopes of discovering even better partners and to ensure that newcomers get
a chance to join the swarm.[17]
Although swarming scales well to tolerate flash crowds for popular content, it is less useful for unpopular content.
Peers arriving after the initial rush might find the content unavailable and need to wait for the arrival of a seed in
order to complete their downloads. The seed arrival, in turn, may take long to happen (this is termed the seeder
promotion problem). Since maintaining seeds for unpopular content entails high bandwidth and administrative costs,
this runs counter to the goals of publishers that value BitTorrent as a cheap alternative to a client-server approach.
This occurs on a huge scale; measurements have shown that 38% of all new torrents become unavailable within the
first month.[18] A strategy adopted by many publishers which significantly increases availability of unpopular content
consists of bundling multiple files in a single swarm.[19] More sophisticated solutions have also been proposed;
generally, these use cross-torrent mechanisms through which multiple torrents can cooperate to better share
content.[20]
BitTorrent does not offer its users anonymity. It is possible to obtain the IP addresses of all current and possibly
previous participants in a swarm from the tracker. This may expose users with insecure systems to attacks.[17] It
may also expose users to the risk of being sued, if they are distributing files without permission from the copyright
holder(s). However, there are ways to promote anonymity; for example, the OneSwarm project layers privacy-
preserving sharing mechanisms on top of the original BitTorrent protocol.
Adoption
A growing number of individuals and organizations are using BitTorrent to distribute their own or licensed material.
Independent adopters report that without using BitTorrent technology and its dramatically reduced demands on
their private networking hardware and bandwidth, they could not afford to distribute their files.[21]
Film, video, and music
BitTorrent Inc. has obtained a number of licenses from Hollywood studios for distributing popular content
from their websites.
Sub Pop Records releases tracks and videos via BitTorrent Inc.[22] to distribute its 1000 albums.
Babyshambles and The Libertines (both bands associated with Pete Doherty) have extensively used torrents
to distribute hundreds of demos and live videos. US industrial rock band Nine Inch Nails frequently
distributes albums via BitTorrent.
Podcasting software is starting to integrate BitTorrent to help podcasters deal with the download demands of
their MP3 "radio" programs. Specifically, Juice and Miro (formerly known as Democracy Player) support
automatic processing of .torrent files from RSS feeds. Similarly, some BitTorrent clients, such as µTorrent,
Exhibit 18 Page130
Case 2:11-cv-09437-DSF-JC Document 50-3 Filed 01/25/13 Page 149 of 163 Page ID
#:1077
are able to process web feeds and automatically download content found within them.

DGM Live purchases are provided via BitTorrent.[23]

Vodo is a service which distributes "free-to-share" movies and TV shows via BitTorrent.[24][25][26]
Broadcasters
In 2008, the CBC became the first public broadcaster in North America to make a full show (Canada's Next Great Prime Minister) available for download using BitTorrent.[27]

The Norwegian Broadcasting Corporation (NRK) has since March 2008 experimented with BitTorrent distribution, available online.[28] Only selected material in which NRK owns all royalties is published. Responses have been very positive, and NRK is planning to offer more content.

The Dutch VPRO broadcasting organization released four documentaries under a Creative Commons license using the content distribution feature of the Mininova tracker.[29]
Personal material
The Amazon S3 "Simple Storage Service" is a scalable Internet-based storage service with a simple web service interface, equipped with built-in BitTorrent support.
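In S3's case, that support was exposed through the REST API: appending the `?torrent` sub-resource to a publicly readable object's GET URL returned a .torrent file for that object, with Amazon operating the tracker and seeding the data. A minimal sketch under that assumption (the bucket and key names below are placeholders):

```python
def s3_torrent_url(bucket: str, key: str) -> str:
    """Build the S3 '?torrent' sub-resource URL for a publicly readable
    object. Fetching this URL returned a .torrent describing the object."""
    return f"https://{bucket}.s3.amazonaws.com/{key}?torrent"

print(s3_torrent_url("example-bucket", "dataset/big-file.tar"))
```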
Blog Torrent offers a simplified BitTorrent tracker to enable bloggers and non-technical users to host a tracker on their site. Blog Torrent also allows visitors to download a "stub" loader, which acts as a BitTorrent client to download the desired file, allowing users without BitTorrent software to use the protocol.[30] This is similar to the concept of a self-extracting archive.
Software
Blizzard Entertainment uses BitTorrent (via a proprietary client called the "Blizzard Downloader") to distribute content and patches for Diablo III, StarCraft II and World of Warcraft, including the games themselves.[31]

Many software games, especially those whose large size makes them difficult to host due to bandwidth limits, extremely frequent downloads, and unpredictable changes in network traffic, instead distribute a specialized, stripped-down BitTorrent client with enough functionality to download the game from the other running clients and the primary server (which is maintained in case not enough peers are available).

Many major open source and free software projects encourage BitTorrent as well as conventional downloads of their products (via HTTP, FTP etc.) to increase availability and to reduce load on their own servers, especially when dealing with larger files.[32]
Government
The UK government used BitTorrent to distribute details about how the tax money of UK citizens was spent.[33][34]
Education
Florida State University uses BitTorrent to distribute large scientific data sets to its researchers.[35]
Many universities that have BOINC distributed computing projects have used the BitTorrent functionality of the client-server system to reduce the bandwidth costs of distributing the client-side applications used to process the scientific data.
Others
Facebook uses BitTorrent to distribute updates to Facebook servers.[36] Twitter uses BitTorrent to distribute updates to Twitter servers.[37][38]

The Internet Archive added BitTorrent to its file download options for over 1.3 million existing files, and all newly uploaded files, in August 2012.[39][40] This method is the fastest means of downloading media from the Archive.[39][41]
As of 2011 BitTorrent has 100 million users and a greater share of network bandwidth than Netflix and Hulu combined.[4][42] At any given instant of time BitTorrent has, on average, more active users than YouTube and Facebook combined. (This refers to the number of active users at any instant and not to the total number of registered users.)[4][5]

CableLabs, the research organization of the North American cable industry, estimates that BitTorrent represents 18% of all broadband traffic.[43] In 2004, CacheLogic put that number at roughly 35% of all traffic on the Internet.[44] The discrepancies in these numbers are caused by differences in the method used to measure P2P traffic on the Internet.[45]
Routers that use network address translation (NAT) must maintain tables of source and destination IP addresses and ports. Typical home routers are limited to about 2,000 table entries,[citation needed] while some more expensive routers have larger table capacities. BitTorrent frequently contacts 20-30 servers per second, rapidly filling the NAT tables. This is a common cause of home routers locking up.[46]
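A back-of-the-envelope sketch of why this happens (the 2,000-entry table comes from the text above; the contact rate and the router's entry timeout are assumptions for illustration):

```python
def seconds_until_table_full(table_size: int, new_conns_per_sec: float,
                             entry_timeout_sec: float):
    """Rough NAT-table model: each new peer contact adds one table entry
    that the router keeps for entry_timeout_sec. If steady-state occupancy
    (rate x timeout) exceeds the table size, the table fills after roughly
    table_size / rate seconds; otherwise entries expire fast enough."""
    if new_conns_per_sec * entry_timeout_sec < table_size:
        return None  # the table never fills
    return table_size / new_conns_per_sec

# 25 contacts/sec (middle of the 20-30 range), 5-minute entry timeout:
print(seconds_until_table_full(2000, 25, 300))  # 80.0 seconds
```

Under these assumptions a 2,000-entry table is exhausted in under a minute and a half, which matches the "locking up" behaviour described.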
Indexing
The BitTorrent protocol provides no way to index torrent files. As a result, a comparatively small number of websites have hosted a large majority of torrents, many linking to copyrighted material without the authorization of copyright holders, rendering those sites especially vulnerable to lawsuits.[47] Several types of websites support the discovery and distribution of data on the BitTorrent network.

Public torrent-hosting sites such as The Pirate Bay allow users to search and download from their collection of torrent files. Users can typically also upload torrent files for content they wish to distribute. Often, these sites also run BitTorrent trackers for their hosted torrent files, but these two functions are not mutually dependent: a torrent file could be hosted on one site and tracked by another, unrelated site.

Private host/tracker sites operate like public ones except that they may restrict access to registered users and may also keep track of the amount of data each user uploads and downloads, in an attempt to reduce leeching.

Search engines allow the discovery of torrent files that are hosted and tracked on other sites; examples include Mininova, BTDigg, BTJunkie, Torrentz, The Pirate Bay, Eztorrent, and isoHunt. These sites allow the user to ask for content meeting specific criteria (such as containing a given word or phrase) and retrieve a list of links to torrent
files matching those criteria. This list can often be sorted with respect to several criteria, relevance (seeder-to-leecher ratio) being one of the most popular and useful (due to the way the protocol behaves, the download bandwidth achievable is very sensitive to this value). Bram Cohen launched a BitTorrent search engine at http://www.bittorrent.com/search that co-mingles licensed content with search results.[48] Metasearch engines allow one to search several BitTorrent indices and search engines at once. DHT search engines monitor the DHT network and index torrents via metadata exchange from peers.

Recently, however, some P2P, decentralized alternatives to torrent search engines have emerged; see the section on decentralized keyword search further down the page.
Technologies built on BitTorrent
The BitTorrent protocol is still under development and may therefore still acquire new features and other enhancements such as improved efficiency.
Distributed trackers
On May 2, 2005, Azureus 2.3.0.0 (now known as Vuze) was released,[49] introducing support for "trackerless" torrents through a system called the "distributed database." This system is a DHT implementation which allows the client to use torrents that do not have a working BitTorrent tracker. The following month, BitTorrent, Inc. released version 4.2.0 of the Mainline BitTorrent client, which supported an alternative DHT implementation (popularly known as "Mainline DHT", outlined in a draft (http://bittorrent.org/beps/bep0005.html) on their website) that is incompatible with that of Azureus.

Current versions of the official BitTorrent client, µTorrent, BitComet, Transmission and BitSpirit all share compatibility with Mainline DHT. Both DHT implementations are based on Kademlia.[50] As of version 3.0.5.0, Azureus also supports Mainline DHT in addition to its own distributed database through use of an optional application plugin.[51] This potentially allows the Azureus client to reach a bigger swarm.
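Both implementations inherit Kademlia's central idea: node IDs and torrent info-hashes share one 160-bit space, and "closeness" is the XOR of two IDs compared as an unsigned integer. A lookup repeatedly asks the currently closest known nodes for even closer ones until it reaches the nodes responsible for a torrent's peer list. A minimal sketch of the metric (the seed values below are invented for illustration):

```python
import hashlib

def node_id(seed: bytes) -> int:
    """Derive a 160-bit ID, the same width as a BitTorrent info-hash."""
    return int.from_bytes(hashlib.sha1(seed).digest(), "big")

def xor_distance(a: int, b: int) -> int:
    """Kademlia's distance metric: bitwise XOR, compared as an integer."""
    return a ^ b

def closest_nodes(target: int, nodes: list, k: int = 8) -> list:
    """Pick the k known nodes 'closest' to a target info-hash; a DHT lookup
    iteratively queries these to locate peers for the torrent."""
    return sorted(nodes, key=lambda n: xor_distance(n, target))[:k]
```

Note that the metric is symmetric and gives distance zero only from a node to itself, which is what lets every node agree on who is "responsible" for an ID without any coordination.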
Another idea that has surfaced in Vuze is that of virtual torrents. This idea is based on the distributed tracker approach and is used to describe some web resource. Currently, it is used for instant messaging. It is implemented using a special messaging protocol and requires an appropriate plugin. Anatomic P2P is another approach, which uses a decentralized network of nodes that route traffic to dynamic trackers.

Most BitTorrent clients also use peer exchange (PEX) to gather peers in addition to trackers and DHT. Peer exchange checks with known peers to see if they know of any other peers. With the 3.0.5.0 release of Vuze, all major BitTorrent clients now have compatible peer exchange.
Web seeding
Web seeding was implemented in 2006 as the ability of BitTorrent clients to download torrent pieces from an HTTP source in addition to the swarm. The advantage of this feature is that a website may distribute a torrent for a particular file or batch of files and make those files available for download from that same web server; this can simplify long-term seeding and load balancing through the use of existing, cheap web hosting setups. In theory, this would make using BitTorrent almost as easy for a web publisher as creating a direct HTTP download. In addition, it would allow the "web seed" to be disabled if the swarm becomes too popular while still allowing the file to be readily available.
This feature has two distinct and incompatible specifications.

The first was created by John "TheSHAD0W" Hoffman, who created BitTornado.[52][53] From version 5.0 onward, the Mainline BitTorrent client also supports web seeds, and the BitTorrent web site had[54] a simple publishing tool that creates web-seeded torrents.[55] µTorrent added support for web seeds in version 1.7. BitComet added support for web seeds in version 1.14. This first specification requires running a web service that serves content by info-hash and piece number, rather than filename.
The other specification was created by the authors of GetRight and can rely on a basic HTTP download space (using byte serving).[56][57]
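"Byte serving" here is the ordinary HTTP Range mechanism: the client maps a piece index to a byte span within the file and asks the web server for only that slice, so any server that honors Range headers can act as a seed. A sketch of the mapping for a single-file torrent (the URL is hypothetical, and real clients must also handle multi-file layouts):

```python
import urllib.request

def piece_byte_range(piece_index: int, piece_length: int, total_length: int):
    """Map a piece index to the inclusive byte range used in an HTTP
    Range header; the last piece may be shorter than piece_length."""
    start = piece_index * piece_length
    end = min(start + piece_length, total_length) - 1
    return start, end

def fetch_piece(url: str, piece_index: int, piece_length: int,
                total_length: int) -> bytes:
    """Download one piece from a plain HTTP mirror; the result can then be
    verified against the piece's SHA-1 hash from the .torrent as usual."""
    start, end = piece_byte_range(piece_index, piece_length, total_length)
    req = urllib.request.Request(url, headers={"Range": f"bytes={start}-{end}"})
    with urllib.request.urlopen(req) as resp:
        return resp.read()
```

Since downloaded pieces are hash-checked exactly like swarm pieces, a misbehaving or stale HTTP mirror is detected and ignored rather than corrupting the download.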
In September 2010, a new service named Burnbit was launched which generates a torrent from any URL using web seeding.[58]
There exist server-side solutions that provide initial seeding of the file from the web server via the standard BitTorrent protocol; when the number of external seeders reaches a limit, they stop serving the file from the original source.[59]
RSS feeds
Main article: Broadcatching
A technique called broadcatching combines RSS with the BitTorrent protocol to create a content delivery system, further simplifying and automating content distribution. Steve Gillmor explained the concept in a column for Ziff-Davis in December 2003.[60] The discussion spread quickly among bloggers (Ernest Miller,[61] Chris Pirillo, etc.).
In an article entitled Broadcatching with BitTorrent, Scott Raymond explained:

    I want RSS feeds of BitTorrent files. A script would periodically check the feed for new items, and use them to start the download. Then, I could find a trusted publisher of an Alias RSS feed, and "subscribe" to all new episodes of the show, which would then start downloading automatically like the "season pass" feature of the TiVo.

    Scott Raymond, scottraymond.net[62]
The RSS feed will track the content, while BitTorrent ensures content integrity with cryptographic hashing of all data, so feed subscribers will receive uncorrupted content.
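Raymond's "script would periodically check the feed" amounts to a few lines with a feed parser. A hedged sketch (the feed URL would be supplied by the publisher, and the hand-off to an actual BitTorrent client is left as a stub):

```python
import urllib.request
import xml.etree.ElementTree as ET

def torrent_enclosures(rss_xml: str):
    """Yield the URLs of .torrent enclosures in an RSS 2.0 feed."""
    for item in ET.fromstring(rss_xml).iter("item"):
        enc = item.find("enclosure")
        if enc is not None and enc.get("url", "").endswith(".torrent"):
            yield enc.get("url")

def start_download(torrent_url: str):
    """Stub: hand the .torrent off to a BitTorrent client here."""
    print("would download", torrent_url)

def poll(feed_url: str, seen: set):
    """One polling pass: start downloads for enclosures not seen before."""
    with urllib.request.urlopen(feed_url) as resp:
        xml_text = resp.read().decode()
    for url in torrent_enclosures(xml_text):
        if url not in seen:
            seen.add(url)
            start_download(url)
```

Run `poll` from a scheduler (cron, a timer loop) and the "season pass" behaviour falls out: new episodes appear in the feed, and the client starts fetching them unattended.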
One of the first popular free and open source software clients for broadcatching is Miro. Other free software clients such as PenguinTV and KatchTV also now support broadcatching.

The BitTorrent web-service MoveDigital had the ability to make torrents available to any web application capable of parsing XML through its standard REST-based interface,[63] although this has since been discontinued. Additionally, Torrenthut is developing a similar torrent API that will provide the same features, as well as further intuition to help bring the torrent community to Web 2.0 standards. Alongside this release is a first PHP application built using the API called PEP, which will parse any Really Simple Syndication (RSS 2.0) feed and automatically create and seed a torrent for each enclosure found in that feed.[64]
Throttling and encryption
Main article: BitTorrent protocol encryption
Since BitTorrent makes up a large proportion of total traffic, some ISPs have chosen to throttle (slow down) BitTorrent transfers to ensure network capacity remains available for other uses. For this reason, methods have been developed to disguise BitTorrent traffic in an attempt to thwart these efforts.[65]

Protocol header encryption (PHE) and message stream encryption/protocol encryption (MSE/PE) are features of some BitTorrent clients that attempt to make BitTorrent hard to detect and throttle. At the moment Vuze, BitComet, KTorrent, Transmission, Deluge, µTorrent, MooPolice, Halite, rTorrent and the latest official BitTorrent client (v6) support MSE/PE encryption.

In September 2006 it was reported that some software could detect and throttle BitTorrent traffic masquerading as HTTP traffic.[66]

Reports in August 2007 indicated that Comcast was preventing BitTorrent seeding by monitoring and interfering with the communication between peers. Protection against these efforts is provided by proxying the client-tracker traffic via an encrypted tunnel to a point outside of the Comcast network.[67] Comcast has more recently called a "truce" with BitTorrent, Inc. with the intention of shaping traffic in a protocol-agnostic manner.[68] Questions about the ethics and legality of Comcast's behavior have led to renewed debate about net neutrality in the United States.[69]
In general, although encryption can make it difficult to determine what is being shared, BitTorrent is vulnerable to traffic analysis. Thus, even with MSE/PE, it may be possible for an ISP to recognize BitTorrent and also to determine that a system is no longer downloading but only uploading data, and terminate its connection by injecting TCP RST (reset flag) packets.
Multitracker
Another unofficial feature is an extension to the BitTorrent metadata format proposed by John Hoffman[70] and implemented by several indexing websites. It allows the use of multiple trackers per file, so if one tracker fails, others can continue to support file transfer. It is implemented in several clients, such as BitComet, BitTornado, BitTorrent, KTorrent, Transmission, Deluge, µTorrent, rTorrent, Vuze, and FrostWire. Trackers are placed in groups, or tiers, with a tracker randomly chosen from the top tier and tried, moving to the next tier if all the trackers in the top tier fail.
Torrents with multiple trackers[71] can decrease the time it takes to download a file, but this also has a few consequences:

Poorly implemented[72] clients may contact multiple trackers, leading to more overhead traffic.

Torrents from closed trackers suddenly become downloadable by non-members, as they can connect to a seed via an open tracker.
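The tier behaviour described above can be sketched as follows (the tracker URLs are invented; real clients typically also move a tracker that responds to the front of its tier for subsequent announces):

```python
import random

def announce(tiers, try_tracker):
    """Walk tracker tiers as the multitracker extension describes: shuffle
    within a tier, try each tracker, and only fall through to the next
    tier when every tracker in the current tier has failed.
    try_tracker(url) returns a peer list or raises OSError on failure."""
    for tier in tiers:
        candidates = tier[:]
        random.shuffle(candidates)
        for url in candidates:
            try:
                return try_tracker(url)
            except OSError:
                continue  # tracker down; try the next one in this tier
    raise ConnectionError("all trackers in all tiers failed")

tiers = [["udp://a.example", "udp://b.example"],  # top tier
         ["udp://backup.example"]]                # fallback tier
```

With this structure, the backup tracker only ever sees traffic when the whole top tier is unreachable, which is exactly the open-tracker leakage noted above for torrents from closed trackers.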
Decentralized keyword search
Even with distributed trackers, a third party is still required to find a specific torrent. This is usually done in the form of a hyperlink from the website of the content owner or through indexing websites like isoHunt, Torrentz, BTDigg or The Pirate Bay.
The Tribler BitTorrent client is the first to incorporate decentralized search capabilities. With Tribler, users can find .torrent files that are hosted among other peers, instead of on centralized index sites. It adds such an ability to the BitTorrent protocol using a gossip protocol, somewhat similar to the eXeem network which was shut down in 2005. The software also includes the ability to recommend content: after a dozen downloads, the Tribler software can roughly estimate the download taste of the user and recommend additional content.[73]
In May 2007 Cornell University published a paper proposing a new approach to searching a peer-to-peer network for inexact strings,[74] which could replace the functionality of a central indexing site. A year later, the same team implemented the system as a plugin for Vuze called Cubit[75] and published a follow-up paper reporting its success.[76]
A somewhat similar facility, but with a slightly different approach, is provided by the BitComet client through its "Torrent Exchange"[77] feature. Whenever two peers using BitComet (with Torrent Exchange enabled) connect to each other, they exchange lists of all the torrents (name and info-hash) they have in the Torrent Share storage (torrent files which were previously downloaded and for which the user chose to enable sharing by Torrent Exchange).
Thus each client builds up a list of all the torrents shared by the peers it connected to in the current session (or it can even maintain the list between sessions if instructed). At any time the user can search that Torrent Collection list for a certain torrent and sort the list by categories. When the user chooses to download a torrent from that list, the .torrent file is automatically searched for (by info-hash value) in the DHT network, and when found it is downloaded by the querying client, which can then create and initiate a downloading task.
Implementations
Main article: Comparison of BitTorrent clients
The BitTorrent specification is free to use and many clients are open source, so BitTorrent clients have been created for all common operating systems using a variety of programming languages. The official BitTorrent client, µTorrent, Xunlei, Vuze and BitComet are some of the most popular clients.[78]
Some BitTorrent implementations such as MLDonkey and Torrentflux are designed to run as servers. For example, this can be used to centralize file sharing on a single dedicated server to which users on the network share access.[79] Server-oriented BitTorrent implementations can also be hosted by hosting providers at co-located facilities with high-bandwidth Internet connectivity (e.g., a datacenter), which can provide dramatic speed benefits over using BitTorrent from a regular home broadband connection.

Services such as ImageShack can download files on BitTorrent for the user, allowing them to download the entire file by HTTP once it is finished.
The Opera web browser supports BitTorrent,[80] as does Wyzo. BitLet allows users to download torrents directly from their browser using a Java applet. An increasing number of hardware devices are being made to support BitTorrent. These include routers and NAS devices containing BitTorrent-capable firmware like OpenWrt. Proprietary versions of the protocol which implement DRM, encryption, and authentication are found within managed clients such as Pando.
Development
An unimplemented (as of February 2008) unofficial feature is Similarity Enhanced Transfer (SET), a technique for improving the speed at which peer-to-peer file sharing and content distribution systems can share data. SET, proposed by researchers Pucha, Andersen, and Kaminsky, works by spotting chunks of identical data in files that are an exact or near match to the one needed and transferring these data to the client if the "exact" data are not present. Their experiments suggested that SET will help greatly with less popular files, but not as much for popular data, where many peers are already downloading it.[81] Andersen believes that this technique could be immediately used by developers with the BitTorrent file sharing system.[82]
As of December 2008, BitTorrent, Inc. is working with Oversi on new Policy Discover Protocols that query the ISP for capabilities and network architecture information. Oversi's ISP-hosted NetEnhancer box is designed to "improve peer selection" by helping peers find local nodes, improving download speeds while reducing the loads into and out of the ISP's network.[83]
Legal issues
Main article: Legal issues with BitTorrent
There has been much controversy over the use of BitTorrent trackers. BitTorrent metafiles themselves do not store file contents. Whether the publishers of BitTorrent metafiles violate copyrights by linking to copyrighted material without the authorization of copyright holders is controversial.

Various jurisdictions have pursued legal action against websites that host BitTorrent trackers. High-profile examples include the closing of Suprnova.org, TorrentSpy, LokiTorrent, BTJunkie, Mininova, Demonoid and Oink's Pink Palace. The Pirate Bay torrent website, formed by a Swedish group, is noted for the "legal" section of its website in which letters and replies on the subject of alleged copyright infringements are publicly displayed. On 31 May 2006, The Pirate Bay's servers in Sweden were raided by Swedish police on allegations by the MPAA of copyright infringement;[84] however, the tracker was up and running again three days later.

In the study used to value NBC Universal in its merger with Comcast, Envisional found that all of the top 10,000 torrents on the BitTorrent network violated copyright.[85]

Between 2010 and 2012, 200,000 people were sued by copyright trolls for uploading and downloading copyrighted content through BitTorrent.[6]

In 2011, 18.8% of North American internet traffic was used by peer-to-peer networks, which equates to 132 billion music file transfers and 11 billion movie file transfers on the BitTorrent network.[86]

On April 30, 2012 the UK High Court ordered five ISPs to block the BitTorrent search engine The Pirate Bay.[87]
BitTorrent and malware
Several studies of BitTorrent have indicated that a large portion of files available for download via BitTorrent contain malware. In particular, one small sample[88] indicated that 18% of all executable programs available for download contained malware. Another study[89] claims that as much as 14.5% of BitTorrent downloads contain
zero-day malware, and that BitTorrent was used as the distribution mechanism for 47% of all zero-day malware they found.
See also
Bencode
Glossary of BitTorrent terms
Torrent file
Super-seeding
Torrent poisoning
µTP (Micro Transport Protocol)
Cache Discovery Protocol
Comparison of BitTorrent clients
Comparison of BitTorrent tracker software
Comparison of BitTorrent sites
FastTrack
Magnet URI scheme
Segmented downloading
Similarity Enhanced Transfer
Simple file verification
Anti-Counterfeiting Trade Agreement
References
1. ^ Schulze, Hendrik; Klaus Mochalski (2009). "Internet Study 2008/2009" (http://www.ipoque.com/sites/default/files/mediafiles/documents/internet-study-2008-2009.pdf). Leipzig, Germany: ipoque. Retrieved 3 Oct 2011. "Peer-to-peer file sharing (P2P) still generates by far the most traffic in all monitored regions, ranging from 43 percent in Northern Africa to 70 percent in Eastern Europe."
2. ^ Cohen, Bram (2001-07-02). "BitTorrent - a new P2P app" (http://finance.groups.yahoo.com/group/decentralization/message/3160). Yahoo! eGroups. Retrieved 2007-04-15.
3. ^ "BitTorrent and µTorrent Software Surpass 150 Million User Milestone" (http://www.bittorrent.com/intl/es/company/about/ces2012150musers). Bittorrent.com. 2012-01-09. Retrieved 2012-07-09.
4. ^ a b c "fastcompany.com" (http://www.fastcompany.com/1714001/bittorrent-swells-to-100-million-users). fastcompany.com. Retrieved 2012-07-09.
5. ^ a b "comscore.com" (http://www.comscore.com/PressEvents/PressReleases/2010/9/comScoreReleasesAugust2010U.S.OnlineVideoRankings). comscore.com. 2010-09-30. Retrieved 2012-07-09.
6. ^ a b Jacobsson Purewal, Sarah (2011-08-09). "Copyright Trolls: 200,000 BitTorrent Users Sued Since 2010" (http://www.pcworld.com/article/237593/copyrighttrolls200000bittorrentuserssuedsince2010.html). PC World. Retrieved 2012-05-06.
7. ^ Bram Cohen (10-Jan-2008). "The BitTorrent Protocol Specification" (http://www.bittorrent.org/beps/bep0003.html). BitTorrent.org. Retrieved 20 November 2010.
8. ^ "Estimating Self-Sustainability in Peer-to-Peer Swarming Systems" (http://arxiv4.library.cornell.edu/abs/1004.0395v2) by D. Menasche, A. Rocha, E. de Souza e Silva, R. M. Leao, D. Towsley, A. Venkataramani.
9. ^ a b Urvoy-Keller (December 2006). "Rarest First and Choke Algorithms Are Enough" (http://conferences.sigcomm.org/imc/2006/papers/p20-legout.pdf) (PDF). SIGCOMM. Retrieved 2012-03-09.
10. ^ [1] (http://torrentfreak.com/interview-with-bram-cohen-the-inventor-of-bittorrent)
11. ^ [2] (http://torrentfreak.com/bittorrent-launches-ad-supported-streaming-071218)
12. ^ "Theory.org" (http://wiki.theory.org/index.php/BitTorrentSpecification). Wiki.theory.org. Retrieved 2012-07-09.
13. ^ a b Cohen, Bram (October 2002). "BitTorrent Protocol 1.0" (http://www.bittorrent.org/beps/bep0003.html). BitTorrent.org. Retrieved 2008-10-27.
14. ^ "Unofficial BitTorrent Protocol Specification v1.0" (http://wiki.theory.org/BitTorrentSpecification#InfoDictionary). Retrieved 2009-10-04.
15. ^ "Private Torrents" (http://bittorrent.org/beps/bep0027.html). Bittorrent.org. Retrieved 2009-10-04.
16. ^ "BitComet Banned From Growing Number of Private Trackers" (http://www.slyck.com/news.php?story1021). Retrieved 2009-10-04.
17. ^ a b Tamilmani, Karthik (2003-10-25). "Studying and enhancing the BitTorrent protocol" (http://web.archive.org/web/20041119150847/http://mnl.cs.stonybrook.edu/home/karthik/BitTorrent/RobustnessofBT.doc) (DOC). Stony Brook University. Archived from the original (http://mnl.cs.stonybrook.edu/home/karthik/BitTorrent/RobustnessofBT.doc) on 2004-11-19. Retrieved 2006-05-06.
18. ^ "Unraveling BitTorrent's File Unavailability: Measurements and Analysis" (http://eprints.comp.lancs.ac.uk/2281/1/P2P10.pdf) by Sebastian Kaune, Ruben Cuevas Rumin, Gareth Tyson, Andreas Mauthe, Ralf Steinmetz.
19. ^ "Content Availability and Bundling in Swarming Systems" (http://conferences.sigcomm.org/co-next/2009/papers/Menasche.pdf) by D. Menasche, A. Rocha, B. Li, D. Towsley, A. Venkataramani.
20. ^ "The Seeder Promotion Problem: Measurements, Analysis and Solution Space" (http://www.dcs.kcl.ac.uk/staff/tysong/files/ICCCN09.pdf) by Sebastian Kaune, Gareth Tyson, Konstantin Pussep, Andreas Mauthe, Aleksandra Kovacevic and Ralf Steinmetz.
21. ^ See, for example, "Why Bit Torrent" (http://tasvideos.org/WhyBitTorrent.html) at tasvideos.org.
22. ^ "Sub Pop page on BitTorrent.com" (http://www.bittorrent.com/users/subpoprecords/). Retrieved 2006-12-13.
23. ^ "DGMlive.com" (http://www.dgmlive.com/help.htm#whatisbittorrent). DGMlive.com. Retrieved 2012-07-09.
24. ^ "VODO - About" (http://vo.do/about). Accessed 2012-04-15. (Archived by WebCite at http://www.webcitation.org/66wxu53jV)
25. ^ Cory Doctorow. "Vodo: a filesharing service for film-makers - Boing Boing" (http://boingboing.net/2009/10/15/vodo-a-filesharing-s.html). Happy Mutants LLC. Accessed 2012-04-15. (Archived by WebCite at http://www.webcitation.org/66wy0PFq1)
26. ^ Ernesto. "Pioneer One, The BitTorrent Exclusive TV-Series Continues" (https://torrentfreak.com/pioneer-one-the-bittorrent-exclusive-tv-series-continues-101215/). TorrentFreak. Accessed 2012-04-15. (Archived by WebCite at http://www.webcitation.org/66wyOriIB)
27. ^ "CBC to BitTorrent Canada's Next Great Prime Minister" (http://www.cbc.ca/nextprimeminister/blog/2008/03/canadasnextgreatprimeminis.html). CBC News. 19 March 2008. Retrieved 2008-03-19.
28. ^ "Bittorrent" (http://nrkbeta.no/bittorrent/). Nrkbeta.no. Retrieved 2012-07-09.
29. ^ "Torrents uploaded by EeuwvandeStad" (http://www.mininova.org/user/EeuwvandeStad).
30. ^ Rustad, Roger E. (26 August 2004). "Blog Torrent and Participatory Culture" (http://grep.law.harvard.edu/article.pl?sid04/08/26/0236209). Grep Law. Retrieved 2006-05-09.
31. ^ "Blizzard Downloader" (http://www.wowpedia.org/BlizzardDownloader). Curse Inc. 4 November 2010. Retrieved 2010-11-04.
32. ^ "Complete Download Options List - BitTorrent" (http://www.ubuntu.com/getubuntu/downloadmirrors#bt). Retrieved 2009-05-07.
33. ^ HM Government (4). "Combined Online Information System" (http://data.gov.uk/dataset/coins). Data.Gov.Uk Beta. Controller of Her Majesty's Stationery Office. Retrieved 7 September 2012.
34. ^ Ernesto (4). "UK Government Uses BitTorrent to Share Public Spending Data" (http://torrentfreak.com/uk-government-uses-bittorrent-to-share-public-spending-data-100604/). TorrentFreak. Retrieved 7 September 2012.
35. ^ "HPC Data Repository" (http://www.hpc.fsu.edu/index.php?optioncomwrapper&viewwrapper&Itemid80).
36. ^ Ernesto (25). "Facebook Uses BitTorrent, and They Love It" (http://torrentfreak.com/facebook-uses-bittorrent-and-they-love-it-100625/). TorrentFreak. Retrieved 7 September 2012.
37. ^ Ernesto (10). "Twitter Uses BitTorrent For Server Deployment" (http://torrentfreak.com/twitter-uses-bittorrent-for-server-deployment-100210/). TorrentFreak. Retrieved 7 September 2012.
38. ^ Ernesto (16). "BitTorrent Makes Twitter's Server Deployment 75x Faster" (http://torrentfreak.com/bittorrent-makes-twitters-server-deployment-75-faster-100716/). TorrentFreak. Retrieved 7 September 2012.
39. `
a

b
"Internet Archive Starts Seeding 1,398,875 Torrents" (https://torrentIreak.com/internet-archive-starts-
seeding-1398635-torrents-120807/) . TorrentFreak. August 7, 2012. https://torrentIreak.com/internet-archive-
starts-seeding-1398635-torrents-120807/. Retrieved August 7, 2012.
40. ^ Hot List for bt1.us.archive.org (Updated August 7 2012, 7:31 pm PDT) (http://bt1.archive.org/hotlist.php) . Archive.org.
41. ^ "Welcome to Archive torrents" (http://archive.org/details/bittorrent) . Archive.org.
42. ^ "financialpost.com" (http://business.financialpost.com/2011/07/01/bittorrent-turns-ten/) . Business.financialpost.com. http://business.financialpost.com/2011/07/01/bittorrent-turns-ten/. Retrieved 2012-07-09.
43. ^ Ellis, Leslie (8 May 2006). "BitTorrent's Swarms Have a Deadly Bite On Broadband Nets" (http://www.multichannel.com/article/CA6332098.html) . Multichannel News. http://www.multichannel.com/article/CA6332098.html. Retrieved 2006-05-08.
44. ^ Pasick, Adam (4 November 2004). "LiveWire - File-sharing network thrives beneath the radar" (http://www.interesting-people.org/archives/interesting-people/200411/msg00078.html) . Yahoo! News. http://www.interesting-people.org/archives/interesting-people/200411/msg00078.html. Retrieved 2006-05-09.
45. ^ "RCCSP" (http://www.the-resource-center.com/index/telecom_seminars.htm) . The-resource-center.com. 2012-06-11. http://www.the-resource-center.com/index/telecom_seminars.htm. Retrieved 2012-07-09.
46. ^ "uTorrent's FAQ page" (http://www.utorrent.com/faq.php#Modems_routers_that_are_known_to_have_problems_with_P2P) . http://www.utorrent.com/faq.php#Modems_routers_that_are_known_to_have_problems_with_P2P.
47. ^ "PublicBT Tracker Set To Patch BitTorrent's Achilles' Heel" (http://torrentfreak.com/publicbt-tracker-set-to-patch-bittorrents-achilles-heel-090712/) . 12 July 2009. http://torrentfreak.com/publicbt-tracker-set-to-patch-bittorrents-achilles-heel-090712/. Retrieved 14 July 2009.
48. ^ Worthington, David; Nate Mook (25 May 2005). "BitTorrent Creator Opens Online Search"
Exhibit 18 Page140
(http://www.betanews.com/article/BitTorrent_Creator_Opens_Online_Search/1117065427) . BetaNews. http://www.betanews.com/article/BitTorrent_Creator_Opens_Online_Search/1117065427. Retrieved 2006-05-09.
49. ^ "Vuze Changelog" (http://azureus.sourceforge.net/changelog.php) . Azureus.sourceforge.net. http://azureus.sourceforge.net/changelog.php.
50. ^ "Khashmir.SourceForge.net" (http://khashmir.sourceforge.net/) . Khashmir.SourceForge.net. http://khashmir.sourceforge.net/. Retrieved 2012-07-09.
51. ^ "Azureus.sourceforge.net" (http://azureus.sourceforge.net/plugin_details.php?plugin=mlDHT) . Azureus.sourceforge.net. http://azureus.sourceforge.net/plugin_details.php?plugin=mlDHT. Retrieved 2012-07-09.
52. ^ "HTTP-Based Seeding Specification" (http://bittornado.com/docs/webseed-spec.txt) (TXT). http://bittornado.com/docs/webseed-spec.txt. Retrieved 2006-05-09.
53. ^ John Hoffman, DeHackEd (2008-02-25). "HTTP Seeding - BitTorrent Enhancement Proposal 17" (http://www.bittorrent.org/beps/bep_0017.html) . http://www.bittorrent.org/beps/bep_0017.html. Retrieved 2012-02-17.
54. ^ "The Torrent Entertainment Network has closed" (http://www.bittorrent.com/btusers/nowplaying/?) . http://www.bittorrent.com/btusers/nowplaying/?.
55. ^ "Publish BitTorrent" (http://web.archive.org/web/20070526065412/http://www.bittorrent.com/publish) . Archived from the original (http://www.bittorrent.com/publish) on 2007-05-26. http://web.archive.org/web/20070526065412/http://www.bittorrent.com/publish. (archived page from May 26, 2007, web.archive.org)
56. ^ "HTTP/FTP Seeding for BitTorrent" (http://www.getright.com/seedtorrent.html) . http://www.getright.com/seedtorrent.html. Retrieved 2010-03-18.
57. ^ Michael Burford (2008-02-25). "WebSeed - HTTP/FTP Seeding (GetRight style) BitTorrent Enhancement Proposal 19" (http://www.bittorrent.org/beps/bep_0019.html) . http://www.bittorrent.org/beps/bep_0019.html. Retrieved 2012-02-17.
58. ^ "Burn Any Web-Hosted File into a Torrent With Burnbit" (http://torrentfreak.com/burn-any-web-hosted-file-into-a-torrent-with-burnbit-100913/) . TorrentFreak. 2010-09-13. http://torrentfreak.com/burn-any-web-hosted-file-into-a-torrent-with-burnbit-100913/. Retrieved 2012-07-09.
59. ^ "PHP based torrent file creator, tracker and seed server" (http://php-tracker.org/) . PHPTracker. http://php-tracker.org/. Retrieved 2012-07-09.
60. ^ Gillmore, Steve. BitTorrent and RSS Create Disruptive Revolution (http://www.eweek.com/article2/0,1895,1413403,00.asp) EWeek.com, 13 December 2003. Retrieved on 22 April 2007.
61. ^ Corante.com (http://www.corante.com/importance/)
62. ^ Raymond, Scott: Broadcatching with BitTorrent (http://web.archive.org/web/20040213093750/http://scottraymond.net/archive/4745) . scottraymond.net: 16 December 2003.
63. ^ "Move Digital REST API" (http://www.movedigital.com/docs/index.php/Move_Digital_API) . Move Digital. http://www.movedigital.com/docs/index.php/Move_Digital_API. Retrieved 2006-05-09. Documentation.
64. ^ "Prodigem Enclosure Puller (pep.txt)" (http://web.archive.org/web/20060526130219/http://prodigem.com/code/pep/pep.txt) (TXT). Prodigem.com. Archived from the original (http://prodigem.com/code/pep/pep.txt) on 2006-05-26. http://web.archive.org/web/20060526130219/http://prodigem.com/code/pep/pep.txt. Retrieved 2006-05-09. via Internet Wayback Machine.
65. ^ "Encrypting Bittorrent to take out traffic shapers" (http://torrentfreak.com/encrypting-bittorrent-to-take-out-traffic-shapers/) . TorrentFreak.com. 2006-02-05. http://torrentfreak.com/encrypting-bittorrent-to-take-out-traffic-shapers/. Retrieved 2006-05-09.
66. ^ Sales, Ben (September 2006). "ResTech solves network issues" (http://www.studlife.com/archives/News/2006/09/27/ResTechsolvesnetworkissues/) . studlife.com. http://www.studlife.com/archives/News/2006/09/27/ResTechsolvesnetworkissues/.
67. ^ Comcast Throttles BitTorrent Traffic, Seeding Impossible (http://torrentfreak.com/comcast-throttles-bittorrent-traffic-seeding-impossible/) , TorrentFreak, 17 August 2007
68. ^ Broache, Anne (2008-03-27). "Comcast and Bittorrent Agree to Collaborate" (http://www.news.com/8301-10784_3-9904494-7.html) . News.com. http://www.news.com/8301-10784_3-9904494-7.html. Retrieved 2012-07-09.
69. ^ Soghoian, Chris (2007-09-04). "Is Comcast's BitTorrent filtering violating the law?" (http://www.cnet.com/8301-13739_1-9769645-46.html) . Cnet.com. http://www.cnet.com/8301-13739_1-9769645-46.html. Retrieved 2012-07-09.
70. ^ "Multitracker Metadata Entry Specification" (http://www.bittornado.com/docs/multitracker-spec.txt) (TXT). Bittornado.com. http://www.bittornado.com/docs/multitracker-spec.txt. Retrieved 2006-05-09.
71. ^ Called MultiTorrents by indexing website myBittorrent.com (http://www.mybittorrent.com/)
72. ^ "P2P:Protocol:Specifications:Multitracker" (http://wiki.depthstrike.com/index.php/P2P:Protocol:Specifications:Multitracker#Bad_Implementations) . wiki.depthstrike.com. http://wiki.depthstrike.com/index.php/P2P:Protocol:Specifications:Multitracker#Bad_Implementations. Retrieved 2009-11-13.
73. ^ "DecentralizedRecommendation" (https://www.tribler.org/DecentralizedRecommendation) . Tribler.org. https://www.tribler.org/DecentralizedRecommendation. Retrieved 2012-07-09.
74. ^ "Hyperspaces for Object Clustering and Approximate Matching in Peer-to-Peer Overlays" (http://www.cs.cornell.edu/People/egs/papers/hyperspaces.pdf) (PDF). Cornell University. http://www.cs.cornell.edu/People/egs/papers/hyperspaces.pdf. Retrieved 2008-05-26.
75. ^ "Cubit: Approximate Matching for Peer-to-Peer Overlays" (http://www.cs.cornell.edu/~bwong/cubit/index.html) . Cornell University. http://www.cs.cornell.edu/~bwong/cubit/index.html. Retrieved 2008-05-26.
76. ^ "Approximate Matching for Peer-to-Peer Overlays with Cubit" (http://www.cs.cornell.edu/~bwong/cubit/tr-cubit.pdf) (PDF). Cornell University. http://www.cs.cornell.edu/~bwong/cubit/tr-cubit.pdf. Retrieved 2008-05-26.
77. ^ Torrent Exchange (http://wiki.bitcomet.com/TorrentExchange) . The torrent sharing feature of BitComet. Retrieved 2010-01-31.
78. ^ Van Der Sar, Ernesto. "Thunder Blasts uTorrent's Market Share Away" (http://torrentfreak.com/thunder-blasts-utorrents-market-share-away-091204/) . TorrentFreak. Archived (http://www.webcitation.org/61iuRToZn) from the original on 2011-09-15. http://torrentfreak.com/thunder-blasts-utorrents-market-share-away-091204/. Retrieved 2011-09-15.
79. ^ "Torrent Server combines a file server with P2P file sharing" (http://www.turnkeylinux.org/torrentserver) . Turnkeylinux.org. http://www.turnkeylinux.org/torrentserver. Retrieved 2012-07-09.
80. ^ Anderson, Nate (1 February 2007). "Does network neutrality mean an end to BitTorrent throttling?" (http://arstechnica.com/news.ars/post/20070201-8750.html) . Ars Technica, LLC. http://arstechnica.com/news.ars/post/20070201-8750.html. Retrieved 2007-02-09.
81. ^ Himabindu Pucha, David G. Andersen, Michael Kaminsky (April 2007). "Exploiting Similarity for Multi-Source Downloads Using File Handprints" (http://www.cs.cmu.edu/~dga/papers/nsdi2007-set/) . Purdue University, Carnegie Mellon University, Intel Research Pittsburgh. http://www.cs.cmu.edu/~dga/papers/nsdi2007-set/. Retrieved 2007-04-15.
82. ^ "Speed boost plan for file-sharing" (http://news.bbc.co.uk/2/hi/technology/6544919.stm) . BBC News. 12 April 2007. http://news.bbc.co.uk/2/hi/technology/6544919.stm. Retrieved 2007-04-21.
83. ^ Johnston, Casey (2008-12-09). "Arstechnica.com" (http://arstechnica.com/news.ars/post/20081209-bittorrent-has-new-plan-to-shape-up-p2p-behavior.html) . Arstechnica.com. http://arstechnica.com/news.ars/post/20081209-bittorrent-has-new-plan-to-shape-up-p2p-behavior.html. Retrieved 2012-07-09.
84. ^ "The Piratebay is Down: Raided by the Swedish Police" (http://torrentfreak.com/the-piratebay-is-down-raided-by-the-swedish-police/) . TorrentFreak. 31 May 2006. http://torrentfreak.com/the-piratebay-is-down-raided-by-the-swedish-police/. Retrieved 2007-05-20.
85. ^ "Technical report: An Estimate of Infringing Use of the Internet" (http://documents.envisional.com/docs/Envisional-Internet_Usage-Jan2011.pdf) . Envisional. 2011-01-01. http://documents.envisional.com/docs/Envisional-Internet_Usage-Jan2011.pdf. Retrieved 2012-05-06.
86. ^ "Piracy Volume in 2011" (http://ethicalfan.com/?p=78) . Ethical Fan. 2012-04-29. http://ethicalfan.com/?p=78. Retrieved 2012-05-06.
87. ^ Albanesius, Chloe (2012-04-30). "U.K. High Court Orders ISPs to Block The Pirate Bay" (http://www.pcmag.com/article2/0,2817,2403749,00.asp) . PC Magazine. http://www.pcmag.com/article2/0,2817,2403749,00.asp. Retrieved 2012-05-06.
88. ^ "Searching for Malware in Bit Torrent" (http://www.docstoc.com/docs/14960461/Searching-for-Malware-in-Bit-Torrent) . http://www.docstoc.com/docs/14960461/Searching-for-Malware-in-Bit-Torrent.
89. ^ Håvard Vegge, Finn Michael Halvorsen and Rune Walsø Nergård (2009), "Where Only Fools Dare to Tread: An Empirical Study on the Prevalence of Zero-day Malware", 2009 Fourth International Conference on Internet Monitoring and Protection.
Further reading
Pouwelse, Johan; et al. (2005). "The Bittorrent P2P File-Sharing System: Measurements and Analysis" (http://books.google.com/books?id=Dnw7E8xzQUMC&lpg=PA205&pg=PA205#v=onepage&q&f=false) . Peer-to-Peer Systems IV. Berlin: Springer. pp. 205-216. doi:10.1007/11558989_19 (http://dx.doi.org/10.1007%2F11558989_19) . ISBN 978-3-540-29068-1. Retrieved September 4, 2011.
External links
Official BitTorrent website (http://www.bittorrent.com/)
Official BitTorrent Specification (http://www.bittorrent.org/beps/bep_0003.html)
BitTorrent (http://www.dmoz.org/Computers/Internet/File_Sharing/BitTorrent/) at the Open Directory Project
Interview with chief executive Ashwin Navin (http://streaming.scmp.com/podcasting/upload/NewsBitTorrentjune15.mp3)
Unofficial BitTorrent Protocol Specification v1.0 (http://wiki.theory.org/BitTorrentSpecification) at wiki.theory.org
Unofficial BitTorrent Location-aware Protocol 1.0 Specification (http://wiki.theory.org/BitTorrentLocation-awareProtocol1.0Specification) at wiki.theory.org
Michal Czerniawski, Responsibility of Bittorrent Search Engines for Copyright Infringements (http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1540913) , at SSRN (December 2009)
Under the hood of BitTorrent (http://www.stanford.edu/class/ee380/Abstracts/050216.html) - lecture given by BitTorrent protocol designer Bram Cohen at Stanford University (video archive (http://stanford-online.stanford.edu/courses/ee380/050216-ee380-100.asx) ).
Tiny perl script to view contents inside torrent files (http://wiki.gotux.net/downloads/btview)
Retrieved from "http://en.wikipedia.org/w/index.php?title=BitTorrent&oldid=534079799"
Categories: Application layer protocols | BitTorrent | Computer file formats | File sharing networks | 2001 introductions
This page was last modified on 21 January 2013 at 00:20.
Text is available under the Creative Commons Attribution-ShareAlike License; additional terms may apply. See Terms of Use for details.
Wikipedia is a registered trademark of the Wikimedia Foundation, Inc., a non-profit organization.
Exhibit 18 Page144