Web Agreements Must Address User-Generated Content in Order to Receive
Section 230 Immunity
June 8th, 2009

So many websites today are driven by user-generated content. While all websites should have website agreements, such as terms of use, a privacy policy, and a copyright policy, websites that allow user-generated content should make absolutely sure they have them. For sites with UGC, website owners need to ensure that they are protected by Section 230 of the Communications Decency Act where possible. In particular, the following should be considered:

• User-generated content ownership
• User-generated content rights of use
• What kind of user-generated content will be allowed, in particular noting that only lawful content is permitted
• How unlawful user-generated content will be handled
• Whether the website operator can edit or otherwise have a role in the user-generated content.

Not only is it important to specifically address each of the above, but it is also important that the website operator understands what it may and may not do in order to ensure that it is deemed an interactive computer service provider rather than a content provider. This distinction is particularly important when considering immunity under the Communications Decency Act (CDA). Thus, the website operator’s actual role and involvement should match what is delineated in its website agreements. For example, if it claims that it has no duty to review site content such as user-generated content, then it should not undertake an effort to do so. Similarly, should the website operator comment on user-generated content, it may cross the line into being deemed a content provider and thus forfeit its immunity under the CDA.

Ultimately, these considerations need to be addressed when developing the website, drafting the website agreements, and actually providing the service. An experienced internet lawyer who understands the law and the latest decisions interpreting it, including those under the Communications Decency Act, should be consulted where possible.
http://section230communicationsdecencyact1996.com/web-agreements-must-address-user-generated-content/2009/06/#more-22

User Generated Content: Section 230 Immunity Exception Requires That You
Not Be An Information Content Provider
May 29th, 2009

Section 230 of the Communications Decency Act of 1996, commonly referred to simply as Section 230 by those in the internet law space, provides immunity from civil liability for both providers and users of “an interactive computer service” that publish information provided by others. This essentially shields web hosts from liability for user-generated content that may be deemed unlawful. As we learned in the Fair Housing Council of San Fernando Valley v. Roommates.com case, as well as subsequent decisions, in order to avail oneself of the immunity offered under Section 230, the defendant must not be an “information content provider”.

If the defendant is in fact deemed an information content provider, it will no longer be able to claim immunity under Section 230. Instead, where a defendant requires its users to provide discriminatory information as a condition of access, rather than simply providing neutral tools that a user might employ to perform an unlawful or illicit act through user-generated content or otherwise, liability may result.

It is worth noting that Section 230 does not necessarily provide immunity as it relates to intellectual property law. For example, there is no immunity for contributory liability for trademark infringement, right of publicity claims, and other related claims. Ultimately, should an entity qualify as an information content provider with respect to the information at issue, Section 230 immunity will not apply. Thus, it is critical to understand what an ISP, web host, or other service provider must do in order to shield itself from liability in connection with user-generated content.

Section 230 Communications Decency Act Does Not Provide An Absolute Immunity
July 2nd, 2009

Internet service providers often cite Section 230 of the Communications Decency Act as an absolute shield from liability for content on websites they operate. However, Section 230 carries with it numerous exceptions.

Eric Goldman at the Technology & Marketing Law Blog has identified three examples where websites will always remain liable for first-party content.

The first is where the website operator posts its own content. In that situation it is held liable because it is not simply a computer service provider but rather a content provider. Just as a user can be responsible for the user-generated content it provides, the website operator can be responsible and liable for the content it provides on a website.

The second example pertains to marketing representations. Where the website operator makes marketing representations, it may be liable under laws relating to contract or false advertising. These situations require the most in-depth analysis and legal knowledge.

Finally, the recent case of Barnes v. Yahoo held that a website may be liable under promissory estoppel if it promises, and therefore assumes an obligation, to remove third-party content and fails to do so. The question becomes how far-reaching this decision will be. For example, will a website operator be held liable when it promises to alter user-generated content or to provide clarification of its own pertaining to that content? Issues such as this are likely to arise, and a review of the specific facts pertaining to the user-generated content or the website operator’s actions is critical.

Ultimately, internet service providers that allow user-generated content must understand the limits of Section 230 immunity. Contact an experienced internet lawyer today to understand the application of the Communications Decency Act to your particular matter.

GOOGLE marked its 20th birthday this week. It
celebrated in fitting style—being lambasted by
politicians in Washington. Its failure to send a senior
executive to a congressional hearing, over Russian use
of tech platforms to meddle in the presidential election
in 2016, was tone-deaf. Like Facebook and Twitter,
whose top brass did show up, Google wields too much
influence to avoid public scrutiny. A vital debate is
under way over whether and how tech platforms should
be held responsible for the content they carry. Angering
legislators increases the chance of a bad outcome.
Back when Google, Facebook, Twitter and others were
babies, the answer that politicians gave on the question
of content liability was clear. Laws such as America’s
Communications Decency Act (CDA), passed in 1996,
largely shielded online firms from responsibility for
their users’ actions. Lawmakers reasoned that the
fledgling online industry needed to be protected from
costly lawsuits. They were to be thought of more as
telecoms providers, neutral venues on which customers
could communicate with each other.
That position is hard to maintain today. Online giants
no longer need protection: they are among the world’s
most successful and influential firms. Nearly half of
American adults get some of their news on Facebook;
YouTube, Google’s video-streaming service, has 1.9bn
monthly logged-on users, who watch around 1bn hours
of video every day. To complaints about trolling, fake
news and extremist videos, the old defence of neutrality
rings hollow. The platforms’ algorithms curate the flow
of content; they help decide what users see.
The pendulum is thus swinging the other way.
Lawmakers are eroding the idea that the platforms have
no responsibility for content. Earlier this year America
passed SESTA, which has the worthy aim of
cracking down on sex trafficking; the Department of
Justice this week said it would look into the platforms’
impact on free speech. In Germany the platforms have
strict deadlines to take down hate speech. The tech
giants themselves increasingly accept responsibility for
what appears on their pages, hiring armies of
moderators to remove offending content.
This new interventionism carries two big dangers. One
is that it will entrench the dominance of the giants,
because startups will not be able to afford the burden of
policing their platforms or to shoulder the risk of
lawsuits. The other is that the tech titans become
“ministries of truth”, acting as arbiters of what
billions of people around the world see—and what
they do not. This is no idle worry. Facebook and
YouTube have banned Alex Jones, a notorious
peddler of conspiracy theories. Loathsome as Mr
Jones’s ideas are, defenders of free speech ought to
squirm at the notion that a small set of like-
minded executives in Silicon Valley are deciding
what is seen by an audience of billions.
The weight given to free speech and the
responsibilities of the platforms vary between
countries. But three principles ought to guide the
actions of legislators and the platforms
themselves. The first is that free speech comes in
many flavours. The debate over the platforms is a
melange of concerns, from online bullying to
political misinformation. These worries demand
different responses. The case for holding the tech
firms directly responsible for what they carry is
clear for illegal content. Content that may be
deemed political is far harder to deal with—the
risk is both that platforms host material that is
beyond the pale and take down material that
should be aired.
The second is that it is wrong to try to engineer a
particular outcome for content. Trying to achieve a
balanced news feed, say, is not simply antithetical to the
giants’ business model, which promises personalised
content. It is also a definitional quagmire, in which
honest differences of political view must be categorised.
Tech firms would be forced to act as censors. It would
be better to make platforms accountable for their
procedures: clarify the criteria applied to restrict
content; recruit advisory bodies and user
representatives to help ensure that these criteria are
applied; give users scope to appeal against decisions.
They also need to open their algorithms and data to
independent scrutiny, under controlled conditions.
Only then can society evaluate whether a platform is
discriminating against content or whether material
causes harm.
Arbiters and arbitrariness
The third principle is that small firms should be treated
differently from large ones. The original rationale of the
CDA made sense, but the firms that need protection
now are those that seek to challenge the big tech
platforms. If rules are drawn up to impose liability on
online firms, they ought to contain exemptions for
those below a certain size and reach. Google and its
confrères have achieved extraordinary things in their
short lives. But their bosses would be getting a lot less
heat from Capitol Hill if they had more competition.
This article appeared in the Leaders section of the print edition under the headline "Truth and
power"

https://www.economist.com/leaders/2018/09/08/should-the-tech-giants-be-liable-for-content
